Test Report: QEMU_macOS 18793

e5d92f0c4d7ea091f043b7a68a980727ecf8401d:2024-05-03:34314

Failed tests (156/258)

Order  Failed test  Duration (s)
3 TestDownloadOnly/v1.20.0/json-events 11.12
7 TestDownloadOnly/v1.20.0/kubectl 0
22 TestOffline 9.99
27 TestAddons/Setup 10.26
28 TestCertOptions 10.16
29 TestCertExpiration 195.3
30 TestDockerFlags 10.34
31 TestForceSystemdFlag 10.28
32 TestForceSystemdEnv 10.18
38 TestErrorSpam/setup 9.91
47 TestFunctional/serial/StartWithProxy 10.01
49 TestFunctional/serial/SoftStart 5.26
50 TestFunctional/serial/KubeContext 0.06
51 TestFunctional/serial/KubectlGetPods 0.06
58 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.04
59 TestFunctional/serial/CacheCmd/cache/cache_reload 0.17
61 TestFunctional/serial/MinikubeKubectlCmd 0.64
62 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.96
63 TestFunctional/serial/ExtraConfig 5.28
64 TestFunctional/serial/ComponentHealth 0.06
65 TestFunctional/serial/LogsCmd 0.08
66 TestFunctional/serial/LogsFileCmd 0.08
67 TestFunctional/serial/InvalidService 0.03
70 TestFunctional/parallel/DashboardCmd 0.2
73 TestFunctional/parallel/StatusCmd 0.16
77 TestFunctional/parallel/ServiceCmdConnect 0.14
79 TestFunctional/parallel/PersistentVolumeClaim 0.04
81 TestFunctional/parallel/SSHCmd 0.14
82 TestFunctional/parallel/CpCmd 0.3
84 TestFunctional/parallel/FileSync 0.08
85 TestFunctional/parallel/CertSync 0.29
89 TestFunctional/parallel/NodeLabels 0.06
91 TestFunctional/parallel/NonActiveRuntimeDisabled 0.05
95 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.08
98 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 0
99 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 97.54
100 TestFunctional/parallel/ServiceCmd/DeployApp 0.03
101 TestFunctional/parallel/ServiceCmd/List 0.05
102 TestFunctional/parallel/ServiceCmd/JSONOutput 0.04
103 TestFunctional/parallel/ServiceCmd/HTTPS 0.04
104 TestFunctional/parallel/ServiceCmd/Format 0.05
105 TestFunctional/parallel/ServiceCmd/URL 0.05
113 TestFunctional/parallel/Version/components 0.04
114 TestFunctional/parallel/ImageCommands/ImageListShort 0.04
115 TestFunctional/parallel/ImageCommands/ImageListTable 0.04
116 TestFunctional/parallel/ImageCommands/ImageListJson 0.04
117 TestFunctional/parallel/ImageCommands/ImageListYaml 0.04
118 TestFunctional/parallel/ImageCommands/ImageBuild 0.12
120 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.32
121 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 1.34
122 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 3.11
123 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.04
125 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.07
127 TestFunctional/parallel/DockerEnv/bash 0.05
128 TestFunctional/parallel/UpdateContextCmd/no_changes 0.04
129 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.04
130 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.05
131 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 15.07
133 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 26.09
141 TestMultiControlPlane/serial/StartCluster 10.08
142 TestMultiControlPlane/serial/DeployApp 106.84
143 TestMultiControlPlane/serial/PingHostFromPods 0.09
144 TestMultiControlPlane/serial/AddWorkerNode 0.08
145 TestMultiControlPlane/serial/NodeLabels 0.06
146 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.1
147 TestMultiControlPlane/serial/CopyFile 0.06
148 TestMultiControlPlane/serial/StopSecondaryNode 0.11
149 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.11
150 TestMultiControlPlane/serial/RestartSecondaryNode 44.78
151 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.11
152 TestMultiControlPlane/serial/RestartClusterKeepsNodes 8.31
153 TestMultiControlPlane/serial/DeleteSecondaryNode 0.11
154 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.11
155 TestMultiControlPlane/serial/StopCluster 2.13
156 TestMultiControlPlane/serial/RestartCluster 5.25
157 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.11
158 TestMultiControlPlane/serial/AddSecondaryNode 0.08
159 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.11
162 TestImageBuild/serial/Setup 9.92
165 TestJSONOutput/start/Command 9.87
171 TestJSONOutput/pause/Command 0.08
177 TestJSONOutput/unpause/Command 0.05
194 TestMinikubeProfile 10.34
197 TestMountStart/serial/StartWithMountFirst 10.21
200 TestMultiNode/serial/FreshStart2Nodes 10.24
201 TestMultiNode/serial/DeployApp2Nodes 95.35
202 TestMultiNode/serial/PingHostFrom2Pods 0.09
203 TestMultiNode/serial/AddNode 0.08
204 TestMultiNode/serial/MultiNodeLabels 0.06
205 TestMultiNode/serial/ProfileList 0.1
206 TestMultiNode/serial/CopyFile 0.07
207 TestMultiNode/serial/StopNode 0.15
208 TestMultiNode/serial/StartAfterStop 57.51
209 TestMultiNode/serial/RestartKeepsNodes 8.32
210 TestMultiNode/serial/DeleteNode 0.11
211 TestMultiNode/serial/StopMultiNode 3.57
212 TestMultiNode/serial/RestartMultiNode 5.26
213 TestMultiNode/serial/ValidateNameConflict 20.13
217 TestPreload 10.01
219 TestScheduledStopUnix 10.16
220 TestSkaffold 12.57
223 TestRunningBinaryUpgrade 592.79
225 TestKubernetesUpgrade 18.8
238 TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current 1.21
239 TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current 0.97
241 TestStoppedBinaryUpgrade/Upgrade 576.51
243 TestPause/serial/Start 9.85
253 TestNoKubernetes/serial/StartWithK8s 9.86
254 TestNoKubernetes/serial/StartWithStopK8s 5.32
255 TestNoKubernetes/serial/Start 5.32
259 TestNoKubernetes/serial/StartNoArgs 5.33
261 TestNetworkPlugins/group/auto/Start 9.79
262 TestNetworkPlugins/group/flannel/Start 9.98
263 TestNetworkPlugins/group/kindnet/Start 9.86
264 TestNetworkPlugins/group/enable-default-cni/Start 9.86
265 TestNetworkPlugins/group/bridge/Start 9.78
266 TestNetworkPlugins/group/kubenet/Start 9.93
267 TestNetworkPlugins/group/custom-flannel/Start 9.87
268 TestNetworkPlugins/group/calico/Start 9.93
269 TestNetworkPlugins/group/false/Start 9.73
271 TestStartStop/group/old-k8s-version/serial/FirstStart 10.13
273 TestStartStop/group/old-k8s-version/serial/DeployApp 0.09
274 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 0.12
277 TestStartStop/group/old-k8s-version/serial/SecondStart 5.24
278 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 0.03
279 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 0.06
280 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.08
281 TestStartStop/group/old-k8s-version/serial/Pause 0.13
283 TestStartStop/group/no-preload/serial/FirstStart 9.86
284 TestStartStop/group/no-preload/serial/DeployApp 0.09
285 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.12
288 TestStartStop/group/no-preload/serial/SecondStart 5.26
290 TestStartStop/group/embed-certs/serial/FirstStart 9.97
291 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 0.03
292 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 0.06
293 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.07
294 TestStartStop/group/no-preload/serial/Pause 0.1
296 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 9.83
297 TestStartStop/group/embed-certs/serial/DeployApp 0.09
298 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.12
301 TestStartStop/group/embed-certs/serial/SecondStart 6.64
302 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 0.09
303 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.12
306 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 5.26
307 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 0.03
308 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 0.06
309 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.07
310 TestStartStop/group/embed-certs/serial/Pause 0.1
312 TestStartStop/group/newest-cni/serial/FirstStart 9.9
313 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 0.03
314 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 0.06
315 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.07
316 TestStartStop/group/default-k8s-diff-port/serial/Pause 0.11
321 TestStartStop/group/newest-cni/serial/SecondStart 5.26
324 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.08
325 TestStartStop/group/newest-cni/serial/Pause 0.11
TestDownloadOnly/v1.20.0/json-events (11.12s)

=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -o=json --download-only -p download-only-988000 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=qemu2 
aaa_download_only_test.go:81: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -o=json --download-only -p download-only-988000 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=qemu2 : exit status 40 (11.118758084s)

-- stdout --
	{"specversion":"1.0","id":"0c370a9e-f007-4764-b50d-fea98a8215fa","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[download-only-988000] minikube v1.33.0 on Darwin 14.4.1 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"46147c01-4014-4f47-b691-84cd9a408c71","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=18793"}}
	{"specversion":"1.0","id":"bd15bf2f-e02c-4986-b290-5e178fba238c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/18793-7269/kubeconfig"}}
	{"specversion":"1.0","id":"73df29b7-69b3-4fb8-8e58-1d03591384ef","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"80c7ee6a-ebca-472c-9bad-05ce31a18e49","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"6dd6ef14-b1e0-4245-9fa8-50b83faf8adc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/18793-7269/.minikube"}}
	{"specversion":"1.0","id":"bb2cbe2e-0a4f-42e9-b4aa-923266937119","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.warning","datacontenttype":"application/json","data":{"message":"minikube skips various validations when --force is supplied; this may lead to unexpected behavior"}}
	{"specversion":"1.0","id":"3ef83b21-6714-4e6b-acbb-9bcb9425fa92","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the qemu2 driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"4de062e3-1354-4938-975d-39655b4d70c7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Automatically selected the socket_vmnet network"}}
	{"specversion":"1.0","id":"f3bea3a6-b312-4b1d-b028-fc1f7ad8b73f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Downloading VM boot image ...","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"075af746-6d8d-4806-b689-7266448a3ac6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"download-only-988000\" primary control-plane node in \"download-only-988000\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"496689fc-9617-4f21-928d-41e33efdb1d2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Downloading Kubernetes v1.20.0 preload ...","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"bfd334a0-8e60-428e-b2c7-5f8e5cc9a793","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"40","issues":"","message":"Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256: getter: \u0026{Ctx:context.Background Src:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/18793-7269/.minikube/cache/darwin/arm64/v1.20.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x106e4ce00 0x106e4ce00 0x106e4ce00 0x106e4ce00 0x106e4ce00 0x106e4ce00 0x106e4ce00] Decompressors:map[bz2:0x14000765020 gz:0x14000765028 tar:0x14000764fd0 tar.bz2:0x14000764fe0 tar.gz:0x14000764ff0 tar.xz:0x14000765000 tar.zst:0x14000765010 tbz2:0x14000764fe0 tgz:0x14
000764ff0 txz:0x14000765000 tzst:0x14000765010 xz:0x14000765030 zip:0x14000765040 zst:0x14000765038] Getters:map[file:0x140021a4560 http:0x1400070a190 https:0x1400070a1e0] Dir:false ProgressListener:\u003cnil\u003e Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404","name":"INET_CACHE_KUBECTL","url":""}}
	{"specversion":"1.0","id":"4483af22-91b0-4ed6-9f35-477a4d411b87","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│
│\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

-- /stdout --
** stderr ** 
	I0503 15:02:45.891216    7770 out.go:291] Setting OutFile to fd 1 ...
	I0503 15:02:45.891365    7770 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0503 15:02:45.891369    7770 out.go:304] Setting ErrFile to fd 2...
	I0503 15:02:45.891371    7770 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0503 15:02:45.891501    7770 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18793-7269/.minikube/bin
	W0503 15:02:45.891572    7770 root.go:314] Error reading config file at /Users/jenkins/minikube-integration/18793-7269/.minikube/config/config.json: open /Users/jenkins/minikube-integration/18793-7269/.minikube/config/config.json: no such file or directory
	I0503 15:02:45.892883    7770 out.go:298] Setting JSON to true
	I0503 15:02:45.909819    7770 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3736,"bootTime":1714770029,"procs":503,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0503 15:02:45.909881    7770 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0503 15:02:45.915674    7770 out.go:97] [download-only-988000] minikube v1.33.0 on Darwin 14.4.1 (arm64)
	I0503 15:02:45.919513    7770 out.go:169] MINIKUBE_LOCATION=18793
	I0503 15:02:45.915802    7770 notify.go:220] Checking for updates...
	W0503 15:02:45.915840    7770 preload.go:294] Failed to list preload files: open /Users/jenkins/minikube-integration/18793-7269/.minikube/cache/preloaded-tarball: no such file or directory
	I0503 15:02:45.926858    7770 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/18793-7269/kubeconfig
	I0503 15:02:45.929614    7770 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0503 15:02:45.932613    7770 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0503 15:02:45.935558    7770 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/18793-7269/.minikube
	W0503 15:02:45.941549    7770 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0503 15:02:45.941751    7770 driver.go:392] Setting default libvirt URI to qemu:///system
	I0503 15:02:45.945510    7770 out.go:97] Using the qemu2 driver based on user configuration
	I0503 15:02:45.945529    7770 start.go:297] selected driver: qemu2
	I0503 15:02:45.945544    7770 start.go:901] validating driver "qemu2" against <nil>
	I0503 15:02:45.945631    7770 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0503 15:02:45.948542    7770 out.go:169] Automatically selected the socket_vmnet network
	I0503 15:02:45.952047    7770 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0503 15:02:45.952142    7770 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0503 15:02:45.952219    7770 cni.go:84] Creating CNI manager for ""
	I0503 15:02:45.952242    7770 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0503 15:02:45.952301    7770 start.go:340] cluster config:
	{Name:download-only-988000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-988000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0503 15:02:45.957186    7770 iso.go:125] acquiring lock: {Name:mk0ad6b95f1bb51c6112b561369aaae11c537b54 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0503 15:02:45.961533    7770 out.go:97] Downloading VM boot image ...
	I0503 15:02:45.961568    7770 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-arm64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-arm64.iso.sha256 -> /Users/jenkins/minikube-integration/18793-7269/.minikube/cache/iso/arm64/minikube-v1.33.0-1714498396-18779-arm64.iso
	I0503 15:02:50.349608    7770 out.go:97] Starting "download-only-988000" primary control-plane node in "download-only-988000" cluster
	I0503 15:02:50.349626    7770 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0503 15:02:50.405734    7770 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0503 15:02:50.405742    7770 cache.go:56] Caching tarball of preloaded images
	I0503 15:02:50.406028    7770 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0503 15:02:50.410637    7770 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0503 15:02:50.410644    7770 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0503 15:02:50.493861    7770 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4?checksum=md5:1a3e8f9b29e6affec63d76d0d3000942 -> /Users/jenkins/minikube-integration/18793-7269/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0503 15:02:55.763961    7770 preload.go:248] saving checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0503 15:02:55.764096    7770 preload.go:255] verifying checksum of /Users/jenkins/minikube-integration/18793-7269/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0503 15:02:56.460696    7770 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0503 15:02:56.460882    7770 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18793-7269/.minikube/profiles/download-only-988000/config.json ...
	I0503 15:02:56.460898    7770 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18793-7269/.minikube/profiles/download-only-988000/config.json: {Name:mk775e79f8473633e2d533f46469ccfa5d2255cf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0503 15:02:56.461592    7770 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0503 15:02:56.461869    7770 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 -> /Users/jenkins/minikube-integration/18793-7269/.minikube/cache/darwin/arm64/v1.20.0/kubectl
	I0503 15:02:56.925369    7770 out.go:169] 
	W0503 15:02:56.931451    7770 out_reason.go:110] Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256: getter: &{Ctx:context.Background Src:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/18793-7269/.minikube/cache/darwin/arm64/v1.20.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x106e4ce00 0x106e4ce00 0x106e4ce00 0x106e4ce00 0x106e4ce00 0x106e4ce00 0x106e4ce00] Decompressors:map[bz2:0x14000765020 gz:0x14000765028 tar:0x14000764fd0 tar.bz2:0x14000764fe0 tar.gz:0x14000764ff0 tar.xz:0x14000765000 tar.zst:0x14000765010 tbz2:0x14000764fe0 tgz:0x14000764ff0 txz:0x14000765000 tzst:0x14000765010 xz:0x14000765030 zip:0x14000765040 zst:0x14000765038] Getters:map[file:0x140021a4560 http:0x1400070a190 https:0x1400070a1e0] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404
	W0503 15:02:56.931480    7770 out_reason.go:110] 
	W0503 15:02:56.943377    7770 out.go:229] ╭───────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                           │
	│    If the above advice does not help, please let us know:                                 │
	│    https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                           │
	│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────╯
	I0503 15:02:56.947276    7770 out.go:169] 

** /stderr **
aaa_download_only_test.go:83: failed to download only. args: ["start" "-o=json" "--download-only" "-p" "download-only-988000" "--force" "--alsologtostderr" "--kubernetes-version=v1.20.0" "--container-runtime=docker" "--driver=qemu2" ""] exit status 40
--- FAIL: TestDownloadOnly/v1.20.0/json-events (11.12s)
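What fails here is the checksum step of minikube's kubectl cache download: the .sha256 URL answers with HTTP 404, so the getter aborts with "invalid checksum" before kubectl itself is fetched. Below is a minimal, standalone Go sketch (not part of the test suite; the URL is copied verbatim from the log above, and the file name probe_checksum.go is hypothetical) that performs the same checksum-file request:

// probe_checksum.go: standalone sketch, assumes only the Go standard library.
package main

import (
	"fmt"
	"net/http"
)

func main() {
	// Checksum URL copied verbatim from the failure log above.
	url := "https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256"
	resp, err := http.Head(url)
	if err != nil {
		fmt.Println("request error:", err)
		return
	}
	resp.Body.Close()
	// The log reports "bad response code: 404": the checksum fetch fails
	// before the kubectl binary itself is ever downloaded.
	fmt.Println(url, "->", resp.Status)
}

A 404 from this probe would point at a missing upstream release artifact for v1.20.0 on darwin/arm64 rather than a regression in the test itself.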

TestDownloadOnly/v1.20.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:175: expected the file for binary exist at "/Users/jenkins/minikube-integration/18793-7269/.minikube/cache/darwin/arm64/v1.20.0/kubectl" but got error stat /Users/jenkins/minikube-integration/18793-7269/.minikube/cache/darwin/arm64/v1.20.0/kubectl: no such file or directory
--- FAIL: TestDownloadOnly/v1.20.0/kubectl (0.00s)
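This subtest fails as a direct consequence of the previous one: it merely stats the kubectl binary that the aborted download never cached. A tiny standalone sketch of the same existence check (path copied from the message above; the file name probe_stat.go is hypothetical):

// probe_stat.go: standalone sketch of the existence check the test performs.
package main

import (
	"fmt"
	"os"
)

func main() {
	// Cache path copied from the failure message above.
	p := "/Users/jenkins/minikube-integration/18793-7269/.minikube/cache/darwin/arm64/v1.20.0/kubectl"
	if _, err := os.Stat(p); err != nil {
		// On this runner this yields the same "no such file or directory" error.
		fmt.Println(err)
		return
	}
	fmt.Println("kubectl binary is cached at", p)
}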

TestOffline (9.99s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-darwin-arm64 start -p offline-docker-308000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2 
aab_offline_test.go:55: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p offline-docker-308000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2 : exit status 80 (9.811583209s)

-- stdout --
	* [offline-docker-308000] minikube v1.33.0 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18793
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18793-7269/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18793-7269/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "offline-docker-308000" primary control-plane node in "offline-docker-308000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "offline-docker-308000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0503 15:14:11.905441    9353 out.go:291] Setting OutFile to fd 1 ...
	I0503 15:14:11.905612    9353 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0503 15:14:11.905616    9353 out.go:304] Setting ErrFile to fd 2...
	I0503 15:14:11.905618    9353 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0503 15:14:11.905747    9353 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18793-7269/.minikube/bin
	I0503 15:14:11.906958    9353 out.go:298] Setting JSON to false
	I0503 15:14:11.924452    9353 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4422,"bootTime":1714770029,"procs":499,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0503 15:14:11.924557    9353 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0503 15:14:11.929202    9353 out.go:177] * [offline-docker-308000] minikube v1.33.0 on Darwin 14.4.1 (arm64)
	I0503 15:14:11.937113    9353 out.go:177]   - MINIKUBE_LOCATION=18793
	I0503 15:14:11.937147    9353 notify.go:220] Checking for updates...
	I0503 15:14:11.943030    9353 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18793-7269/kubeconfig
	I0503 15:14:11.946092    9353 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0503 15:14:11.949140    9353 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0503 15:14:11.952050    9353 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18793-7269/.minikube
	I0503 15:14:11.955107    9353 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0503 15:14:11.958449    9353 config.go:182] Loaded profile config "multinode-952000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0503 15:14:11.958505    9353 driver.go:392] Setting default libvirt URI to qemu:///system
	I0503 15:14:11.962082    9353 out.go:177] * Using the qemu2 driver based on user configuration
	I0503 15:14:11.969104    9353 start.go:297] selected driver: qemu2
	I0503 15:14:11.969113    9353 start.go:901] validating driver "qemu2" against <nil>
	I0503 15:14:11.969121    9353 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0503 15:14:11.971220    9353 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0503 15:14:11.974110    9353 out.go:177] * Automatically selected the socket_vmnet network
	I0503 15:14:11.981155    9353 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0503 15:14:11.981177    9353 cni.go:84] Creating CNI manager for ""
	I0503 15:14:11.981183    9353 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0503 15:14:11.981187    9353 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0503 15:14:11.981215    9353 start.go:340] cluster config:
	{Name:offline-docker-308000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:offline-docker-308000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0503 15:14:11.985814    9353 iso.go:125] acquiring lock: {Name:mk0ad6b95f1bb51c6112b561369aaae11c537b54 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0503 15:14:11.993077    9353 out.go:177] * Starting "offline-docker-308000" primary control-plane node in "offline-docker-308000" cluster
	I0503 15:14:11.997116    9353 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0503 15:14:11.997159    9353 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18793-7269/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-arm64.tar.lz4
	I0503 15:14:11.997167    9353 cache.go:56] Caching tarball of preloaded images
	I0503 15:14:11.997270    9353 preload.go:173] Found /Users/jenkins/minikube-integration/18793-7269/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0503 15:14:11.997276    9353 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
	I0503 15:14:11.997336    9353 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18793-7269/.minikube/profiles/offline-docker-308000/config.json ...
	I0503 15:14:11.997347    9353 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18793-7269/.minikube/profiles/offline-docker-308000/config.json: {Name:mk2835e760af6370dcdc2e2cdfe72dca7a5ce3a7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0503 15:14:11.997656    9353 start.go:360] acquireMachinesLock for offline-docker-308000: {Name:mk75a6a65f97ac1ce21c567594b284ec6b0a9ff6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0503 15:14:11.997698    9353 start.go:364] duration metric: took 32.834µs to acquireMachinesLock for "offline-docker-308000"
	I0503 15:14:11.997710    9353 start.go:93] Provisioning new machine with config: &{Name:offline-docker-308000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:offline-docker-308000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0503 15:14:11.997751    9353 start.go:125] createHost starting for "" (driver="qemu2")
	I0503 15:14:12.006103    9353 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0503 15:14:12.021796    9353 start.go:159] libmachine.API.Create for "offline-docker-308000" (driver="qemu2")
	I0503 15:14:12.021837    9353 client.go:168] LocalClient.Create starting
	I0503 15:14:12.021917    9353 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18793-7269/.minikube/certs/ca.pem
	I0503 15:14:12.021949    9353 main.go:141] libmachine: Decoding PEM data...
	I0503 15:14:12.021957    9353 main.go:141] libmachine: Parsing certificate...
	I0503 15:14:12.022014    9353 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18793-7269/.minikube/certs/cert.pem
	I0503 15:14:12.022037    9353 main.go:141] libmachine: Decoding PEM data...
	I0503 15:14:12.022043    9353 main.go:141] libmachine: Parsing certificate...
	I0503 15:14:12.022392    9353 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18793-7269/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18793-7269/.minikube/cache/iso/arm64/minikube-v1.33.0-1714498396-18779-arm64.iso...
	I0503 15:14:12.168657    9353 main.go:141] libmachine: Creating SSH key...
	I0503 15:14:12.292381    9353 main.go:141] libmachine: Creating Disk image...
	I0503 15:14:12.292389    9353 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0503 15:14:12.292557    9353 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/offline-docker-308000/disk.qcow2.raw /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/offline-docker-308000/disk.qcow2
	I0503 15:14:12.305587    9353 main.go:141] libmachine: STDOUT: 
	I0503 15:14:12.305612    9353 main.go:141] libmachine: STDERR: 
	I0503 15:14:12.305686    9353 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/offline-docker-308000/disk.qcow2 +20000M
	I0503 15:14:12.317451    9353 main.go:141] libmachine: STDOUT: Image resized.
	
	I0503 15:14:12.317479    9353 main.go:141] libmachine: STDERR: 
	I0503 15:14:12.317500    9353 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/offline-docker-308000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/offline-docker-308000/disk.qcow2
	I0503 15:14:12.317505    9353 main.go:141] libmachine: Starting QEMU VM...
	I0503 15:14:12.317554    9353 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/offline-docker-308000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18793-7269/.minikube/machines/offline-docker-308000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/offline-docker-308000/qemu.pid -device virtio-net-pci,netdev=net0,mac=c2:3d:b5:d1:21:fa -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/offline-docker-308000/disk.qcow2
	I0503 15:14:12.319091    9353 main.go:141] libmachine: STDOUT: 
	I0503 15:14:12.319109    9353 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0503 15:14:12.319138    9353 client.go:171] duration metric: took 297.303583ms to LocalClient.Create
	I0503 15:14:14.320791    9353 start.go:128] duration metric: took 2.32308575s to createHost
	I0503 15:14:14.320805    9353 start.go:83] releasing machines lock for "offline-docker-308000", held for 2.323155792s
	W0503 15:14:14.320824    9353 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0503 15:14:14.325352    9353 out.go:177] * Deleting "offline-docker-308000" in qemu2 ...
	W0503 15:14:14.335162    9353 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0503 15:14:14.335174    9353 start.go:728] Will try again in 5 seconds ...
	I0503 15:14:19.337342    9353 start.go:360] acquireMachinesLock for offline-docker-308000: {Name:mk75a6a65f97ac1ce21c567594b284ec6b0a9ff6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0503 15:14:19.337855    9353 start.go:364] duration metric: took 398.959µs to acquireMachinesLock for "offline-docker-308000"
	I0503 15:14:19.338004    9353 start.go:93] Provisioning new machine with config: &{Name:offline-docker-308000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:offline-docker-308000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0503 15:14:19.338273    9353 start.go:125] createHost starting for "" (driver="qemu2")
	I0503 15:14:19.348698    9353 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0503 15:14:19.399262    9353 start.go:159] libmachine.API.Create for "offline-docker-308000" (driver="qemu2")
	I0503 15:14:19.399308    9353 client.go:168] LocalClient.Create starting
	I0503 15:14:19.399409    9353 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18793-7269/.minikube/certs/ca.pem
	I0503 15:14:19.399469    9353 main.go:141] libmachine: Decoding PEM data...
	I0503 15:14:19.399488    9353 main.go:141] libmachine: Parsing certificate...
	I0503 15:14:19.399554    9353 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18793-7269/.minikube/certs/cert.pem
	I0503 15:14:19.399596    9353 main.go:141] libmachine: Decoding PEM data...
	I0503 15:14:19.399611    9353 main.go:141] libmachine: Parsing certificate...
	I0503 15:14:19.400186    9353 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18793-7269/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18793-7269/.minikube/cache/iso/arm64/minikube-v1.33.0-1714498396-18779-arm64.iso...
	I0503 15:14:19.556514    9353 main.go:141] libmachine: Creating SSH key...
	I0503 15:14:19.612161    9353 main.go:141] libmachine: Creating Disk image...
	I0503 15:14:19.612166    9353 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0503 15:14:19.612340    9353 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/offline-docker-308000/disk.qcow2.raw /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/offline-docker-308000/disk.qcow2
	I0503 15:14:19.624694    9353 main.go:141] libmachine: STDOUT: 
	I0503 15:14:19.624720    9353 main.go:141] libmachine: STDERR: 
	I0503 15:14:19.624775    9353 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/offline-docker-308000/disk.qcow2 +20000M
	I0503 15:14:19.635768    9353 main.go:141] libmachine: STDOUT: Image resized.
	
	I0503 15:14:19.635786    9353 main.go:141] libmachine: STDERR: 
	I0503 15:14:19.635810    9353 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/offline-docker-308000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/offline-docker-308000/disk.qcow2
	I0503 15:14:19.635815    9353 main.go:141] libmachine: Starting QEMU VM...
	I0503 15:14:19.635849    9353 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/offline-docker-308000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18793-7269/.minikube/machines/offline-docker-308000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/offline-docker-308000/qemu.pid -device virtio-net-pci,netdev=net0,mac=46:c5:52:79:a0:8f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/offline-docker-308000/disk.qcow2
	I0503 15:14:19.637502    9353 main.go:141] libmachine: STDOUT: 
	I0503 15:14:19.637519    9353 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0503 15:14:19.637530    9353 client.go:171] duration metric: took 238.221ms to LocalClient.Create
	I0503 15:14:21.639658    9353 start.go:128] duration metric: took 2.301406417s to createHost
	I0503 15:14:21.639700    9353 start.go:83] releasing machines lock for "offline-docker-308000", held for 2.301865625s
	W0503 15:14:21.640134    9353 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p offline-docker-308000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p offline-docker-308000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0503 15:14:21.653957    9353 out.go:177] 
	W0503 15:14:21.658122    9353 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0503 15:14:21.658163    9353 out.go:239] * 
	* 
	W0503 15:14:21.660848    9353 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0503 15:14:21.670089    9353 out.go:177] 

** /stderr **
aab_offline_test.go:58: out/minikube-darwin-arm64 start -p offline-docker-308000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2  failed: exit status 80
panic.go:626: *** TestOffline FAILED at 2024-05-03 15:14:21.687801 -0700 PDT m=+695.929524376
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p offline-docker-308000 -n offline-docker-308000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p offline-docker-308000 -n offline-docker-308000: exit status 7 (70.294125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "offline-docker-308000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "offline-docker-308000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p offline-docker-308000
--- FAIL: TestOffline (9.99s)
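Like most failures in this run, the start dies when libmachine launches qemu-system-aarch64 through /opt/socket_vmnet/bin/socket_vmnet_client and the client cannot connect to the unix socket. A minimal standalone Go sketch (independent of minikube; the socket path comes from the SocketVMnetPath field in the config above, and the file name probe_vmnet.go is hypothetical) that checks whether a socket_vmnet daemon is actually listening:

// probe_vmnet.go: standalone sketch, assumes only the Go standard library.
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Socket path taken from the SocketVMnetPath field in the cluster config above.
	const sock = "/var/run/socket_vmnet"
	conn, err := net.DialTimeout("unix", sock, 2*time.Second)
	if err != nil {
		// "connection refused" matches this run's symptom: the path is
		// configured, but nothing is serving it.
		fmt.Println("socket_vmnet not reachable:", err)
		return
	}
	conn.Close()
	fmt.Println("socket_vmnet is accepting connections at", sock)
}

If the probe reports "connection refused", every qemu2 VM creation will fail exactly as logged above until a socket_vmnet daemon is running on the host.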

TestAddons/Setup (10.26s)

=== RUN   TestAddons/Setup
addons_test.go:109: (dbg) Run:  out/minikube-darwin-arm64 start -p addons-379000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=qemu2  --addons=ingress --addons=ingress-dns
addons_test.go:109: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p addons-379000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=qemu2  --addons=ingress --addons=ingress-dns: exit status 80 (10.261802875s)

-- stdout --
	* [addons-379000] minikube v1.33.0 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18793
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18793-7269/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18793-7269/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "addons-379000" primary control-plane node in "addons-379000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "addons-379000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0503 15:03:05.442691    7879 out.go:291] Setting OutFile to fd 1 ...
	I0503 15:03:05.442820    7879 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0503 15:03:05.442824    7879 out.go:304] Setting ErrFile to fd 2...
	I0503 15:03:05.442826    7879 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0503 15:03:05.442949    7879 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18793-7269/.minikube/bin
	I0503 15:03:05.443974    7879 out.go:298] Setting JSON to false
	I0503 15:03:05.460165    7879 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3756,"bootTime":1714770029,"procs":503,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0503 15:03:05.460229    7879 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0503 15:03:05.463847    7879 out.go:177] * [addons-379000] minikube v1.33.0 on Darwin 14.4.1 (arm64)
	I0503 15:03:05.470845    7879 out.go:177]   - MINIKUBE_LOCATION=18793
	I0503 15:03:05.470904    7879 notify.go:220] Checking for updates...
	I0503 15:03:05.477761    7879 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18793-7269/kubeconfig
	I0503 15:03:05.480859    7879 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0503 15:03:05.483831    7879 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0503 15:03:05.486839    7879 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18793-7269/.minikube
	I0503 15:03:05.489834    7879 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0503 15:03:05.492961    7879 driver.go:392] Setting default libvirt URI to qemu:///system
	I0503 15:03:05.496810    7879 out.go:177] * Using the qemu2 driver based on user configuration
	I0503 15:03:05.502803    7879 start.go:297] selected driver: qemu2
	I0503 15:03:05.502811    7879 start.go:901] validating driver "qemu2" against <nil>
	I0503 15:03:05.502818    7879 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0503 15:03:05.505036    7879 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0503 15:03:05.507805    7879 out.go:177] * Automatically selected the socket_vmnet network
	I0503 15:03:05.510932    7879 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0503 15:03:05.510981    7879 cni.go:84] Creating CNI manager for ""
	I0503 15:03:05.510989    7879 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0503 15:03:05.510993    7879 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0503 15:03:05.511025    7879 start.go:340] cluster config:
	{Name:addons-379000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:addons-379000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0503 15:03:05.515458    7879 iso.go:125] acquiring lock: {Name:mk0ad6b95f1bb51c6112b561369aaae11c537b54 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0503 15:03:05.522870    7879 out.go:177] * Starting "addons-379000" primary control-plane node in "addons-379000" cluster
	I0503 15:03:05.526805    7879 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0503 15:03:05.526820    7879 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18793-7269/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-arm64.tar.lz4
	I0503 15:03:05.526835    7879 cache.go:56] Caching tarball of preloaded images
	I0503 15:03:05.526899    7879 preload.go:173] Found /Users/jenkins/minikube-integration/18793-7269/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0503 15:03:05.526909    7879 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
	I0503 15:03:05.527113    7879 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18793-7269/.minikube/profiles/addons-379000/config.json ...
	I0503 15:03:05.527124    7879 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18793-7269/.minikube/profiles/addons-379000/config.json: {Name:mk684fdf5ce6d0d6afb75fd821c254aed122efca Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0503 15:03:05.527482    7879 start.go:360] acquireMachinesLock for addons-379000: {Name:mk75a6a65f97ac1ce21c567594b284ec6b0a9ff6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0503 15:03:05.527545    7879 start.go:364] duration metric: took 57.042µs to acquireMachinesLock for "addons-379000"
	I0503 15:03:05.527556    7879 start.go:93] Provisioning new machine with config: &{Name:addons-379000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:addons-379000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0503 15:03:05.527582    7879 start.go:125] createHost starting for "" (driver="qemu2")
	I0503 15:03:05.536841    7879 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0503 15:03:05.556872    7879 start.go:159] libmachine.API.Create for "addons-379000" (driver="qemu2")
	I0503 15:03:05.556942    7879 client.go:168] LocalClient.Create starting
	I0503 15:03:05.557099    7879 main.go:141] libmachine: Creating CA: /Users/jenkins/minikube-integration/18793-7269/.minikube/certs/ca.pem
	I0503 15:03:05.739041    7879 main.go:141] libmachine: Creating client certificate: /Users/jenkins/minikube-integration/18793-7269/.minikube/certs/cert.pem
	I0503 15:03:05.812524    7879 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18793-7269/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18793-7269/.minikube/cache/iso/arm64/minikube-v1.33.0-1714498396-18779-arm64.iso...
	I0503 15:03:06.054048    7879 main.go:141] libmachine: Creating SSH key...
	I0503 15:03:06.119326    7879 main.go:141] libmachine: Creating Disk image...
	I0503 15:03:06.119331    7879 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0503 15:03:06.119500    7879 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/addons-379000/disk.qcow2.raw /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/addons-379000/disk.qcow2
	I0503 15:03:06.132339    7879 main.go:141] libmachine: STDOUT: 
	I0503 15:03:06.132365    7879 main.go:141] libmachine: STDERR: 
	I0503 15:03:06.132459    7879 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/addons-379000/disk.qcow2 +20000M
	I0503 15:03:06.143444    7879 main.go:141] libmachine: STDOUT: Image resized.
	
	I0503 15:03:06.143463    7879 main.go:141] libmachine: STDERR: 
	I0503 15:03:06.143473    7879 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/addons-379000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/addons-379000/disk.qcow2
	I0503 15:03:06.143479    7879 main.go:141] libmachine: Starting QEMU VM...
	I0503 15:03:06.143523    7879 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/addons-379000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18793-7269/.minikube/machines/addons-379000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/addons-379000/qemu.pid -device virtio-net-pci,netdev=net0,mac=d6:e8:37:c3:8a:fb -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/addons-379000/disk.qcow2
	I0503 15:03:06.145214    7879 main.go:141] libmachine: STDOUT: 
	I0503 15:03:06.145230    7879 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0503 15:03:06.145248    7879 client.go:171] duration metric: took 588.302208ms to LocalClient.Create
	I0503 15:03:08.147472    7879 start.go:128] duration metric: took 2.61990625s to createHost
	I0503 15:03:08.147673    7879 start.go:83] releasing machines lock for "addons-379000", held for 2.620030333s
	W0503 15:03:08.147754    7879 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0503 15:03:08.158932    7879 out.go:177] * Deleting "addons-379000" in qemu2 ...
	W0503 15:03:08.189845    7879 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0503 15:03:08.189870    7879 start.go:728] Will try again in 5 seconds ...
	I0503 15:03:13.192083    7879 start.go:360] acquireMachinesLock for addons-379000: {Name:mk75a6a65f97ac1ce21c567594b284ec6b0a9ff6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0503 15:03:13.192523    7879 start.go:364] duration metric: took 345.125µs to acquireMachinesLock for "addons-379000"
	I0503 15:03:13.193082    7879 start.go:93] Provisioning new machine with config: &{Name:addons-379000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:addons-379000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0503 15:03:13.193379    7879 start.go:125] createHost starting for "" (driver="qemu2")
	I0503 15:03:13.203996    7879 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0503 15:03:13.252740    7879 start.go:159] libmachine.API.Create for "addons-379000" (driver="qemu2")
	I0503 15:03:13.252798    7879 client.go:168] LocalClient.Create starting
	I0503 15:03:13.252930    7879 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18793-7269/.minikube/certs/ca.pem
	I0503 15:03:13.253007    7879 main.go:141] libmachine: Decoding PEM data...
	I0503 15:03:13.253022    7879 main.go:141] libmachine: Parsing certificate...
	I0503 15:03:13.253124    7879 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18793-7269/.minikube/certs/cert.pem
	I0503 15:03:13.253171    7879 main.go:141] libmachine: Decoding PEM data...
	I0503 15:03:13.253183    7879 main.go:141] libmachine: Parsing certificate...
	I0503 15:03:13.253688    7879 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18793-7269/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18793-7269/.minikube/cache/iso/arm64/minikube-v1.33.0-1714498396-18779-arm64.iso...
	I0503 15:03:13.435944    7879 main.go:141] libmachine: Creating SSH key...
	I0503 15:03:13.602039    7879 main.go:141] libmachine: Creating Disk image...
	I0503 15:03:13.602046    7879 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0503 15:03:13.602207    7879 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/addons-379000/disk.qcow2.raw /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/addons-379000/disk.qcow2
	I0503 15:03:13.615260    7879 main.go:141] libmachine: STDOUT: 
	I0503 15:03:13.615279    7879 main.go:141] libmachine: STDERR: 
	I0503 15:03:13.615340    7879 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/addons-379000/disk.qcow2 +20000M
	I0503 15:03:13.626467    7879 main.go:141] libmachine: STDOUT: Image resized.
	
	I0503 15:03:13.626484    7879 main.go:141] libmachine: STDERR: 
	I0503 15:03:13.626503    7879 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/addons-379000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/addons-379000/disk.qcow2
	I0503 15:03:13.626507    7879 main.go:141] libmachine: Starting QEMU VM...
	I0503 15:03:13.626542    7879 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/addons-379000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18793-7269/.minikube/machines/addons-379000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/addons-379000/qemu.pid -device virtio-net-pci,netdev=net0,mac=6a:ea:1a:69:02:e1 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/addons-379000/disk.qcow2
	I0503 15:03:13.628285    7879 main.go:141] libmachine: STDOUT: 
	I0503 15:03:13.628302    7879 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0503 15:03:13.628316    7879 client.go:171] duration metric: took 375.518167ms to LocalClient.Create
	I0503 15:03:15.630460    7879 start.go:128] duration metric: took 2.437086208s to createHost
	I0503 15:03:15.630509    7879 start.go:83] releasing machines lock for "addons-379000", held for 2.437993s
	W0503 15:03:15.630818    7879 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p addons-379000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p addons-379000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0503 15:03:15.640432    7879 out.go:177] 
	W0503 15:03:15.646496    7879 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0503 15:03:15.646533    7879 out.go:239] * 
	* 
	W0503 15:03:15.649061    7879 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0503 15:03:15.658418    7879 out.go:177] 

                                                
                                                
** /stderr **
addons_test.go:111: out/minikube-darwin-arm64 start -p addons-379000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=qemu2  --addons=ingress --addons=ingress-dns failed: exit status 80
--- FAIL: TestAddons/Setup (10.26s)
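
Every failed start in this report bottoms out at the same error: socket_vmnet_client cannot connect to /var/run/socket_vmnet ("Connection refused"), so QEMU never receives its network file descriptor and VM creation is aborted. A minimal diagnostic sketch for the CI host, assuming socket_vmnet was installed via Homebrew (the service name and recovery steps below are assumptions, not taken from this log):

	# Is the socket present, and is a socket_vmnet daemon actually running?
	ls -l /var/run/socket_vmnet
	pgrep -fl socket_vmnet

	# If the daemon is down, restart it (Homebrew-managed install; root is
	# needed because socket_vmnet uses the macOS vmnet framework).
	sudo brew services restart socket_vmnet

If the socket file exists but connections are still refused, the daemon has likely died and left a stale socket behind; removing the stale file and then restarting the service is a common recovery step.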

                                                
                                    
TestCertOptions (10.16s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-options-277000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 
cert_options_test.go:49: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-options-277000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 : exit status 80 (9.870000166s)

                                                
                                                
-- stdout --
	* [cert-options-277000] minikube v1.33.0 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18793
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18793-7269/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18793-7269/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "cert-options-277000" primary control-plane node in "cert-options-277000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "cert-options-277000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-options-277000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
cert_options_test.go:51: failed to start minikube with args: "out/minikube-darwin-arm64 start -p cert-options-277000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 " : exit status 80
cert_options_test.go:60: (dbg) Run:  out/minikube-darwin-arm64 -p cert-options-277000 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:60: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p cert-options-277000 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt": exit status 83 (80.297875ms)

                                                
                                                
-- stdout --
	* The control-plane node cert-options-277000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p cert-options-277000"

                                                
                                                
-- /stdout --
cert_options_test.go:62: failed to read apiserver cert inside minikube. args "out/minikube-darwin-arm64 -p cert-options-277000 ssh \"openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt\"": exit status 83
cert_options_test.go:69: apiserver cert does not include 127.0.0.1 in SAN.
cert_options_test.go:69: apiserver cert does not include 192.168.15.15 in SAN.
cert_options_test.go:69: apiserver cert does not include localhost in SAN.
cert_options_test.go:69: apiserver cert does not include www.google.com in SAN.
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-277000 config view
cert_options_test.go:93: Kubeconfig apiserver server port incorrect. Output of 
'kubectl config view' = "\n-- stdout --\n\tapiVersion: v1\n\tclusters: null\n\tcontexts: null\n\tcurrent-context: \"\"\n\tkind: Config\n\tpreferences: {}\n\tusers: null\n\n-- /stdout --"
cert_options_test.go:100: (dbg) Run:  out/minikube-darwin-arm64 ssh -p cert-options-277000 -- "sudo cat /etc/kubernetes/admin.conf"
cert_options_test.go:100: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p cert-options-277000 -- "sudo cat /etc/kubernetes/admin.conf": exit status 83 (42.173292ms)

                                                
                                                
-- stdout --
	* The control-plane node cert-options-277000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p cert-options-277000"

                                                
                                                
-- /stdout --
cert_options_test.go:102: failed to SSH to minikube with args: "out/minikube-darwin-arm64 ssh -p cert-options-277000 -- \"sudo cat /etc/kubernetes/admin.conf\"" : exit status 83
cert_options_test.go:106: Internal minikube kubeconfig (admin.conf) does not contain the right api port. 
-- stdout --
	* The control-plane node cert-options-277000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p cert-options-277000"

                                                
                                                
-- /stdout --
cert_options_test.go:109: *** TestCertOptions FAILED at 2024-05-03 15:14:52.402431 -0700 PDT m=+726.644859209
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p cert-options-277000 -n cert-options-277000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p cert-options-277000 -n cert-options-277000: exit status 7 (32.093083ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "cert-options-277000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "cert-options-277000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cert-options-277000
--- FAIL: TestCertOptions (10.16s)
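
The SAN assertions at cert_options_test.go:69 never ran against a real certificate because the VM was never created. On a working cluster the probe from cert_options_test.go:60 can be reproduced by hand; a sketch, assuming the same profile name and a guest OpenSSL new enough (1.1.1+) to support -ext:

	# Print only the Subject Alternative Name extension of the apiserver cert;
	# the test expects 127.0.0.1, 192.168.15.15, localhost and www.google.com.
	out/minikube-darwin-arm64 -p cert-options-277000 ssh \
	  "openssl x509 -noout -ext subjectAltName -in /var/lib/minikube/certs/apiserver.crt"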

                                                
                                    
TestCertExpiration (195.3s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-expiration-807000 --memory=2048 --cert-expiration=3m --driver=qemu2 
cert_options_test.go:123: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-expiration-807000 --memory=2048 --cert-expiration=3m --driver=qemu2 : exit status 80 (9.892396084s)

                                                
                                                
-- stdout --
	* [cert-expiration-807000] minikube v1.33.0 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18793
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18793-7269/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18793-7269/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "cert-expiration-807000" primary control-plane node in "cert-expiration-807000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "cert-expiration-807000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-807000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
cert_options_test.go:125: failed to start minikube with args: "out/minikube-darwin-arm64 start -p cert-expiration-807000 --memory=2048 --cert-expiration=3m --driver=qemu2 " : exit status 80
cert_options_test.go:131: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-expiration-807000 --memory=2048 --cert-expiration=8760h --driver=qemu2 
cert_options_test.go:131: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-expiration-807000 --memory=2048 --cert-expiration=8760h --driver=qemu2 : exit status 80 (5.23164075s)

                                                
                                                
-- stdout --
	* [cert-expiration-807000] minikube v1.33.0 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18793
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18793-7269/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18793-7269/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "cert-expiration-807000" primary control-plane node in "cert-expiration-807000" cluster
	* Restarting existing qemu2 VM for "cert-expiration-807000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "cert-expiration-807000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-807000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
cert_options_test.go:133: failed to start minikube after cert expiration: "out/minikube-darwin-arm64 start -p cert-expiration-807000 --memory=2048 --cert-expiration=8760h --driver=qemu2 " : exit status 80
cert_options_test.go:136: minikube start output did not warn about expired certs: 
-- stdout --
	* [cert-expiration-807000] minikube v1.33.0 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18793
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18793-7269/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18793-7269/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "cert-expiration-807000" primary control-plane node in "cert-expiration-807000" cluster
	* Restarting existing qemu2 VM for "cert-expiration-807000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "cert-expiration-807000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-807000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
cert_options_test.go:138: *** TestCertExpiration FAILED at 2024-05-03 15:17:52.378329 -0700 PDT m=+906.624885668
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p cert-expiration-807000 -n cert-expiration-807000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p cert-expiration-807000 -n cert-expiration-807000: exit status 7 (72.140708ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "cert-expiration-807000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "cert-expiration-807000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cert-expiration-807000
--- FAIL: TestCertExpiration (195.30s)
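
This test needs two starts: the first mints cluster certificates that expire after three minutes (--cert-expiration=3m), and once that window has passed the second start (--cert-expiration=8760h) is expected to regenerate them and warn about the expired certs. Neither start got past VM creation here, so the expiration logic was never exercised. With a working driver, the short lifetime set by the first start could be confirmed directly; a sketch, assuming a running cert-expiration-807000 profile:

	# Show the NotAfter timestamp of the apiserver certificate; with
	# --cert-expiration=3m it should lie roughly three minutes after start.
	out/minikube-darwin-arm64 -p cert-expiration-807000 ssh \
	  "openssl x509 -noout -enddate -in /var/lib/minikube/certs/apiserver.crt"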

                                                
                                    
TestDockerFlags (10.34s)

=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags

=== CONT  TestDockerFlags
docker_test.go:51: (dbg) Run:  out/minikube-darwin-arm64 start -p docker-flags-965000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 
docker_test.go:51: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p docker-flags-965000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (10.080837375s)

                                                
                                                
-- stdout --
	* [docker-flags-965000] minikube v1.33.0 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18793
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18793-7269/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18793-7269/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "docker-flags-965000" primary control-plane node in "docker-flags-965000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "docker-flags-965000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0503 15:14:32.069437    9549 out.go:291] Setting OutFile to fd 1 ...
	I0503 15:14:32.069573    9549 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0503 15:14:32.069580    9549 out.go:304] Setting ErrFile to fd 2...
	I0503 15:14:32.069583    9549 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0503 15:14:32.069710    9549 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18793-7269/.minikube/bin
	I0503 15:14:32.070796    9549 out.go:298] Setting JSON to false
	I0503 15:14:32.086774    9549 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4443,"bootTime":1714770029,"procs":494,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0503 15:14:32.086841    9549 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0503 15:14:32.092858    9549 out.go:177] * [docker-flags-965000] minikube v1.33.0 on Darwin 14.4.1 (arm64)
	I0503 15:14:32.099796    9549 out.go:177]   - MINIKUBE_LOCATION=18793
	I0503 15:14:32.104826    9549 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18793-7269/kubeconfig
	I0503 15:14:32.099861    9549 notify.go:220] Checking for updates...
	I0503 15:14:32.110920    9549 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0503 15:14:32.113876    9549 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0503 15:14:32.116893    9549 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18793-7269/.minikube
	I0503 15:14:32.119771    9549 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0503 15:14:32.123263    9549 config.go:182] Loaded profile config "force-systemd-flag-743000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0503 15:14:32.123330    9549 config.go:182] Loaded profile config "multinode-952000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0503 15:14:32.123375    9549 driver.go:392] Setting default libvirt URI to qemu:///system
	I0503 15:14:32.127901    9549 out.go:177] * Using the qemu2 driver based on user configuration
	I0503 15:14:32.134874    9549 start.go:297] selected driver: qemu2
	I0503 15:14:32.134882    9549 start.go:901] validating driver "qemu2" against <nil>
	I0503 15:14:32.134889    9549 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0503 15:14:32.137263    9549 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0503 15:14:32.140848    9549 out.go:177] * Automatically selected the socket_vmnet network
	I0503 15:14:32.142425    9549 start_flags.go:942] Waiting for no components: map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false]
	I0503 15:14:32.142466    9549 cni.go:84] Creating CNI manager for ""
	I0503 15:14:32.142475    9549 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0503 15:14:32.142486    9549 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0503 15:14:32.142512    9549 start.go:340] cluster config:
	{Name:docker-flags-965000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:docker-flags-965000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0503 15:14:32.147113    9549 iso.go:125] acquiring lock: {Name:mk0ad6b95f1bb51c6112b561369aaae11c537b54 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0503 15:14:32.154895    9549 out.go:177] * Starting "docker-flags-965000" primary control-plane node in "docker-flags-965000" cluster
	I0503 15:14:32.158875    9549 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0503 15:14:32.158896    9549 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18793-7269/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-arm64.tar.lz4
	I0503 15:14:32.158905    9549 cache.go:56] Caching tarball of preloaded images
	I0503 15:14:32.158977    9549 preload.go:173] Found /Users/jenkins/minikube-integration/18793-7269/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0503 15:14:32.158991    9549 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
	I0503 15:14:32.159045    9549 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18793-7269/.minikube/profiles/docker-flags-965000/config.json ...
	I0503 15:14:32.159058    9549 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18793-7269/.minikube/profiles/docker-flags-965000/config.json: {Name:mk6968e58ab28e98b9a222e275a89c9290471d70 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0503 15:14:32.159290    9549 start.go:360] acquireMachinesLock for docker-flags-965000: {Name:mk75a6a65f97ac1ce21c567594b284ec6b0a9ff6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0503 15:14:32.159327    9549 start.go:364] duration metric: took 29.458µs to acquireMachinesLock for "docker-flags-965000"
	I0503 15:14:32.159340    9549 start.go:93] Provisioning new machine with config: &{Name:docker-flags-965000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:docker-flags-965000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0503 15:14:32.159372    9549 start.go:125] createHost starting for "" (driver="qemu2")
	I0503 15:14:32.167864    9549 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0503 15:14:32.185895    9549 start.go:159] libmachine.API.Create for "docker-flags-965000" (driver="qemu2")
	I0503 15:14:32.185923    9549 client.go:168] LocalClient.Create starting
	I0503 15:14:32.185991    9549 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18793-7269/.minikube/certs/ca.pem
	I0503 15:14:32.186020    9549 main.go:141] libmachine: Decoding PEM data...
	I0503 15:14:32.186035    9549 main.go:141] libmachine: Parsing certificate...
	I0503 15:14:32.186070    9549 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18793-7269/.minikube/certs/cert.pem
	I0503 15:14:32.186094    9549 main.go:141] libmachine: Decoding PEM data...
	I0503 15:14:32.186100    9549 main.go:141] libmachine: Parsing certificate...
	I0503 15:14:32.186450    9549 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18793-7269/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18793-7269/.minikube/cache/iso/arm64/minikube-v1.33.0-1714498396-18779-arm64.iso...
	I0503 15:14:32.332721    9549 main.go:141] libmachine: Creating SSH key...
	I0503 15:14:32.494357    9549 main.go:141] libmachine: Creating Disk image...
	I0503 15:14:32.494363    9549 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0503 15:14:32.494568    9549 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/docker-flags-965000/disk.qcow2.raw /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/docker-flags-965000/disk.qcow2
	I0503 15:14:32.507442    9549 main.go:141] libmachine: STDOUT: 
	I0503 15:14:32.507466    9549 main.go:141] libmachine: STDERR: 
	I0503 15:14:32.507525    9549 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/docker-flags-965000/disk.qcow2 +20000M
	I0503 15:14:32.518467    9549 main.go:141] libmachine: STDOUT: Image resized.
	
	I0503 15:14:32.518479    9549 main.go:141] libmachine: STDERR: 
	I0503 15:14:32.518499    9549 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/docker-flags-965000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/docker-flags-965000/disk.qcow2
	I0503 15:14:32.518504    9549 main.go:141] libmachine: Starting QEMU VM...
	I0503 15:14:32.518535    9549 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/docker-flags-965000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18793-7269/.minikube/machines/docker-flags-965000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/docker-flags-965000/qemu.pid -device virtio-net-pci,netdev=net0,mac=0e:90:50:eb:9b:66 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/docker-flags-965000/disk.qcow2
	I0503 15:14:32.520209    9549 main.go:141] libmachine: STDOUT: 
	I0503 15:14:32.520224    9549 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0503 15:14:32.520244    9549 client.go:171] duration metric: took 334.325625ms to LocalClient.Create
	I0503 15:14:34.522367    9549 start.go:128] duration metric: took 2.363031708s to createHost
	I0503 15:14:34.522451    9549 start.go:83] releasing machines lock for "docker-flags-965000", held for 2.363167917s
	W0503 15:14:34.522504    9549 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0503 15:14:34.544227    9549 out.go:177] * Deleting "docker-flags-965000" in qemu2 ...
	W0503 15:14:34.564711    9549 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0503 15:14:34.564733    9549 start.go:728] Will try again in 5 seconds ...
	I0503 15:14:39.566814    9549 start.go:360] acquireMachinesLock for docker-flags-965000: {Name:mk75a6a65f97ac1ce21c567594b284ec6b0a9ff6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0503 15:14:39.662739    9549 start.go:364] duration metric: took 95.773916ms to acquireMachinesLock for "docker-flags-965000"
	I0503 15:14:39.662892    9549 start.go:93] Provisioning new machine with config: &{Name:docker-flags-965000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:docker-flags-965000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0503 15:14:39.663177    9549 start.go:125] createHost starting for "" (driver="qemu2")
	I0503 15:14:39.677842    9549 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0503 15:14:39.726649    9549 start.go:159] libmachine.API.Create for "docker-flags-965000" (driver="qemu2")
	I0503 15:14:39.726691    9549 client.go:168] LocalClient.Create starting
	I0503 15:14:39.726804    9549 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18793-7269/.minikube/certs/ca.pem
	I0503 15:14:39.726869    9549 main.go:141] libmachine: Decoding PEM data...
	I0503 15:14:39.726885    9549 main.go:141] libmachine: Parsing certificate...
	I0503 15:14:39.726946    9549 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18793-7269/.minikube/certs/cert.pem
	I0503 15:14:39.726988    9549 main.go:141] libmachine: Decoding PEM data...
	I0503 15:14:39.726999    9549 main.go:141] libmachine: Parsing certificate...
	I0503 15:14:39.727665    9549 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18793-7269/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18793-7269/.minikube/cache/iso/arm64/minikube-v1.33.0-1714498396-18779-arm64.iso...
	I0503 15:14:39.885446    9549 main.go:141] libmachine: Creating SSH key...
	I0503 15:14:40.049715    9549 main.go:141] libmachine: Creating Disk image...
	I0503 15:14:40.049722    9549 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0503 15:14:40.049925    9549 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/docker-flags-965000/disk.qcow2.raw /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/docker-flags-965000/disk.qcow2
	I0503 15:14:40.062843    9549 main.go:141] libmachine: STDOUT: 
	I0503 15:14:40.062864    9549 main.go:141] libmachine: STDERR: 
	I0503 15:14:40.062914    9549 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/docker-flags-965000/disk.qcow2 +20000M
	I0503 15:14:40.073765    9549 main.go:141] libmachine: STDOUT: Image resized.
	
	I0503 15:14:40.073786    9549 main.go:141] libmachine: STDERR: 
	I0503 15:14:40.073801    9549 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/docker-flags-965000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/docker-flags-965000/disk.qcow2
	I0503 15:14:40.073805    9549 main.go:141] libmachine: Starting QEMU VM...
	I0503 15:14:40.073841    9549 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/docker-flags-965000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18793-7269/.minikube/machines/docker-flags-965000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/docker-flags-965000/qemu.pid -device virtio-net-pci,netdev=net0,mac=5a:0b:ea:a2:82:15 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/docker-flags-965000/disk.qcow2
	I0503 15:14:40.075554    9549 main.go:141] libmachine: STDOUT: 
	I0503 15:14:40.075570    9549 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0503 15:14:40.075581    9549 client.go:171] duration metric: took 348.891542ms to LocalClient.Create
	I0503 15:14:42.076971    9549 start.go:128] duration metric: took 2.413804s to createHost
	I0503 15:14:42.077242    9549 start.go:83] releasing machines lock for "docker-flags-965000", held for 2.414519958s
	W0503 15:14:42.077519    9549 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p docker-flags-965000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p docker-flags-965000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0503 15:14:42.088007    9549 out.go:177] 
	W0503 15:14:42.093126    9549 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0503 15:14:42.093160    9549 out.go:239] * 
	* 
	W0503 15:14:42.095660    9549 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0503 15:14:42.105162    9549 out.go:177] 

** /stderr **
docker_test.go:53: failed to start minikube with args: "out/minikube-darwin-arm64 start -p docker-flags-965000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:56: (dbg) Run:  out/minikube-darwin-arm64 -p docker-flags-965000 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:56: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p docker-flags-965000 ssh "sudo systemctl show docker --property=Environment --no-pager": exit status 83 (79.152292ms)

-- stdout --
	* The control-plane node docker-flags-965000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p docker-flags-965000"

-- /stdout --
docker_test.go:58: failed to 'systemctl show docker' inside minikube. args "out/minikube-darwin-arm64 -p docker-flags-965000 ssh \"sudo systemctl show docker --property=Environment --no-pager\"": exit status 83
docker_test.go:63: expected env key/value "FOO=BAR" to be passed to minikube's docker and be included in: *"* The control-plane node docker-flags-965000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p docker-flags-965000\"\n"*.
docker_test.go:63: expected env key/value "BAZ=BAT" to be passed to minikube's docker and be included in: *"* The control-plane node docker-flags-965000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p docker-flags-965000\"\n"*.
docker_test.go:67: (dbg) Run:  out/minikube-darwin-arm64 -p docker-flags-965000 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
docker_test.go:67: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p docker-flags-965000 ssh "sudo systemctl show docker --property=ExecStart --no-pager": exit status 83 (45.644ms)

-- stdout --
	* The control-plane node docker-flags-965000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p docker-flags-965000"

-- /stdout --
docker_test.go:69: failed on the second 'systemctl show docker' inside minikube. args "out/minikube-darwin-arm64 -p docker-flags-965000 ssh \"sudo systemctl show docker --property=ExecStart --no-pager\"": exit status 83
docker_test.go:73: expected "out/minikube-darwin-arm64 -p docker-flags-965000 ssh \"sudo systemctl show docker --property=ExecStart --no-pager\"" output to have include *--debug* . output: "* The control-plane node docker-flags-965000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p docker-flags-965000\"\n"
panic.go:626: *** TestDockerFlags FAILED at 2024-05-03 15:14:42.248132 -0700 PDT m=+716.490327209
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p docker-flags-965000 -n docker-flags-965000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p docker-flags-965000 -n docker-flags-965000: exit status 7 (30.881042ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "docker-flags-965000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "docker-flags-965000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p docker-flags-965000
--- FAIL: TestDockerFlags (10.34s)
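
Every attempt above dies at the same step: the driver launches qemu-system-aarch64 through /opt/socket_vmnet/bin/socket_vmnet_client, and the client cannot reach the unix socket at /var/run/socket_vmnet ("Connection refused"), which means the socket_vmnet daemon is not listening on the build agent. A minimal triage sketch for the agent, assuming the Homebrew-managed socket_vmnet service that the qemu2 driver's docs describe (the socket path is taken from the log above; the service commands are ordinary Homebrew usage, not something this report confirms):

    # Does the socket exist, and is anything accepting connections on it?
    ls -l /var/run/socket_vmnet
    nc -U /var/run/socket_vmnet </dev/null   # "Connection refused" here reproduces the failure
    # Restart the daemon; it must run as root to create vmnet interfaces
    sudo brew services restart socket_vmnet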

TestForceSystemdFlag (10.28s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-darwin-arm64 start -p force-systemd-flag-743000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 
docker_test.go:91: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p force-systemd-flag-743000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (10.0626295s)

-- stdout --
	* [force-systemd-flag-743000] minikube v1.33.0 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18793
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18793-7269/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18793-7269/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "force-systemd-flag-743000" primary control-plane node in "force-systemd-flag-743000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "force-systemd-flag-743000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0503 15:14:27.000455    9527 out.go:291] Setting OutFile to fd 1 ...
	I0503 15:14:27.000568    9527 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0503 15:14:27.000571    9527 out.go:304] Setting ErrFile to fd 2...
	I0503 15:14:27.000573    9527 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0503 15:14:27.000711    9527 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18793-7269/.minikube/bin
	I0503 15:14:27.001790    9527 out.go:298] Setting JSON to false
	I0503 15:14:27.017761    9527 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4438,"bootTime":1714770029,"procs":494,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0503 15:14:27.017824    9527 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0503 15:14:27.023989    9527 out.go:177] * [force-systemd-flag-743000] minikube v1.33.0 on Darwin 14.4.1 (arm64)
	I0503 15:14:27.030843    9527 out.go:177]   - MINIKUBE_LOCATION=18793
	I0503 15:14:27.034948    9527 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18793-7269/kubeconfig
	I0503 15:14:27.030883    9527 notify.go:220] Checking for updates...
	I0503 15:14:27.041876    9527 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0503 15:14:27.044845    9527 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0503 15:14:27.047886    9527 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18793-7269/.minikube
	I0503 15:14:27.050939    9527 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0503 15:14:27.054326    9527 config.go:182] Loaded profile config "force-systemd-env-955000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0503 15:14:27.054411    9527 config.go:182] Loaded profile config "multinode-952000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0503 15:14:27.054463    9527 driver.go:392] Setting default libvirt URI to qemu:///system
	I0503 15:14:27.058874    9527 out.go:177] * Using the qemu2 driver based on user configuration
	I0503 15:14:27.064773    9527 start.go:297] selected driver: qemu2
	I0503 15:14:27.064779    9527 start.go:901] validating driver "qemu2" against <nil>
	I0503 15:14:27.064785    9527 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0503 15:14:27.067105    9527 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0503 15:14:27.070923    9527 out.go:177] * Automatically selected the socket_vmnet network
	I0503 15:14:27.073904    9527 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0503 15:14:27.073934    9527 cni.go:84] Creating CNI manager for ""
	I0503 15:14:27.073947    9527 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0503 15:14:27.073951    9527 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0503 15:14:27.073978    9527 start.go:340] cluster config:
	{Name:force-systemd-flag-743000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:force-systemd-flag-743000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0503 15:14:27.078520    9527 iso.go:125] acquiring lock: {Name:mk0ad6b95f1bb51c6112b561369aaae11c537b54 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0503 15:14:27.085874    9527 out.go:177] * Starting "force-systemd-flag-743000" primary control-plane node in "force-systemd-flag-743000" cluster
	I0503 15:14:27.089885    9527 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0503 15:14:27.089908    9527 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18793-7269/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-arm64.tar.lz4
	I0503 15:14:27.089921    9527 cache.go:56] Caching tarball of preloaded images
	I0503 15:14:27.089997    9527 preload.go:173] Found /Users/jenkins/minikube-integration/18793-7269/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0503 15:14:27.090004    9527 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
	I0503 15:14:27.090074    9527 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18793-7269/.minikube/profiles/force-systemd-flag-743000/config.json ...
	I0503 15:14:27.090089    9527 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18793-7269/.minikube/profiles/force-systemd-flag-743000/config.json: {Name:mk13ceb520c6cf83f97850ab038ceeb3dd75ac96 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0503 15:14:27.090326    9527 start.go:360] acquireMachinesLock for force-systemd-flag-743000: {Name:mk75a6a65f97ac1ce21c567594b284ec6b0a9ff6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0503 15:14:27.090364    9527 start.go:364] duration metric: took 29.542µs to acquireMachinesLock for "force-systemd-flag-743000"
	I0503 15:14:27.090378    9527 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-743000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:force-systemd-flag-743000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0503 15:14:27.090413    9527 start.go:125] createHost starting for "" (driver="qemu2")
	I0503 15:14:27.098903    9527 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0503 15:14:27.116655    9527 start.go:159] libmachine.API.Create for "force-systemd-flag-743000" (driver="qemu2")
	I0503 15:14:27.116682    9527 client.go:168] LocalClient.Create starting
	I0503 15:14:27.116753    9527 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18793-7269/.minikube/certs/ca.pem
	I0503 15:14:27.116788    9527 main.go:141] libmachine: Decoding PEM data...
	I0503 15:14:27.116798    9527 main.go:141] libmachine: Parsing certificate...
	I0503 15:14:27.116840    9527 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18793-7269/.minikube/certs/cert.pem
	I0503 15:14:27.116864    9527 main.go:141] libmachine: Decoding PEM data...
	I0503 15:14:27.116873    9527 main.go:141] libmachine: Parsing certificate...
	I0503 15:14:27.117214    9527 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18793-7269/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18793-7269/.minikube/cache/iso/arm64/minikube-v1.33.0-1714498396-18779-arm64.iso...
	I0503 15:14:27.263024    9527 main.go:141] libmachine: Creating SSH key...
	I0503 15:14:27.428607    9527 main.go:141] libmachine: Creating Disk image...
	I0503 15:14:27.428613    9527 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0503 15:14:27.428820    9527 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/force-systemd-flag-743000/disk.qcow2.raw /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/force-systemd-flag-743000/disk.qcow2
	I0503 15:14:27.441959    9527 main.go:141] libmachine: STDOUT: 
	I0503 15:14:27.441979    9527 main.go:141] libmachine: STDERR: 
	I0503 15:14:27.442046    9527 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/force-systemd-flag-743000/disk.qcow2 +20000M
	I0503 15:14:27.452912    9527 main.go:141] libmachine: STDOUT: Image resized.
	
	I0503 15:14:27.452944    9527 main.go:141] libmachine: STDERR: 
	I0503 15:14:27.452959    9527 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/force-systemd-flag-743000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/force-systemd-flag-743000/disk.qcow2
	I0503 15:14:27.452962    9527 main.go:141] libmachine: Starting QEMU VM...
	I0503 15:14:27.452993    9527 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/force-systemd-flag-743000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18793-7269/.minikube/machines/force-systemd-flag-743000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/force-systemd-flag-743000/qemu.pid -device virtio-net-pci,netdev=net0,mac=8e:3b:d7:ad:34:93 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/force-systemd-flag-743000/disk.qcow2
	I0503 15:14:27.454699    9527 main.go:141] libmachine: STDOUT: 
	I0503 15:14:27.454721    9527 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0503 15:14:27.454739    9527 client.go:171] duration metric: took 338.059792ms to LocalClient.Create
	I0503 15:14:29.456877    9527 start.go:128] duration metric: took 2.366500458s to createHost
	I0503 15:14:29.456905    9527 start.go:83] releasing machines lock for "force-systemd-flag-743000", held for 2.36658575s
	W0503 15:14:29.456960    9527 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0503 15:14:29.469143    9527 out.go:177] * Deleting "force-systemd-flag-743000" in qemu2 ...
	W0503 15:14:29.499841    9527 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0503 15:14:29.499880    9527 start.go:728] Will try again in 5 seconds ...
	I0503 15:14:34.501941    9527 start.go:360] acquireMachinesLock for force-systemd-flag-743000: {Name:mk75a6a65f97ac1ce21c567594b284ec6b0a9ff6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0503 15:14:34.522608    9527 start.go:364] duration metric: took 20.5315ms to acquireMachinesLock for "force-systemd-flag-743000"
	I0503 15:14:34.522716    9527 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-743000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:force-systemd-flag-743000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0503 15:14:34.522932    9527 start.go:125] createHost starting for "" (driver="qemu2")
	I0503 15:14:34.532227    9527 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0503 15:14:34.581015    9527 start.go:159] libmachine.API.Create for "force-systemd-flag-743000" (driver="qemu2")
	I0503 15:14:34.581060    9527 client.go:168] LocalClient.Create starting
	I0503 15:14:34.581185    9527 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18793-7269/.minikube/certs/ca.pem
	I0503 15:14:34.581243    9527 main.go:141] libmachine: Decoding PEM data...
	I0503 15:14:34.581259    9527 main.go:141] libmachine: Parsing certificate...
	I0503 15:14:34.581318    9527 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18793-7269/.minikube/certs/cert.pem
	I0503 15:14:34.581361    9527 main.go:141] libmachine: Decoding PEM data...
	I0503 15:14:34.581374    9527 main.go:141] libmachine: Parsing certificate...
	I0503 15:14:34.582101    9527 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18793-7269/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18793-7269/.minikube/cache/iso/arm64/minikube-v1.33.0-1714498396-18779-arm64.iso...
	I0503 15:14:34.739560    9527 main.go:141] libmachine: Creating SSH key...
	I0503 15:14:34.949167    9527 main.go:141] libmachine: Creating Disk image...
	I0503 15:14:34.949175    9527 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0503 15:14:34.949392    9527 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/force-systemd-flag-743000/disk.qcow2.raw /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/force-systemd-flag-743000/disk.qcow2
	I0503 15:14:34.962893    9527 main.go:141] libmachine: STDOUT: 
	I0503 15:14:34.962927    9527 main.go:141] libmachine: STDERR: 
	I0503 15:14:34.962981    9527 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/force-systemd-flag-743000/disk.qcow2 +20000M
	I0503 15:14:34.973901    9527 main.go:141] libmachine: STDOUT: Image resized.
	
	I0503 15:14:34.973919    9527 main.go:141] libmachine: STDERR: 
	I0503 15:14:34.973931    9527 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/force-systemd-flag-743000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/force-systemd-flag-743000/disk.qcow2
	I0503 15:14:34.973936    9527 main.go:141] libmachine: Starting QEMU VM...
	I0503 15:14:34.973968    9527 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/force-systemd-flag-743000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18793-7269/.minikube/machines/force-systemd-flag-743000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/force-systemd-flag-743000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ee:ff:02:5c:df:19 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/force-systemd-flag-743000/disk.qcow2
	I0503 15:14:34.975626    9527 main.go:141] libmachine: STDOUT: 
	I0503 15:14:34.975645    9527 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0503 15:14:34.975657    9527 client.go:171] duration metric: took 394.599708ms to LocalClient.Create
	I0503 15:14:36.977796    9527 start.go:128] duration metric: took 2.454866583s to createHost
	I0503 15:14:36.977853    9527 start.go:83] releasing machines lock for "force-systemd-flag-743000", held for 2.455275875s
	W0503 15:14:36.978223    9527 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p force-systemd-flag-743000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p force-systemd-flag-743000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0503 15:14:36.997673    9527 out.go:177] 
	W0503 15:14:37.001736    9527 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0503 15:14:37.001759    9527 out.go:239] * 
	* 
	W0503 15:14:37.004160    9527 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0503 15:14:37.017694    9527 out.go:177] 

** /stderr **
docker_test.go:93: failed to start minikube with args: "out/minikube-darwin-arm64 start -p force-systemd-flag-743000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:110: (dbg) Run:  out/minikube-darwin-arm64 -p force-systemd-flag-743000 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p force-systemd-flag-743000 ssh "docker info --format {{.CgroupDriver}}": exit status 83 (81.878042ms)

-- stdout --
	* The control-plane node force-systemd-flag-743000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p force-systemd-flag-743000"

-- /stdout --
docker_test.go:112: failed to get docker cgroup driver. args "out/minikube-darwin-arm64 -p force-systemd-flag-743000 ssh \"docker info --format {{.CgroupDriver}}\"": exit status 83
docker_test.go:106: *** TestForceSystemdFlag FAILED at 2024-05-03 15:14:37.118694 -0700 PDT m=+711.360771501
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-flag-743000 -n force-systemd-flag-743000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-flag-743000 -n force-systemd-flag-743000: exit status 7 (36.1285ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "force-systemd-flag-743000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "force-systemd-flag-743000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p force-systemd-flag-743000
--- FAIL: TestForceSystemdFlag (10.28s)
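
Note that the disk-image steps succeed on every attempt (both qemu-img convert and qemu-img resize +20000M return with empty STDERR); each run only fails when socket_vmnet_client tries to hand QEMU the vmnet file descriptor. To confirm that the QEMU installation itself is healthy independently of the vmnet daemon, a throwaway profile can be started on the driver's user-mode network, a sketch assuming the qemu2 driver's --network builtin option (the profile name is arbitrary; builtin networking gives the VM no host-reachable IP, so it isolates rather than fixes the socket_vmnet dependency):

    # user-mode networking needs no root daemon on the host
    out/minikube-darwin-arm64 start -p vmnet-triage --driver=qemu2 --network builtin
    out/minikube-darwin-arm64 delete -p vmnet-triage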

TestForceSystemdEnv (10.18s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-darwin-arm64 start -p force-systemd-env-955000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 
docker_test.go:155: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p force-systemd-env-955000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (9.952766s)

-- stdout --
	* [force-systemd-env-955000] minikube v1.33.0 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18793
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18793-7269/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18793-7269/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=true
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "force-systemd-env-955000" primary control-plane node in "force-systemd-env-955000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "force-systemd-env-955000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0503 15:14:21.895256    9495 out.go:291] Setting OutFile to fd 1 ...
	I0503 15:14:21.895395    9495 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0503 15:14:21.895399    9495 out.go:304] Setting ErrFile to fd 2...
	I0503 15:14:21.895405    9495 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0503 15:14:21.895526    9495 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18793-7269/.minikube/bin
	I0503 15:14:21.896628    9495 out.go:298] Setting JSON to false
	I0503 15:14:21.913910    9495 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4432,"bootTime":1714770029,"procs":493,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0503 15:14:21.913977    9495 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0503 15:14:21.926934    9495 out.go:177] * [force-systemd-env-955000] minikube v1.33.0 on Darwin 14.4.1 (arm64)
	I0503 15:14:21.938921    9495 out.go:177]   - MINIKUBE_LOCATION=18793
	I0503 15:14:21.935011    9495 notify.go:220] Checking for updates...
	I0503 15:14:21.951917    9495 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18793-7269/kubeconfig
	I0503 15:14:21.959971    9495 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0503 15:14:21.965939    9495 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0503 15:14:21.973235    9495 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18793-7269/.minikube
	I0503 15:14:21.975874    9495 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=true
	I0503 15:14:21.979278    9495 config.go:182] Loaded profile config "multinode-952000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0503 15:14:21.979328    9495 driver.go:392] Setting default libvirt URI to qemu:///system
	I0503 15:14:21.982996    9495 out.go:177] * Using the qemu2 driver based on user configuration
	I0503 15:14:21.989921    9495 start.go:297] selected driver: qemu2
	I0503 15:14:21.989928    9495 start.go:901] validating driver "qemu2" against <nil>
	I0503 15:14:21.989934    9495 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0503 15:14:21.992023    9495 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0503 15:14:21.995949    9495 out.go:177] * Automatically selected the socket_vmnet network
	I0503 15:14:21.998989    9495 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0503 15:14:21.999027    9495 cni.go:84] Creating CNI manager for ""
	I0503 15:14:21.999035    9495 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0503 15:14:21.999040    9495 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0503 15:14:21.999076    9495 start.go:340] cluster config:
	{Name:force-systemd-env-955000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:force-systemd-env-955000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0503 15:14:22.003239    9495 iso.go:125] acquiring lock: {Name:mk0ad6b95f1bb51c6112b561369aaae11c537b54 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0503 15:14:22.007165    9495 out.go:177] * Starting "force-systemd-env-955000" primary control-plane node in "force-systemd-env-955000" cluster
	I0503 15:14:22.010895    9495 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0503 15:14:22.010916    9495 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18793-7269/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-arm64.tar.lz4
	I0503 15:14:22.010922    9495 cache.go:56] Caching tarball of preloaded images
	I0503 15:14:22.010973    9495 preload.go:173] Found /Users/jenkins/minikube-integration/18793-7269/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0503 15:14:22.010977    9495 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
	I0503 15:14:22.011030    9495 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18793-7269/.minikube/profiles/force-systemd-env-955000/config.json ...
	I0503 15:14:22.011040    9495 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18793-7269/.minikube/profiles/force-systemd-env-955000/config.json: {Name:mkc7750955ced278c48f5f0c59c942e61a133d99 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0503 15:14:22.011409    9495 start.go:360] acquireMachinesLock for force-systemd-env-955000: {Name:mk75a6a65f97ac1ce21c567594b284ec6b0a9ff6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0503 15:14:22.011441    9495 start.go:364] duration metric: took 24.625µs to acquireMachinesLock for "force-systemd-env-955000"
	I0503 15:14:22.011451    9495 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-955000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:force-systemd-env-955000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0503 15:14:22.011473    9495 start.go:125] createHost starting for "" (driver="qemu2")
	I0503 15:14:22.018914    9495 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0503 15:14:22.034154    9495 start.go:159] libmachine.API.Create for "force-systemd-env-955000" (driver="qemu2")
	I0503 15:14:22.034183    9495 client.go:168] LocalClient.Create starting
	I0503 15:14:22.034248    9495 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18793-7269/.minikube/certs/ca.pem
	I0503 15:14:22.034278    9495 main.go:141] libmachine: Decoding PEM data...
	I0503 15:14:22.034290    9495 main.go:141] libmachine: Parsing certificate...
	I0503 15:14:22.034328    9495 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18793-7269/.minikube/certs/cert.pem
	I0503 15:14:22.034351    9495 main.go:141] libmachine: Decoding PEM data...
	I0503 15:14:22.034359    9495 main.go:141] libmachine: Parsing certificate...
	I0503 15:14:22.034794    9495 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18793-7269/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18793-7269/.minikube/cache/iso/arm64/minikube-v1.33.0-1714498396-18779-arm64.iso...
	I0503 15:14:22.177383    9495 main.go:141] libmachine: Creating SSH key...
	I0503 15:14:22.368382    9495 main.go:141] libmachine: Creating Disk image...
	I0503 15:14:22.368395    9495 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0503 15:14:22.368592    9495 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/force-systemd-env-955000/disk.qcow2.raw /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/force-systemd-env-955000/disk.qcow2
	I0503 15:14:22.381814    9495 main.go:141] libmachine: STDOUT: 
	I0503 15:14:22.381840    9495 main.go:141] libmachine: STDERR: 
	I0503 15:14:22.381901    9495 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/force-systemd-env-955000/disk.qcow2 +20000M
	I0503 15:14:22.393589    9495 main.go:141] libmachine: STDOUT: Image resized.
	
	I0503 15:14:22.393610    9495 main.go:141] libmachine: STDERR: 
	I0503 15:14:22.393646    9495 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/force-systemd-env-955000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/force-systemd-env-955000/disk.qcow2
	I0503 15:14:22.393653    9495 main.go:141] libmachine: Starting QEMU VM...
	I0503 15:14:22.393681    9495 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/force-systemd-env-955000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18793-7269/.minikube/machines/force-systemd-env-955000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/force-systemd-env-955000/qemu.pid -device virtio-net-pci,netdev=net0,mac=42:3b:d8:3d:de:c6 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/force-systemd-env-955000/disk.qcow2
	I0503 15:14:22.395492    9495 main.go:141] libmachine: STDOUT: 
	I0503 15:14:22.395509    9495 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0503 15:14:22.395528    9495 client.go:171] duration metric: took 361.347417ms to LocalClient.Create
	I0503 15:14:24.397863    9495 start.go:128] duration metric: took 2.386394458s to createHost
	I0503 15:14:24.397980    9495 start.go:83] releasing machines lock for "force-systemd-env-955000", held for 2.386584917s
	W0503 15:14:24.398041    9495 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0503 15:14:24.408244    9495 out.go:177] * Deleting "force-systemd-env-955000" in qemu2 ...
	W0503 15:14:24.436746    9495 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0503 15:14:24.436772    9495 start.go:728] Will try again in 5 seconds ...
	I0503 15:14:29.438829    9495 start.go:360] acquireMachinesLock for force-systemd-env-955000: {Name:mk75a6a65f97ac1ce21c567594b284ec6b0a9ff6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0503 15:14:29.457063    9495 start.go:364] duration metric: took 18.154958ms to acquireMachinesLock for "force-systemd-env-955000"
	I0503 15:14:29.457143    9495 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-955000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:force-systemd-env-955000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0503 15:14:29.457409    9495 start.go:125] createHost starting for "" (driver="qemu2")
	I0503 15:14:29.478455    9495 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0503 15:14:29.514403    9495 start.go:159] libmachine.API.Create for "force-systemd-env-955000" (driver="qemu2")
	I0503 15:14:29.514438    9495 client.go:168] LocalClient.Create starting
	I0503 15:14:29.514549    9495 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18793-7269/.minikube/certs/ca.pem
	I0503 15:14:29.514623    9495 main.go:141] libmachine: Decoding PEM data...
	I0503 15:14:29.514638    9495 main.go:141] libmachine: Parsing certificate...
	I0503 15:14:29.514692    9495 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18793-7269/.minikube/certs/cert.pem
	I0503 15:14:29.514731    9495 main.go:141] libmachine: Decoding PEM data...
	I0503 15:14:29.514744    9495 main.go:141] libmachine: Parsing certificate...
	I0503 15:14:29.515198    9495 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18793-7269/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18793-7269/.minikube/cache/iso/arm64/minikube-v1.33.0-1714498396-18779-arm64.iso...
	I0503 15:14:29.667542    9495 main.go:141] libmachine: Creating SSH key...
	I0503 15:14:29.735409    9495 main.go:141] libmachine: Creating Disk image...
	I0503 15:14:29.735417    9495 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0503 15:14:29.735582    9495 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/force-systemd-env-955000/disk.qcow2.raw /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/force-systemd-env-955000/disk.qcow2
	I0503 15:14:29.748008    9495 main.go:141] libmachine: STDOUT: 
	I0503 15:14:29.748040    9495 main.go:141] libmachine: STDERR: 
	I0503 15:14:29.748105    9495 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/force-systemd-env-955000/disk.qcow2 +20000M
	I0503 15:14:29.758961    9495 main.go:141] libmachine: STDOUT: Image resized.
	
	I0503 15:14:29.758977    9495 main.go:141] libmachine: STDERR: 
	I0503 15:14:29.758998    9495 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/force-systemd-env-955000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/force-systemd-env-955000/disk.qcow2
	I0503 15:14:29.759002    9495 main.go:141] libmachine: Starting QEMU VM...
	I0503 15:14:29.759043    9495 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/force-systemd-env-955000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18793-7269/.minikube/machines/force-systemd-env-955000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/force-systemd-env-955000/qemu.pid -device virtio-net-pci,netdev=net0,mac=2e:23:12:4f:ca:18 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/force-systemd-env-955000/disk.qcow2
	I0503 15:14:29.760696    9495 main.go:141] libmachine: STDOUT: 
	I0503 15:14:29.760710    9495 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0503 15:14:29.760726    9495 client.go:171] duration metric: took 246.286833ms to LocalClient.Create
	I0503 15:14:31.762856    9495 start.go:128] duration metric: took 2.305472542s to createHost
	I0503 15:14:31.762913    9495 start.go:83] releasing machines lock for "force-systemd-env-955000", held for 2.30587575s
	W0503 15:14:31.763295    9495 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p force-systemd-env-955000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0503 15:14:31.775985    9495 out.go:177] 
	W0503 15:14:31.786219    9495 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0503 15:14:31.786250    9495 out.go:239] * 
	W0503 15:14:31.788652    9495 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0503 15:14:31.798875    9495 out.go:177] 

** /stderr **
docker_test.go:157: failed to start minikube with args: "out/minikube-darwin-arm64 start -p force-systemd-env-955000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:110: (dbg) Run:  out/minikube-darwin-arm64 -p force-systemd-env-955000 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p force-systemd-env-955000 ssh "docker info --format {{.CgroupDriver}}": exit status 83 (81.078625ms)

-- stdout --
	* The control-plane node force-systemd-env-955000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p force-systemd-env-955000"

-- /stdout --
docker_test.go:112: failed to get docker cgroup driver. args "out/minikube-darwin-arm64 -p force-systemd-env-955000 ssh \"docker info --format {{.CgroupDriver}}\"": exit status 83
docker_test.go:166: *** TestForceSystemdEnv FAILED at 2024-05-03 15:14:31.90015 -0700 PDT m=+706.142108084
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-env-955000 -n force-systemd-env-955000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-env-955000 -n force-systemd-env-955000: exit status 7 (35.713916ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "force-systemd-env-955000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "force-systemd-env-955000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p force-systemd-env-955000
--- FAIL: TestForceSystemdEnv (10.18s)
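
Every failure in this block reduces to the same host-side condition: nothing is listening on /var/run/socket_vmnet, so socket_vmnet_client cannot hand QEMU a network file descriptor (the "-netdev socket,id=net0,fd=3" in the command lines above) and the qemu2 driver aborts with "Connection refused". A minimal Go sketch of the check (standalone diagnostic, not part of the suite; the socket path is the one quoted in the logs above):

package main

import (
	"fmt"
	"net"
	"os"
	"time"
)

func main() {
	// Probe the unix socket the qemu2 driver depends on. A dial error of
	// "connection refused" matches the failure mode seen throughout this run.
	const sock = "/var/run/socket_vmnet"
	conn, err := net.DialTimeout("unix", sock, 2*time.Second)
	if err != nil {
		fmt.Fprintf(os.Stderr, "cannot reach %s: %v (is socket_vmnet running?)\n", sock, err)
		os.Exit(1)
	}
	conn.Close()
	fmt.Printf("%s is accepting connections\n", sock)
}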

TestErrorSpam/setup (9.91s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -p nospam-618000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-618000 --driver=qemu2 
error_spam_test.go:81: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p nospam-618000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-618000 --driver=qemu2 : exit status 80 (9.905513666s)

-- stdout --
	* [nospam-618000] minikube v1.33.0 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18793
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18793-7269/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18793-7269/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "nospam-618000" primary control-plane node in "nospam-618000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2250MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "nospam-618000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2250MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p nospam-618000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
error_spam_test.go:83: "out/minikube-darwin-arm64 start -p nospam-618000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-618000 --driver=qemu2 " failed: exit status 80
error_spam_test.go:96: unexpected stderr: "! StartHost failed, but will try again: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"
error_spam_test.go:96: unexpected stderr: "* Failed to start qemu2 VM. Running \"minikube delete -p nospam-618000\" may fix it: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"
error_spam_test.go:96: unexpected stderr: "X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"
error_spam_test.go:96: unexpected stderr: "* "
error_spam_test.go:96: unexpected stderr: "╭─────────────────────────────────────────────────────────────────────────────────────────────╮"
error_spam_test.go:96: unexpected stderr: "│                                                                                             │"
error_spam_test.go:96: unexpected stderr: "│    * If the above advice does not help, please let us know:                                 │"
error_spam_test.go:96: unexpected stderr: "│      https://github.com/kubernetes/minikube/issues/new/choose                               │"
error_spam_test.go:96: unexpected stderr: "│                                                                                             │"
error_spam_test.go:96: unexpected stderr: "│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │"
error_spam_test.go:96: unexpected stderr: "│                                                                                             │"
error_spam_test.go:96: unexpected stderr: "╰─────────────────────────────────────────────────────────────────────────────────────────────╯"
error_spam_test.go:110: minikube stdout:
* [nospam-618000] minikube v1.33.0 on Darwin 14.4.1 (arm64)
- MINIKUBE_LOCATION=18793
- KUBECONFIG=/Users/jenkins/minikube-integration/18793-7269/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_HOME=/Users/jenkins/minikube-integration/18793-7269/.minikube
- MINIKUBE_FORCE_SYSTEMD=
* Using the qemu2 driver based on user configuration
* Automatically selected the socket_vmnet network
* Starting "nospam-618000" primary control-plane node in "nospam-618000" cluster
* Creating qemu2 VM (CPUs=2, Memory=2250MB, Disk=20000MB) ...
OUTPUT: 
ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused

* Deleting "nospam-618000" in qemu2 ...
* Creating qemu2 VM (CPUs=2, Memory=2250MB, Disk=20000MB) ...
OUTPUT: 
ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused

error_spam_test.go:111: minikube stderr:
! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
* Failed to start qemu2 VM. Running "minikube delete -p nospam-618000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
* 
╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                             │
│    * If the above advice does not help, please let us know:                                 │
│      https://github.com/kubernetes/minikube/issues/new/choose                               │
│                                                                                             │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
│                                                                                             │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
error_spam_test.go:121: missing kubeadm init sub-step "Generating certificates and keys ..."
error_spam_test.go:121: missing kubeadm init sub-step "Booting up control plane ..."
error_spam_test.go:121: missing kubeadm init sub-step "Configuring RBAC rules ..."
--- FAIL: TestErrorSpam/setup (9.91s)
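
The setup failure above is the generic socket_vmnet one; the error_spam_test.go:96 entries show what this test actually asserts: every stderr line must match an expected set. A hedged sketch of that shape of check (the command and allowlist here are illustrative, not the suite's actual values):

package main

import (
	"bytes"
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	cmd := exec.Command("out/minikube-darwin-arm64", "start", "-p", "nospam-618000", "--driver=qemu2")
	var stderr bytes.Buffer
	cmd.Stderr = &stderr
	_ = cmd.Run() // the real test also checks the exit status separately

	// Flag any stderr line outside an allowlist, mirroring the
	// "unexpected stderr" entries reported above.
	allowed := map[string]bool{"": true} // illustrative allowlist: blank lines only
	for _, line := range strings.Split(stderr.String(), "\n") {
		if !allowed[strings.TrimSpace(line)] {
			fmt.Printf("unexpected stderr: %q\n", line)
		}
	}
}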

TestFunctional/serial/StartWithProxy (10.01s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-353000 --memory=4000 --apiserver-port=8441 --wait=all --driver=qemu2 
functional_test.go:2230: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-353000 --memory=4000 --apiserver-port=8441 --wait=all --driver=qemu2 : exit status 80 (9.930038084s)

-- stdout --
	* [functional-353000] minikube v1.33.0 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18793
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18793-7269/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18793-7269/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "functional-353000" primary control-plane node in "functional-353000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "functional-353000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! Local proxy ignored: not passing HTTP_PROXY=localhost:50991 to docker env.
	! Local proxy ignored: not passing HTTP_PROXY=localhost:50991 to docker env.
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! Local proxy ignored: not passing HTTP_PROXY=localhost:50991 to docker env.
	* Failed to start qemu2 VM. Running "minikube delete -p functional-353000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2232: failed minikube start. args "out/minikube-darwin-arm64 start -p functional-353000 --memory=4000 --apiserver-port=8441 --wait=all --driver=qemu2 ": exit status 80
functional_test.go:2237: start stdout=* [functional-353000] minikube v1.33.0 on Darwin 14.4.1 (arm64)
- MINIKUBE_LOCATION=18793
- KUBECONFIG=/Users/jenkins/minikube-integration/18793-7269/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_HOME=/Users/jenkins/minikube-integration/18793-7269/.minikube
- MINIKUBE_FORCE_SYSTEMD=
* Using the qemu2 driver based on user configuration
* Automatically selected the socket_vmnet network
* Starting "functional-353000" primary control-plane node in "functional-353000" cluster
* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
OUTPUT: 
ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused

* Deleting "functional-353000" in qemu2 ...
* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
OUTPUT: 
ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused

, want: *Found network options:*
functional_test.go:2242: start stderr=! Local proxy ignored: not passing HTTP_PROXY=localhost:50991 to docker env.
! Local proxy ignored: not passing HTTP_PROXY=localhost:50991 to docker env.
! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
! Local proxy ignored: not passing HTTP_PROXY=localhost:50991 to docker env.
* Failed to start qemu2 VM. Running "minikube delete -p functional-353000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
* 
╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                             │
│    * If the above advice does not help, please let us know:                                 │
│      https://github.com/kubernetes/minikube/issues/new/choose                               │
│                                                                                             │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
│                                                                                             │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
, want: *You appear to be using a proxy*
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-353000 -n functional-353000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-353000 -n functional-353000: exit status 7 (76.176875ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-353000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/StartWithProxy (10.01s)
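
StartWithProxy never reaches its real assertions ("Found network options", "You appear to be using a proxy") because the VM never boots; the proxy side of the test only requires HTTP_PROXY in the child environment. A sketch of that mechanic (using the localhost port quoted in the stderr above; the command invocation is illustrative):

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-darwin-arm64", "start", "-p", "functional-353000", "--driver=qemu2")
	// Inherit the parent environment, then add the local proxy that
	// minikube is expected to detect and warn about.
	cmd.Env = append(os.Environ(), "HTTP_PROXY=localhost:50991")
	out, err := cmd.CombinedOutput()
	fmt.Printf("err=%v\n%s", err, out)
}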

TestFunctional/serial/SoftStart (5.26s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-353000 --alsologtostderr -v=8
functional_test.go:655: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-353000 --alsologtostderr -v=8: exit status 80 (5.188062792s)

-- stdout --
	* [functional-353000] minikube v1.33.0 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18793
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18793-7269/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18793-7269/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "functional-353000" primary control-plane node in "functional-353000" cluster
	* Restarting existing qemu2 VM for "functional-353000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "functional-353000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0503 15:03:47.528111    8033 out.go:291] Setting OutFile to fd 1 ...
	I0503 15:03:47.528232    8033 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0503 15:03:47.528235    8033 out.go:304] Setting ErrFile to fd 2...
	I0503 15:03:47.528237    8033 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0503 15:03:47.528364    8033 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18793-7269/.minikube/bin
	I0503 15:03:47.529338    8033 out.go:298] Setting JSON to false
	I0503 15:03:47.545477    8033 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3798,"bootTime":1714770029,"procs":488,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0503 15:03:47.545538    8033 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0503 15:03:47.550919    8033 out.go:177] * [functional-353000] minikube v1.33.0 on Darwin 14.4.1 (arm64)
	I0503 15:03:47.557843    8033 out.go:177]   - MINIKUBE_LOCATION=18793
	I0503 15:03:47.561804    8033 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18793-7269/kubeconfig
	I0503 15:03:47.557901    8033 notify.go:220] Checking for updates...
	I0503 15:03:47.567731    8033 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0503 15:03:47.570864    8033 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0503 15:03:47.573706    8033 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18793-7269/.minikube
	I0503 15:03:47.576784    8033 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0503 15:03:47.580050    8033 config.go:182] Loaded profile config "functional-353000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0503 15:03:47.580109    8033 driver.go:392] Setting default libvirt URI to qemu:///system
	I0503 15:03:47.583750    8033 out.go:177] * Using the qemu2 driver based on existing profile
	I0503 15:03:47.590752    8033 start.go:297] selected driver: qemu2
	I0503 15:03:47.590759    8033 start.go:901] validating driver "qemu2" against &{Name:functional-353000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:functional-353000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0503 15:03:47.590811    8033 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0503 15:03:47.593161    8033 cni.go:84] Creating CNI manager for ""
	I0503 15:03:47.593179    8033 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0503 15:03:47.593226    8033 start.go:340] cluster config:
	{Name:functional-353000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:functional-353000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0503 15:03:47.597471    8033 iso.go:125] acquiring lock: {Name:mk0ad6b95f1bb51c6112b561369aaae11c537b54 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0503 15:03:47.604743    8033 out.go:177] * Starting "functional-353000" primary control-plane node in "functional-353000" cluster
	I0503 15:03:47.608788    8033 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0503 15:03:47.608801    8033 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18793-7269/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-arm64.tar.lz4
	I0503 15:03:47.608809    8033 cache.go:56] Caching tarball of preloaded images
	I0503 15:03:47.608861    8033 preload.go:173] Found /Users/jenkins/minikube-integration/18793-7269/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0503 15:03:47.608866    8033 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
	I0503 15:03:47.608911    8033 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18793-7269/.minikube/profiles/functional-353000/config.json ...
	I0503 15:03:47.609373    8033 start.go:360] acquireMachinesLock for functional-353000: {Name:mk75a6a65f97ac1ce21c567594b284ec6b0a9ff6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0503 15:03:47.609400    8033 start.go:364] duration metric: took 21.459µs to acquireMachinesLock for "functional-353000"
	I0503 15:03:47.609409    8033 start.go:96] Skipping create...Using existing machine configuration
	I0503 15:03:47.609415    8033 fix.go:54] fixHost starting: 
	I0503 15:03:47.609524    8033 fix.go:112] recreateIfNeeded on functional-353000: state=Stopped err=<nil>
	W0503 15:03:47.609532    8033 fix.go:138] unexpected machine state, will restart: <nil>
	I0503 15:03:47.613771    8033 out.go:177] * Restarting existing qemu2 VM for "functional-353000" ...
	I0503 15:03:47.621746    8033 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/functional-353000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18793-7269/.minikube/machines/functional-353000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/functional-353000/qemu.pid -device virtio-net-pci,netdev=net0,mac=c2:7b:c3:60:93:42 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/functional-353000/disk.qcow2
	I0503 15:03:47.623808    8033 main.go:141] libmachine: STDOUT: 
	I0503 15:03:47.623827    8033 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0503 15:03:47.623853    8033 fix.go:56] duration metric: took 14.438ms for fixHost
	I0503 15:03:47.623858    8033 start.go:83] releasing machines lock for "functional-353000", held for 14.4545ms
	W0503 15:03:47.623864    8033 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0503 15:03:47.623902    8033 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0503 15:03:47.623906    8033 start.go:728] Will try again in 5 seconds ...
	I0503 15:03:52.626007    8033 start.go:360] acquireMachinesLock for functional-353000: {Name:mk75a6a65f97ac1ce21c567594b284ec6b0a9ff6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0503 15:03:52.626336    8033 start.go:364] duration metric: took 264.917µs to acquireMachinesLock for "functional-353000"
	I0503 15:03:52.626484    8033 start.go:96] Skipping create...Using existing machine configuration
	I0503 15:03:52.626536    8033 fix.go:54] fixHost starting: 
	I0503 15:03:52.627218    8033 fix.go:112] recreateIfNeeded on functional-353000: state=Stopped err=<nil>
	W0503 15:03:52.627244    8033 fix.go:138] unexpected machine state, will restart: <nil>
	I0503 15:03:52.635487    8033 out.go:177] * Restarting existing qemu2 VM for "functional-353000" ...
	I0503 15:03:52.639607    8033 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/functional-353000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18793-7269/.minikube/machines/functional-353000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/functional-353000/qemu.pid -device virtio-net-pci,netdev=net0,mac=c2:7b:c3:60:93:42 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/functional-353000/disk.qcow2
	I0503 15:03:52.648329    8033 main.go:141] libmachine: STDOUT: 
	I0503 15:03:52.648383    8033 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0503 15:03:52.648442    8033 fix.go:56] duration metric: took 21.930625ms for fixHost
	I0503 15:03:52.648463    8033 start.go:83] releasing machines lock for "functional-353000", held for 22.104458ms
	W0503 15:03:52.648648    8033 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p functional-353000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0503 15:03:52.655530    8033 out.go:177] 
	W0503 15:03:52.659607    8033 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0503 15:03:52.659632    8033 out.go:239] * 
	W0503 15:03:52.662405    8033 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0503 15:03:52.669526    8033 out.go:177] 

** /stderr **
functional_test.go:657: failed to soft start minikube. args "out/minikube-darwin-arm64 start -p functional-353000 --alsologtostderr -v=8": exit status 80
functional_test.go:659: soft start took 5.18971425s for "functional-353000" cluster.
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-353000 -n functional-353000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-353000 -n functional-353000: exit status 7 (69.922959ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-353000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/SoftStart (5.26s)

TestFunctional/serial/KubeContext (0.06s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
functional_test.go:677: (dbg) Non-zero exit: kubectl config current-context: exit status 1 (31.464542ms)

** stderr ** 
	error: current-context is not set

** /stderr **
functional_test.go:679: failed to get current-context. args "kubectl config current-context" : exit status 1
functional_test.go:683: expected current-context = "functional-353000", but got *""*
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-353000 -n functional-353000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-353000 -n functional-353000: exit status 7 (32.495625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-353000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/KubeContext (0.06s)
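
KubeContext fails as a knock-on effect: the earlier start never completed, so minikube never wrote a context into the kubeconfig, and "kubectl config current-context" exits 1 with "current-context is not set". The check itself is trivial to reproduce (sketch, assuming kubectl on PATH):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("kubectl", "config", "current-context").Output()
	if err != nil {
		// Exit status 1 here corresponds to an unset current-context,
		// as in the failure above.
		fmt.Println("no current context:", err)
		return
	}
	fmt.Println("current context:", strings.TrimSpace(string(out)))
}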

TestFunctional/serial/KubectlGetPods (0.06s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-353000 get po -A
functional_test.go:692: (dbg) Non-zero exit: kubectl --context functional-353000 get po -A: exit status 1 (26.418834ms)

** stderr ** 
	Error in configuration: context was not found for specified context: functional-353000

** /stderr **
functional_test.go:694: failed to get kubectl pods: args "kubectl --context functional-353000 get po -A" : exit status 1
functional_test.go:698: expected stderr to be empty but got *"Error in configuration: context was not found for specified context: functional-353000\n"*: args "kubectl --context functional-353000 get po -A"
functional_test.go:701: expected stdout to include *kube-system* but got *""*. args: "kubectl --context functional-353000 get po -A"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-353000 -n functional-353000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-353000 -n functional-353000: exit status 7 (32.773292ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-353000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/KubectlGetPods (0.06s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.04s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-darwin-arm64 -p functional-353000 ssh sudo crictl images
functional_test.go:1120: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-353000 ssh sudo crictl images: exit status 83 (43.934834ms)

-- stdout --
	* The control-plane node functional-353000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-353000"

-- /stdout --
functional_test.go:1122: failed to get images by "out/minikube-darwin-arm64 -p functional-353000 ssh sudo crictl images" ssh exit status 83
functional_test.go:1126: expected sha for pause:3.3 "3d18732f8686c" to be in the output but got *
-- stdout --
	* The control-plane node functional-353000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-353000"

-- /stdout --*
--- FAIL: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.04s)

TestFunctional/serial/CacheCmd/cache/cache_reload (0.17s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-darwin-arm64 -p functional-353000 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1143: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-353000 ssh sudo docker rmi registry.k8s.io/pause:latest: exit status 83 (42.848791ms)

-- stdout --
	* The control-plane node functional-353000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-353000"

-- /stdout --
functional_test.go:1146: failed to manually delete image "out/minikube-darwin-arm64 -p functional-353000 ssh sudo docker rmi registry.k8s.io/pause:latest" : exit status 83
functional_test.go:1149: (dbg) Run:  out/minikube-darwin-arm64 -p functional-353000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-353000 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 83 (43.806584ms)

-- stdout --
	* The control-plane node functional-353000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-353000"

-- /stdout --
functional_test.go:1154: (dbg) Run:  out/minikube-darwin-arm64 -p functional-353000 cache reload
functional_test.go:1159: (dbg) Run:  out/minikube-darwin-arm64 -p functional-353000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-353000 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 83 (42.053625ms)

-- stdout --
	* The control-plane node functional-353000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-353000"

-- /stdout --
functional_test.go:1161: expected "out/minikube-darwin-arm64 -p functional-353000 ssh sudo crictl inspecti registry.k8s.io/pause:latest" to run successfully but got error: exit status 83
--- FAIL: TestFunctional/serial/CacheCmd/cache/cache_reload (0.17s)
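
Note the distinct exit codes in this block: 80 (GUEST_PROVISION) from the failed start earlier versus 83 from commands refused because the host is stopped. A sketch of how a harness can pull the code out of a failed exec (hypothetical wrapper, not helpers_test.go itself):

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	err := exec.Command("out/minikube-darwin-arm64", "-p", "functional-353000",
		"ssh", "sudo", "crictl", "inspecti", "registry.k8s.io/pause:latest").Run()
	// exec.ExitError carries the child's exit status when the command ran
	// but returned non-zero; other error types mean it never started.
	var ee *exec.ExitError
	if errors.As(err, &ee) {
		fmt.Println("exit code:", ee.ExitCode()) // 83 in the run above
	}
}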

TestFunctional/serial/MinikubeKubectlCmd (0.64s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-darwin-arm64 -p functional-353000 kubectl -- --context functional-353000 get pods
functional_test.go:712: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-353000 kubectl -- --context functional-353000 get pods: exit status 1 (604.641042ms)

** stderr ** 
	Error in configuration: 
	* context was not found for specified context: functional-353000
	* no server found for cluster "functional-353000"

** /stderr **
functional_test.go:715: failed to get pods. args "out/minikube-darwin-arm64 -p functional-353000 kubectl -- --context functional-353000 get pods": exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-353000 -n functional-353000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-353000 -n functional-353000: exit status 7 (32.941834ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-353000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/MinikubeKubectlCmd (0.64s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.96s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-353000 get pods
functional_test.go:737: (dbg) Non-zero exit: out/kubectl --context functional-353000 get pods: exit status 1 (922.976708ms)

** stderr ** 
	Error in configuration: 
	* context was not found for specified context: functional-353000
	* no server found for cluster "functional-353000"

** /stderr **
functional_test.go:740: failed to run kubectl directly. args "out/kubectl --context functional-353000 get pods": exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-353000 -n functional-353000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-353000 -n functional-353000: exit status 7 (32.050708ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-353000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.96s)
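Note: both kubectl invocations fail for the same underlying reason: the cluster never came up, so no "functional-353000" context was ever written to the kubeconfig. A quick manual check against the kubeconfig this run uses (the KUBECONFIG path is the one printed in the start output later in this report; kubectl config get-contexts is a standard kubectl subcommand):

	KUBECONFIG=/Users/jenkins/minikube-integration/18793-7269/kubeconfig kubectl config get-contexts
	# "functional-353000" should be absent, matching the "context was not found" error above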

TestFunctional/serial/ExtraConfig (5.28s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-353000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:753: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-353000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: exit status 80 (5.20666075s)

-- stdout --
	* [functional-353000] minikube v1.33.0 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18793
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18793-7269/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18793-7269/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "functional-353000" primary control-plane node in "functional-353000" cluster
	* Restarting existing qemu2 VM for "functional-353000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "functional-353000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p functional-353000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:755: failed to restart minikube. args "out/minikube-darwin-arm64 start -p functional-353000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all": exit status 80
functional_test.go:757: restart took 5.207219583s for "functional-353000" cluster.
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-353000 -n functional-353000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-353000 -n functional-353000: exit status 7 (70.687542ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-353000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/ExtraConfig (5.28s)
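Note: this restart fails before any Kubernetes configuration is applied. Each qemu launch is brokered through /opt/socket_vmnet/bin/socket_vmnet_client (see the libmachine lines in the log below), and the client cannot reach the daemon socket at /var/run/socket_vmnet, hence "Connection refused" on both attempts. A plausible spot-check on the CI host (the socket path is taken from the log; treating socket_vmnet as a launchd-managed service is an assumption about this host's setup, not something the report states):

	ls -l /var/run/socket_vmnet                 # the daemon's unix socket should exist here
	sudo launchctl list | grep -i socket_vmnet  # assumed launchd service; confirms the daemon is loaded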

TestFunctional/serial/ComponentHealth (0.06s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-353000 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:806: (dbg) Non-zero exit: kubectl --context functional-353000 get po -l tier=control-plane -n kube-system -o=json: exit status 1 (29.016375ms)

** stderr ** 
	error: context "functional-353000" does not exist

** /stderr **
functional_test.go:808: failed to get components. args "kubectl --context functional-353000 get po -l tier=control-plane -n kube-system -o=json": exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-353000 -n functional-353000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-353000 -n functional-353000: exit status 7 (32.324291ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-353000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/ComponentHealth (0.06s)

TestFunctional/serial/LogsCmd (0.08s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-darwin-arm64 -p functional-353000 logs
functional_test.go:1232: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-353000 logs: exit status 83 (78.869375ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                   Args                                   |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only                                                  | download-only-988000 | jenkins | v1.33.0 | 03 May 24 15:02 PDT |                     |
	|         | -p download-only-988000                                                  |                      |         |         |                     |                     |
	|         | --force --alsologtostderr                                                |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                                             |                      |         |         |                     |                     |
	|         | --container-runtime=docker                                               |                      |         |         |                     |                     |
	|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
	| delete  | --all                                                                    | minikube             | jenkins | v1.33.0 | 03 May 24 15:02 PDT | 03 May 24 15:02 PDT |
	| delete  | -p download-only-988000                                                  | download-only-988000 | jenkins | v1.33.0 | 03 May 24 15:02 PDT | 03 May 24 15:02 PDT |
	| start   | -o=json --download-only                                                  | download-only-819000 | jenkins | v1.33.0 | 03 May 24 15:02 PDT |                     |
	|         | -p download-only-819000                                                  |                      |         |         |                     |                     |
	|         | --force --alsologtostderr                                                |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.0                                             |                      |         |         |                     |                     |
	|         | --container-runtime=docker                                               |                      |         |         |                     |                     |
	|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
	| delete  | --all                                                                    | minikube             | jenkins | v1.33.0 | 03 May 24 15:03 PDT | 03 May 24 15:03 PDT |
	| delete  | -p download-only-819000                                                  | download-only-819000 | jenkins | v1.33.0 | 03 May 24 15:03 PDT | 03 May 24 15:03 PDT |
	| delete  | -p download-only-988000                                                  | download-only-988000 | jenkins | v1.33.0 | 03 May 24 15:03 PDT | 03 May 24 15:03 PDT |
	| delete  | -p download-only-819000                                                  | download-only-819000 | jenkins | v1.33.0 | 03 May 24 15:03 PDT | 03 May 24 15:03 PDT |
	| start   | --download-only -p                                                       | binary-mirror-919000 | jenkins | v1.33.0 | 03 May 24 15:03 PDT |                     |
	|         | binary-mirror-919000                                                     |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                        |                      |         |         |                     |                     |
	|         | --binary-mirror                                                          |                      |         |         |                     |                     |
	|         | http://127.0.0.1:50954                                                   |                      |         |         |                     |                     |
	|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-919000                                                  | binary-mirror-919000 | jenkins | v1.33.0 | 03 May 24 15:03 PDT | 03 May 24 15:03 PDT |
	| addons  | enable dashboard -p                                                      | addons-379000        | jenkins | v1.33.0 | 03 May 24 15:03 PDT |                     |
	|         | addons-379000                                                            |                      |         |         |                     |                     |
	| addons  | disable dashboard -p                                                     | addons-379000        | jenkins | v1.33.0 | 03 May 24 15:03 PDT |                     |
	|         | addons-379000                                                            |                      |         |         |                     |                     |
	| start   | -p addons-379000 --wait=true                                             | addons-379000        | jenkins | v1.33.0 | 03 May 24 15:03 PDT |                     |
	|         | --memory=4000 --alsologtostderr                                          |                      |         |         |                     |                     |
	|         | --addons=registry                                                        |                      |         |         |                     |                     |
	|         | --addons=metrics-server                                                  |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                 |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                             |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                                                        |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                   |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                     |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                            |                      |         |         |                     |                     |
	|         | --addons=yakd --driver=qemu2                                             |                      |         |         |                     |                     |
	|         |  --addons=ingress                                                        |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                                                     |                      |         |         |                     |                     |
	| delete  | -p addons-379000                                                         | addons-379000        | jenkins | v1.33.0 | 03 May 24 15:03 PDT | 03 May 24 15:03 PDT |
	| start   | -p nospam-618000 -n=1 --memory=2250 --wait=false                         | nospam-618000        | jenkins | v1.33.0 | 03 May 24 15:03 PDT |                     |
	|         | --log_dir=/var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-618000 |                      |         |         |                     |                     |
	|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
	| start   | nospam-618000 --log_dir                                                  | nospam-618000        | jenkins | v1.33.0 | 03 May 24 15:03 PDT |                     |
	|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-618000           |                      |         |         |                     |                     |
	|         | start --dry-run                                                          |                      |         |         |                     |                     |
	| start   | nospam-618000 --log_dir                                                  | nospam-618000        | jenkins | v1.33.0 | 03 May 24 15:03 PDT |                     |
	|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-618000           |                      |         |         |                     |                     |
	|         | start --dry-run                                                          |                      |         |         |                     |                     |
	| start   | nospam-618000 --log_dir                                                  | nospam-618000        | jenkins | v1.33.0 | 03 May 24 15:03 PDT |                     |
	|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-618000           |                      |         |         |                     |                     |
	|         | start --dry-run                                                          |                      |         |         |                     |                     |
	| pause   | nospam-618000 --log_dir                                                  | nospam-618000        | jenkins | v1.33.0 | 03 May 24 15:03 PDT |                     |
	|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-618000           |                      |         |         |                     |                     |
	|         | pause                                                                    |                      |         |         |                     |                     |
	| pause   | nospam-618000 --log_dir                                                  | nospam-618000        | jenkins | v1.33.0 | 03 May 24 15:03 PDT |                     |
	|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-618000           |                      |         |         |                     |                     |
	|         | pause                                                                    |                      |         |         |                     |                     |
	| pause   | nospam-618000 --log_dir                                                  | nospam-618000        | jenkins | v1.33.0 | 03 May 24 15:03 PDT |                     |
	|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-618000           |                      |         |         |                     |                     |
	|         | pause                                                                    |                      |         |         |                     |                     |
	| unpause | nospam-618000 --log_dir                                                  | nospam-618000        | jenkins | v1.33.0 | 03 May 24 15:03 PDT |                     |
	|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-618000           |                      |         |         |                     |                     |
	|         | unpause                                                                  |                      |         |         |                     |                     |
	| unpause | nospam-618000 --log_dir                                                  | nospam-618000        | jenkins | v1.33.0 | 03 May 24 15:03 PDT |                     |
	|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-618000           |                      |         |         |                     |                     |
	|         | unpause                                                                  |                      |         |         |                     |                     |
	| unpause | nospam-618000 --log_dir                                                  | nospam-618000        | jenkins | v1.33.0 | 03 May 24 15:03 PDT |                     |
	|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-618000           |                      |         |         |                     |                     |
	|         | unpause                                                                  |                      |         |         |                     |                     |
	| stop    | nospam-618000 --log_dir                                                  | nospam-618000        | jenkins | v1.33.0 | 03 May 24 15:03 PDT | 03 May 24 15:03 PDT |
	|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-618000           |                      |         |         |                     |                     |
	|         | stop                                                                     |                      |         |         |                     |                     |
	| stop    | nospam-618000 --log_dir                                                  | nospam-618000        | jenkins | v1.33.0 | 03 May 24 15:03 PDT | 03 May 24 15:03 PDT |
	|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-618000           |                      |         |         |                     |                     |
	|         | stop                                                                     |                      |         |         |                     |                     |
	| stop    | nospam-618000 --log_dir                                                  | nospam-618000        | jenkins | v1.33.0 | 03 May 24 15:03 PDT | 03 May 24 15:03 PDT |
	|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-618000           |                      |         |         |                     |                     |
	|         | stop                                                                     |                      |         |         |                     |                     |
	| delete  | -p nospam-618000                                                         | nospam-618000        | jenkins | v1.33.0 | 03 May 24 15:03 PDT | 03 May 24 15:03 PDT |
	| start   | -p functional-353000                                                     | functional-353000    | jenkins | v1.33.0 | 03 May 24 15:03 PDT |                     |
	|         | --memory=4000                                                            |                      |         |         |                     |                     |
	|         | --apiserver-port=8441                                                    |                      |         |         |                     |                     |
	|         | --wait=all --driver=qemu2                                                |                      |         |         |                     |                     |
	| start   | -p functional-353000                                                     | functional-353000    | jenkins | v1.33.0 | 03 May 24 15:03 PDT |                     |
	|         | --alsologtostderr -v=8                                                   |                      |         |         |                     |                     |
	| cache   | functional-353000 cache add                                              | functional-353000    | jenkins | v1.33.0 | 03 May 24 15:03 PDT | 03 May 24 15:03 PDT |
	|         | registry.k8s.io/pause:3.1                                                |                      |         |         |                     |                     |
	| cache   | functional-353000 cache add                                              | functional-353000    | jenkins | v1.33.0 | 03 May 24 15:03 PDT | 03 May 24 15:03 PDT |
	|         | registry.k8s.io/pause:3.3                                                |                      |         |         |                     |                     |
	| cache   | functional-353000 cache add                                              | functional-353000    | jenkins | v1.33.0 | 03 May 24 15:03 PDT | 03 May 24 15:03 PDT |
	|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
	| cache   | functional-353000 cache add                                              | functional-353000    | jenkins | v1.33.0 | 03 May 24 15:03 PDT | 03 May 24 15:03 PDT |
	|         | minikube-local-cache-test:functional-353000                              |                      |         |         |                     |                     |
	| cache   | functional-353000 cache delete                                           | functional-353000    | jenkins | v1.33.0 | 03 May 24 15:03 PDT | 03 May 24 15:03 PDT |
	|         | minikube-local-cache-test:functional-353000                              |                      |         |         |                     |                     |
	| cache   | delete                                                                   | minikube             | jenkins | v1.33.0 | 03 May 24 15:03 PDT | 03 May 24 15:03 PDT |
	|         | registry.k8s.io/pause:3.3                                                |                      |         |         |                     |                     |
	| cache   | list                                                                     | minikube             | jenkins | v1.33.0 | 03 May 24 15:03 PDT | 03 May 24 15:03 PDT |
	| ssh     | functional-353000 ssh sudo                                               | functional-353000    | jenkins | v1.33.0 | 03 May 24 15:03 PDT |                     |
	|         | crictl images                                                            |                      |         |         |                     |                     |
	| ssh     | functional-353000                                                        | functional-353000    | jenkins | v1.33.0 | 03 May 24 15:03 PDT |                     |
	|         | ssh sudo docker rmi                                                      |                      |         |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
	| ssh     | functional-353000 ssh                                                    | functional-353000    | jenkins | v1.33.0 | 03 May 24 15:03 PDT |                     |
	|         | sudo crictl inspecti                                                     |                      |         |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
	| cache   | functional-353000 cache reload                                           | functional-353000    | jenkins | v1.33.0 | 03 May 24 15:03 PDT | 03 May 24 15:03 PDT |
	| ssh     | functional-353000 ssh                                                    | functional-353000    | jenkins | v1.33.0 | 03 May 24 15:03 PDT |                     |
	|         | sudo crictl inspecti                                                     |                      |         |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
	| cache   | delete                                                                   | minikube             | jenkins | v1.33.0 | 03 May 24 15:03 PDT | 03 May 24 15:03 PDT |
	|         | registry.k8s.io/pause:3.1                                                |                      |         |         |                     |                     |
	| cache   | delete                                                                   | minikube             | jenkins | v1.33.0 | 03 May 24 15:03 PDT | 03 May 24 15:03 PDT |
	|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
	| kubectl | functional-353000 kubectl --                                             | functional-353000    | jenkins | v1.33.0 | 03 May 24 15:03 PDT |                     |
	|         | --context functional-353000                                              |                      |         |         |                     |                     |
	|         | get pods                                                                 |                      |         |         |                     |                     |
	| start   | -p functional-353000                                                     | functional-353000    | jenkins | v1.33.0 | 03 May 24 15:03 PDT |                     |
	|         | --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision |                      |         |         |                     |                     |
	|         | --wait=all                                                               |                      |         |         |                     |                     |
	|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/05/03 15:03:59
	Running on machine: MacOS-M1-Agent-2
	Binary: Built with gc go1.22.1 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0503 15:03:59.266784    8119 out.go:291] Setting OutFile to fd 1 ...
	I0503 15:03:59.266886    8119 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0503 15:03:59.266888    8119 out.go:304] Setting ErrFile to fd 2...
	I0503 15:03:59.266898    8119 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0503 15:03:59.267017    8119 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18793-7269/.minikube/bin
	I0503 15:03:59.267999    8119 out.go:298] Setting JSON to false
	I0503 15:03:59.283998    8119 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3810,"bootTime":1714770029,"procs":496,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0503 15:03:59.284066    8119 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0503 15:03:59.291159    8119 out.go:177] * [functional-353000] minikube v1.33.0 on Darwin 14.4.1 (arm64)
	I0503 15:03:59.300156    8119 out.go:177]   - MINIKUBE_LOCATION=18793
	I0503 15:03:59.304110    8119 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18793-7269/kubeconfig
	I0503 15:03:59.300209    8119 notify.go:220] Checking for updates...
	I0503 15:03:59.312078    8119 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0503 15:03:59.315116    8119 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0503 15:03:59.318086    8119 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18793-7269/.minikube
	I0503 15:03:59.321048    8119 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0503 15:03:59.324449    8119 config.go:182] Loaded profile config "functional-353000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0503 15:03:59.324508    8119 driver.go:392] Setting default libvirt URI to qemu:///system
	I0503 15:03:59.329110    8119 out.go:177] * Using the qemu2 driver based on existing profile
	I0503 15:03:59.338169    8119 start.go:297] selected driver: qemu2
	I0503 15:03:59.338174    8119 start.go:901] validating driver "qemu2" against &{Name:functional-353000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:functional-353000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0503 15:03:59.338252    8119 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0503 15:03:59.340536    8119 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0503 15:03:59.340593    8119 cni.go:84] Creating CNI manager for ""
	I0503 15:03:59.340600    8119 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0503 15:03:59.340652    8119 start.go:340] cluster config:
	{Name:functional-353000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:functional-353000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0503 15:03:59.345357    8119 iso.go:125] acquiring lock: {Name:mk0ad6b95f1bb51c6112b561369aaae11c537b54 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0503 15:03:59.354105    8119 out.go:177] * Starting "functional-353000" primary control-plane node in "functional-353000" cluster
	I0503 15:03:59.361148    8119 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0503 15:03:59.361164    8119 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18793-7269/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-arm64.tar.lz4
	I0503 15:03:59.361174    8119 cache.go:56] Caching tarball of preloaded images
	I0503 15:03:59.361242    8119 preload.go:173] Found /Users/jenkins/minikube-integration/18793-7269/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0503 15:03:59.361247    8119 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
	I0503 15:03:59.361304    8119 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18793-7269/.minikube/profiles/functional-353000/config.json ...
	I0503 15:03:59.361784    8119 start.go:360] acquireMachinesLock for functional-353000: {Name:mk75a6a65f97ac1ce21c567594b284ec6b0a9ff6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0503 15:03:59.361833    8119 start.go:364] duration metric: took 44.416µs to acquireMachinesLock for "functional-353000"
	I0503 15:03:59.361840    8119 start.go:96] Skipping create...Using existing machine configuration
	I0503 15:03:59.361844    8119 fix.go:54] fixHost starting: 
	I0503 15:03:59.361957    8119 fix.go:112] recreateIfNeeded on functional-353000: state=Stopped err=<nil>
	W0503 15:03:59.361964    8119 fix.go:138] unexpected machine state, will restart: <nil>
	I0503 15:03:59.373132    8119 out.go:177] * Restarting existing qemu2 VM for "functional-353000" ...
	I0503 15:03:59.377029    8119 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/functional-353000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18793-7269/.minikube/machines/functional-353000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/functional-353000/qemu.pid -device virtio-net-pci,netdev=net0,mac=c2:7b:c3:60:93:42 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/functional-353000/disk.qcow2
	I0503 15:03:59.379176    8119 main.go:141] libmachine: STDOUT: 
	I0503 15:03:59.379192    8119 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0503 15:03:59.379220    8119 fix.go:56] duration metric: took 17.37675ms for fixHost
	I0503 15:03:59.379223    8119 start.go:83] releasing machines lock for "functional-353000", held for 17.387ms
	W0503 15:03:59.379231    8119 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0503 15:03:59.379268    8119 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0503 15:03:59.379272    8119 start.go:728] Will try again in 5 seconds ...
	I0503 15:04:04.381432    8119 start.go:360] acquireMachinesLock for functional-353000: {Name:mk75a6a65f97ac1ce21c567594b284ec6b0a9ff6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0503 15:04:04.381921    8119 start.go:364] duration metric: took 360.292µs to acquireMachinesLock for "functional-353000"
	I0503 15:04:04.382053    8119 start.go:96] Skipping create...Using existing machine configuration
	I0503 15:04:04.382068    8119 fix.go:54] fixHost starting: 
	I0503 15:04:04.382843    8119 fix.go:112] recreateIfNeeded on functional-353000: state=Stopped err=<nil>
	W0503 15:04:04.382862    8119 fix.go:138] unexpected machine state, will restart: <nil>
	I0503 15:04:04.392226    8119 out.go:177] * Restarting existing qemu2 VM for "functional-353000" ...
	I0503 15:04:04.396435    8119 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/functional-353000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18793-7269/.minikube/machines/functional-353000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/functional-353000/qemu.pid -device virtio-net-pci,netdev=net0,mac=c2:7b:c3:60:93:42 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/functional-353000/disk.qcow2
	I0503 15:04:04.406288    8119 main.go:141] libmachine: STDOUT: 
	I0503 15:04:04.406328    8119 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0503 15:04:04.406402    8119 fix.go:56] duration metric: took 24.338375ms for fixHost
	I0503 15:04:04.406414    8119 start.go:83] releasing machines lock for "functional-353000", held for 24.479583ms
	W0503 15:04:04.406549    8119 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p functional-353000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0503 15:04:04.414230    8119 out.go:177] 
	W0503 15:04:04.418312    8119 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0503 15:04:04.418345    8119 out.go:239] * 
	W0503 15:04:04.421102    8119 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0503 15:04:04.429083    8119 out.go:177] 
	
	
	* The control-plane node functional-353000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-353000"

-- /stdout --
functional_test.go:1234: out/minikube-darwin-arm64 -p functional-353000 logs failed: exit status 83
functional_test.go:1224: expected minikube logs to include word: -"Linux"- but got 
***
==> Audit <==
|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
| Command |                                   Args                                   |       Profile        |  User   | Version |     Start Time      |      End Time       |
|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
| start   | -o=json --download-only                                                  | download-only-988000 | jenkins | v1.33.0 | 03 May 24 15:02 PDT |                     |
|         | -p download-only-988000                                                  |                      |         |         |                     |                     |
|         | --force --alsologtostderr                                                |                      |         |         |                     |                     |
|         | --kubernetes-version=v1.20.0                                             |                      |         |         |                     |                     |
|         | --container-runtime=docker                                               |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| delete  | --all                                                                    | minikube             | jenkins | v1.33.0 | 03 May 24 15:02 PDT | 03 May 24 15:02 PDT |
| delete  | -p download-only-988000                                                  | download-only-988000 | jenkins | v1.33.0 | 03 May 24 15:02 PDT | 03 May 24 15:02 PDT |
| start   | -o=json --download-only                                                  | download-only-819000 | jenkins | v1.33.0 | 03 May 24 15:02 PDT |                     |
|         | -p download-only-819000                                                  |                      |         |         |                     |                     |
|         | --force --alsologtostderr                                                |                      |         |         |                     |                     |
|         | --kubernetes-version=v1.30.0                                             |                      |         |         |                     |                     |
|         | --container-runtime=docker                                               |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| delete  | --all                                                                    | minikube             | jenkins | v1.33.0 | 03 May 24 15:03 PDT | 03 May 24 15:03 PDT |
| delete  | -p download-only-819000                                                  | download-only-819000 | jenkins | v1.33.0 | 03 May 24 15:03 PDT | 03 May 24 15:03 PDT |
| delete  | -p download-only-988000                                                  | download-only-988000 | jenkins | v1.33.0 | 03 May 24 15:03 PDT | 03 May 24 15:03 PDT |
| delete  | -p download-only-819000                                                  | download-only-819000 | jenkins | v1.33.0 | 03 May 24 15:03 PDT | 03 May 24 15:03 PDT |
| start   | --download-only -p                                                       | binary-mirror-919000 | jenkins | v1.33.0 | 03 May 24 15:03 PDT |                     |
|         | binary-mirror-919000                                                     |                      |         |         |                     |                     |
|         | --alsologtostderr                                                        |                      |         |         |                     |                     |
|         | --binary-mirror                                                          |                      |         |         |                     |                     |
|         | http://127.0.0.1:50954                                                   |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| delete  | -p binary-mirror-919000                                                  | binary-mirror-919000 | jenkins | v1.33.0 | 03 May 24 15:03 PDT | 03 May 24 15:03 PDT |
| addons  | enable dashboard -p                                                      | addons-379000        | jenkins | v1.33.0 | 03 May 24 15:03 PDT |                     |
|         | addons-379000                                                            |                      |         |         |                     |                     |
| addons  | disable dashboard -p                                                     | addons-379000        | jenkins | v1.33.0 | 03 May 24 15:03 PDT |                     |
|         | addons-379000                                                            |                      |         |         |                     |                     |
| start   | -p addons-379000 --wait=true                                             | addons-379000        | jenkins | v1.33.0 | 03 May 24 15:03 PDT |                     |
|         | --memory=4000 --alsologtostderr                                          |                      |         |         |                     |                     |
|         | --addons=registry                                                        |                      |         |         |                     |                     |
|         | --addons=metrics-server                                                  |                      |         |         |                     |                     |
|         | --addons=volumesnapshots                                                 |                      |         |         |                     |                     |
|         | --addons=csi-hostpath-driver                                             |                      |         |         |                     |                     |
|         | --addons=gcp-auth                                                        |                      |         |         |                     |                     |
|         | --addons=cloud-spanner                                                   |                      |         |         |                     |                     |
|         | --addons=inspektor-gadget                                                |                      |         |         |                     |                     |
|         | --addons=storage-provisioner-rancher                                     |                      |         |         |                     |                     |
|         | --addons=nvidia-device-plugin                                            |                      |         |         |                     |                     |
|         | --addons=yakd --driver=qemu2                                             |                      |         |         |                     |                     |
|         |  --addons=ingress                                                        |                      |         |         |                     |                     |
|         | --addons=ingress-dns                                                     |                      |         |         |                     |                     |
| delete  | -p addons-379000                                                         | addons-379000        | jenkins | v1.33.0 | 03 May 24 15:03 PDT | 03 May 24 15:03 PDT |
| start   | -p nospam-618000 -n=1 --memory=2250 --wait=false                         | nospam-618000        | jenkins | v1.33.0 | 03 May 24 15:03 PDT |                     |
|         | --log_dir=/var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-618000 |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| start   | nospam-618000 --log_dir                                                  | nospam-618000        | jenkins | v1.33.0 | 03 May 24 15:03 PDT |                     |
|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-618000           |                      |         |         |                     |                     |
|         | start --dry-run                                                          |                      |         |         |                     |                     |
| start   | nospam-618000 --log_dir                                                  | nospam-618000        | jenkins | v1.33.0 | 03 May 24 15:03 PDT |                     |
|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-618000           |                      |         |         |                     |                     |
|         | start --dry-run                                                          |                      |         |         |                     |                     |
| start   | nospam-618000 --log_dir                                                  | nospam-618000        | jenkins | v1.33.0 | 03 May 24 15:03 PDT |                     |
|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-618000           |                      |         |         |                     |                     |
|         | start --dry-run                                                          |                      |         |         |                     |                     |
| pause   | nospam-618000 --log_dir                                                  | nospam-618000        | jenkins | v1.33.0 | 03 May 24 15:03 PDT |                     |
|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-618000           |                      |         |         |                     |                     |
|         | pause                                                                    |                      |         |         |                     |                     |
| pause   | nospam-618000 --log_dir                                                  | nospam-618000        | jenkins | v1.33.0 | 03 May 24 15:03 PDT |                     |
|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-618000           |                      |         |         |                     |                     |
|         | pause                                                                    |                      |         |         |                     |                     |
| pause   | nospam-618000 --log_dir                                                  | nospam-618000        | jenkins | v1.33.0 | 03 May 24 15:03 PDT |                     |
|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-618000           |                      |         |         |                     |                     |
|         | pause                                                                    |                      |         |         |                     |                     |
| unpause | nospam-618000 --log_dir                                                  | nospam-618000        | jenkins | v1.33.0 | 03 May 24 15:03 PDT |                     |
|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-618000           |                      |         |         |                     |                     |
|         | unpause                                                                  |                      |         |         |                     |                     |
| unpause | nospam-618000 --log_dir                                                  | nospam-618000        | jenkins | v1.33.0 | 03 May 24 15:03 PDT |                     |
|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-618000           |                      |         |         |                     |                     |
|         | unpause                                                                  |                      |         |         |                     |                     |
| unpause | nospam-618000 --log_dir                                                  | nospam-618000        | jenkins | v1.33.0 | 03 May 24 15:03 PDT |                     |
|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-618000           |                      |         |         |                     |                     |
|         | unpause                                                                  |                      |         |         |                     |                     |
| stop    | nospam-618000 --log_dir                                                  | nospam-618000        | jenkins | v1.33.0 | 03 May 24 15:03 PDT | 03 May 24 15:03 PDT |
|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-618000           |                      |         |         |                     |                     |
|         | stop                                                                     |                      |         |         |                     |                     |
| stop    | nospam-618000 --log_dir                                                  | nospam-618000        | jenkins | v1.33.0 | 03 May 24 15:03 PDT | 03 May 24 15:03 PDT |
|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-618000           |                      |         |         |                     |                     |
|         | stop                                                                     |                      |         |         |                     |                     |
| stop    | nospam-618000 --log_dir                                                  | nospam-618000        | jenkins | v1.33.0 | 03 May 24 15:03 PDT | 03 May 24 15:03 PDT |
|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-618000           |                      |         |         |                     |                     |
|         | stop                                                                     |                      |         |         |                     |                     |
| delete  | -p nospam-618000                                                         | nospam-618000        | jenkins | v1.33.0 | 03 May 24 15:03 PDT | 03 May 24 15:03 PDT |
| start   | -p functional-353000                                                     | functional-353000    | jenkins | v1.33.0 | 03 May 24 15:03 PDT |                     |
|         | --memory=4000                                                            |                      |         |         |                     |                     |
|         | --apiserver-port=8441                                                    |                      |         |         |                     |                     |
|         | --wait=all --driver=qemu2                                                |                      |         |         |                     |                     |
| start   | -p functional-353000                                                     | functional-353000    | jenkins | v1.33.0 | 03 May 24 15:03 PDT |                     |
|         | --alsologtostderr -v=8                                                   |                      |         |         |                     |                     |
| cache   | functional-353000 cache add                                              | functional-353000    | jenkins | v1.33.0 | 03 May 24 15:03 PDT | 03 May 24 15:03 PDT |
|         | registry.k8s.io/pause:3.1                                                |                      |         |         |                     |                     |
| cache   | functional-353000 cache add                                              | functional-353000    | jenkins | v1.33.0 | 03 May 24 15:03 PDT | 03 May 24 15:03 PDT |
|         | registry.k8s.io/pause:3.3                                                |                      |         |         |                     |                     |
| cache   | functional-353000 cache add                                              | functional-353000    | jenkins | v1.33.0 | 03 May 24 15:03 PDT | 03 May 24 15:03 PDT |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| cache   | functional-353000 cache add                                              | functional-353000    | jenkins | v1.33.0 | 03 May 24 15:03 PDT | 03 May 24 15:03 PDT |
|         | minikube-local-cache-test:functional-353000                              |                      |         |         |                     |                     |
| cache   | functional-353000 cache delete                                           | functional-353000    | jenkins | v1.33.0 | 03 May 24 15:03 PDT | 03 May 24 15:03 PDT |
|         | minikube-local-cache-test:functional-353000                              |                      |         |         |                     |                     |
| cache   | delete                                                                   | minikube             | jenkins | v1.33.0 | 03 May 24 15:03 PDT | 03 May 24 15:03 PDT |
|         | registry.k8s.io/pause:3.3                                                |                      |         |         |                     |                     |
| cache   | list                                                                     | minikube             | jenkins | v1.33.0 | 03 May 24 15:03 PDT | 03 May 24 15:03 PDT |
| ssh     | functional-353000 ssh sudo                                               | functional-353000    | jenkins | v1.33.0 | 03 May 24 15:03 PDT |                     |
|         | crictl images                                                            |                      |         |         |                     |                     |
| ssh     | functional-353000                                                        | functional-353000    | jenkins | v1.33.0 | 03 May 24 15:03 PDT |                     |
|         | ssh sudo docker rmi                                                      |                      |         |         |                     |                     |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| ssh     | functional-353000 ssh                                                    | functional-353000    | jenkins | v1.33.0 | 03 May 24 15:03 PDT |                     |
|         | sudo crictl inspecti                                                     |                      |         |         |                     |                     |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| cache   | functional-353000 cache reload                                           | functional-353000    | jenkins | v1.33.0 | 03 May 24 15:03 PDT | 03 May 24 15:03 PDT |
| ssh     | functional-353000 ssh                                                    | functional-353000    | jenkins | v1.33.0 | 03 May 24 15:03 PDT |                     |
|         | sudo crictl inspecti                                                     |                      |         |         |                     |                     |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| cache   | delete                                                                   | minikube             | jenkins | v1.33.0 | 03 May 24 15:03 PDT | 03 May 24 15:03 PDT |
|         | registry.k8s.io/pause:3.1                                                |                      |         |         |                     |                     |
| cache   | delete                                                                   | minikube             | jenkins | v1.33.0 | 03 May 24 15:03 PDT | 03 May 24 15:03 PDT |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| kubectl | functional-353000 kubectl --                                             | functional-353000    | jenkins | v1.33.0 | 03 May 24 15:03 PDT |                     |
|         | --context functional-353000                                              |                      |         |         |                     |                     |
|         | get pods                                                                 |                      |         |         |                     |                     |
| start   | -p functional-353000                                                     | functional-353000    | jenkins | v1.33.0 | 03 May 24 15:03 PDT |                     |
|         | --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision |                      |         |         |                     |                     |
|         | --wait=all                                                               |                      |         |         |                     |                     |
|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|

==> Last Start <==
Log file created at: 2024/05/03 15:03:59
Running on machine: MacOS-M1-Agent-2
Binary: Built with gc go1.22.1 for darwin/arm64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I0503 15:03:59.266784    8119 out.go:291] Setting OutFile to fd 1 ...
I0503 15:03:59.266886    8119 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0503 15:03:59.266888    8119 out.go:304] Setting ErrFile to fd 2...
I0503 15:03:59.266898    8119 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0503 15:03:59.267017    8119 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18793-7269/.minikube/bin
I0503 15:03:59.267999    8119 out.go:298] Setting JSON to false
I0503 15:03:59.283998    8119 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3810,"bootTime":1714770029,"procs":496,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
W0503 15:03:59.284066    8119 start.go:137] gopshost.Virtualization returned error: not implemented yet
I0503 15:03:59.291159    8119 out.go:177] * [functional-353000] minikube v1.33.0 on Darwin 14.4.1 (arm64)
I0503 15:03:59.300156    8119 out.go:177]   - MINIKUBE_LOCATION=18793
I0503 15:03:59.304110    8119 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18793-7269/kubeconfig
I0503 15:03:59.300209    8119 notify.go:220] Checking for updates...
I0503 15:03:59.312078    8119 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
I0503 15:03:59.315116    8119 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I0503 15:03:59.318086    8119 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18793-7269/.minikube
I0503 15:03:59.321048    8119 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
I0503 15:03:59.324449    8119 config.go:182] Loaded profile config "functional-353000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
I0503 15:03:59.324508    8119 driver.go:392] Setting default libvirt URI to qemu:///system
I0503 15:03:59.329110    8119 out.go:177] * Using the qemu2 driver based on existing profile
I0503 15:03:59.338169    8119 start.go:297] selected driver: qemu2
I0503 15:03:59.338174    8119 start.go:901] validating driver "qemu2" against &{Name:functional-353000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:functional-353000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0503 15:03:59.338252    8119 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I0503 15:03:59.340536    8119 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I0503 15:03:59.340593    8119 cni.go:84] Creating CNI manager for ""
I0503 15:03:59.340600    8119 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
I0503 15:03:59.340652    8119 start.go:340] cluster config:
{Name:functional-353000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:functional-353000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0503 15:03:59.345357    8119 iso.go:125] acquiring lock: {Name:mk0ad6b95f1bb51c6112b561369aaae11c537b54 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0503 15:03:59.354105    8119 out.go:177] * Starting "functional-353000" primary control-plane node in "functional-353000" cluster
I0503 15:03:59.361148    8119 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
I0503 15:03:59.361164    8119 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18793-7269/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-arm64.tar.lz4
I0503 15:03:59.361174    8119 cache.go:56] Caching tarball of preloaded images
I0503 15:03:59.361242    8119 preload.go:173] Found /Users/jenkins/minikube-integration/18793-7269/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
I0503 15:03:59.361247    8119 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
I0503 15:03:59.361304    8119 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18793-7269/.minikube/profiles/functional-353000/config.json ...
I0503 15:03:59.361784    8119 start.go:360] acquireMachinesLock for functional-353000: {Name:mk75a6a65f97ac1ce21c567594b284ec6b0a9ff6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0503 15:03:59.361833    8119 start.go:364] duration metric: took 44.416µs to acquireMachinesLock for "functional-353000"
I0503 15:03:59.361840    8119 start.go:96] Skipping create...Using existing machine configuration
I0503 15:03:59.361844    8119 fix.go:54] fixHost starting: 
I0503 15:03:59.361957    8119 fix.go:112] recreateIfNeeded on functional-353000: state=Stopped err=<nil>
W0503 15:03:59.361964    8119 fix.go:138] unexpected machine state, will restart: <nil>
I0503 15:03:59.373132    8119 out.go:177] * Restarting existing qemu2 VM for "functional-353000" ...
I0503 15:03:59.377029    8119 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/functional-353000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18793-7269/.minikube/machines/functional-353000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/functional-353000/qemu.pid -device virtio-net-pci,netdev=net0,mac=c2:7b:c3:60:93:42 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/functional-353000/disk.qcow2
I0503 15:03:59.379176    8119 main.go:141] libmachine: STDOUT: 
I0503 15:03:59.379192    8119 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused

I0503 15:03:59.379220    8119 fix.go:56] duration metric: took 17.37675ms for fixHost
I0503 15:03:59.379223    8119 start.go:83] releasing machines lock for "functional-353000", held for 17.387ms
W0503 15:03:59.379231    8119 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
W0503 15:03:59.379268    8119 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
I0503 15:03:59.379272    8119 start.go:728] Will try again in 5 seconds ...
I0503 15:04:04.381432    8119 start.go:360] acquireMachinesLock for functional-353000: {Name:mk75a6a65f97ac1ce21c567594b284ec6b0a9ff6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0503 15:04:04.381921    8119 start.go:364] duration metric: took 360.292µs to acquireMachinesLock for "functional-353000"
I0503 15:04:04.382053    8119 start.go:96] Skipping create...Using existing machine configuration
I0503 15:04:04.382068    8119 fix.go:54] fixHost starting: 
I0503 15:04:04.382843    8119 fix.go:112] recreateIfNeeded on functional-353000: state=Stopped err=<nil>
W0503 15:04:04.382862    8119 fix.go:138] unexpected machine state, will restart: <nil>
I0503 15:04:04.392226    8119 out.go:177] * Restarting existing qemu2 VM for "functional-353000" ...
I0503 15:04:04.396435    8119 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/functional-353000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18793-7269/.minikube/machines/functional-353000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/functional-353000/qemu.pid -device virtio-net-pci,netdev=net0,mac=c2:7b:c3:60:93:42 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/functional-353000/disk.qcow2
I0503 15:04:04.406288    8119 main.go:141] libmachine: STDOUT: 
I0503 15:04:04.406328    8119 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused

I0503 15:04:04.406402    8119 fix.go:56] duration metric: took 24.338375ms for fixHost
I0503 15:04:04.406414    8119 start.go:83] releasing machines lock for "functional-353000", held for 24.479583ms
W0503 15:04:04.406549    8119 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p functional-353000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
I0503 15:04:04.414230    8119 out.go:177] 
W0503 15:04:04.418312    8119 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
W0503 15:04:04.418345    8119 out.go:239] * 
W0503 15:04:04.421102    8119 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                             │
│    * If the above advice does not help, please let us know:                                 │
│      https://github.com/kubernetes/minikube/issues/new/choose                               │
│                                                                                             │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
│                                                                                             │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
I0503 15:04:04.429083    8119 out.go:177] 

* The control-plane node functional-353000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-353000"
***
--- FAIL: TestFunctional/serial/LogsCmd (0.08s)
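
Every failed start in this run reduces to the same host-side error visible above: the qemu2 driver cannot reach the socket_vmnet daemon at /var/run/socket_vmnet ("Connection refused"), so the VM is never launched. A minimal triage sketch follows. The client and socket paths are taken from the log above; the daemon binary location and the gateway address are illustrative assumptions about a typical manual socket_vmnet install, not values recorded in this report.

    # Is the daemon's unix socket present, and is the daemon itself running?
    ls -l /var/run/socket_vmnet
    pgrep -fl socket_vmnet

    # If not, start it; root is required to create the vmnet interface.
    # Binary path and gateway address below are assumptions, not from this report.
    sudo /opt/socket_vmnet/bin/socket_vmnet --vmnet-gateway=192.168.105.1 /var/run/socket_vmnet

With the daemon listening, the retried "minikube start -p functional-353000" would be expected to get past the "Restarting existing qemu2 VM" step instead of exiting with GUEST_PROVISION.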

TestFunctional/serial/LogsFileCmd (0.08s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-darwin-arm64 -p functional-353000 logs --file /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalserialLogsFileCmd1083762039/001/logs.txt
functional_test.go:1224: expected minikube logs to include word: -"Linux"- but got 
***
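
The check that fails here (functional_test.go:1224) looks for the word "Linux" in the written file, a string that presumably comes from guest-side log output; since the VM never started, only the host-side audit table and "Last Start" log shown above for TestFunctional/serial/LogsCmd get written, so the search finds nothing. A hypothetical reproduction of the failing step, with an illustrative output path:

    out/minikube-darwin-arm64 -p functional-353000 logs --file /tmp/logs.txt
    grep -c Linux /tmp/logs.txt   # 0 in this state: nothing collected from the guest reached the file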
==> Audit <==
|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
| Command |                                   Args                                   |       Profile        |  User   | Version |     Start Time      |      End Time       |
|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
| start   | -o=json --download-only                                                  | download-only-988000 | jenkins | v1.33.0 | 03 May 24 15:02 PDT |                     |
|         | -p download-only-988000                                                  |                      |         |         |                     |                     |
|         | --force --alsologtostderr                                                |                      |         |         |                     |                     |
|         | --kubernetes-version=v1.20.0                                             |                      |         |         |                     |                     |
|         | --container-runtime=docker                                               |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| delete  | --all                                                                    | minikube             | jenkins | v1.33.0 | 03 May 24 15:02 PDT | 03 May 24 15:02 PDT |
| delete  | -p download-only-988000                                                  | download-only-988000 | jenkins | v1.33.0 | 03 May 24 15:02 PDT | 03 May 24 15:02 PDT |
| start   | -o=json --download-only                                                  | download-only-819000 | jenkins | v1.33.0 | 03 May 24 15:02 PDT |                     |
|         | -p download-only-819000                                                  |                      |         |         |                     |                     |
|         | --force --alsologtostderr                                                |                      |         |         |                     |                     |
|         | --kubernetes-version=v1.30.0                                             |                      |         |         |                     |                     |
|         | --container-runtime=docker                                               |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| delete  | --all                                                                    | minikube             | jenkins | v1.33.0 | 03 May 24 15:03 PDT | 03 May 24 15:03 PDT |
| delete  | -p download-only-819000                                                  | download-only-819000 | jenkins | v1.33.0 | 03 May 24 15:03 PDT | 03 May 24 15:03 PDT |
| delete  | -p download-only-988000                                                  | download-only-988000 | jenkins | v1.33.0 | 03 May 24 15:03 PDT | 03 May 24 15:03 PDT |
| delete  | -p download-only-819000                                                  | download-only-819000 | jenkins | v1.33.0 | 03 May 24 15:03 PDT | 03 May 24 15:03 PDT |
| start   | --download-only -p                                                       | binary-mirror-919000 | jenkins | v1.33.0 | 03 May 24 15:03 PDT |                     |
|         | binary-mirror-919000                                                     |                      |         |         |                     |                     |
|         | --alsologtostderr                                                        |                      |         |         |                     |                     |
|         | --binary-mirror                                                          |                      |         |         |                     |                     |
|         | http://127.0.0.1:50954                                                   |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| delete  | -p binary-mirror-919000                                                  | binary-mirror-919000 | jenkins | v1.33.0 | 03 May 24 15:03 PDT | 03 May 24 15:03 PDT |
| addons  | enable dashboard -p                                                      | addons-379000        | jenkins | v1.33.0 | 03 May 24 15:03 PDT |                     |
|         | addons-379000                                                            |                      |         |         |                     |                     |
| addons  | disable dashboard -p                                                     | addons-379000        | jenkins | v1.33.0 | 03 May 24 15:03 PDT |                     |
|         | addons-379000                                                            |                      |         |         |                     |                     |
| start   | -p addons-379000 --wait=true                                             | addons-379000        | jenkins | v1.33.0 | 03 May 24 15:03 PDT |                     |
|         | --memory=4000 --alsologtostderr                                          |                      |         |         |                     |                     |
|         | --addons=registry                                                        |                      |         |         |                     |                     |
|         | --addons=metrics-server                                                  |                      |         |         |                     |                     |
|         | --addons=volumesnapshots                                                 |                      |         |         |                     |                     |
|         | --addons=csi-hostpath-driver                                             |                      |         |         |                     |                     |
|         | --addons=gcp-auth                                                        |                      |         |         |                     |                     |
|         | --addons=cloud-spanner                                                   |                      |         |         |                     |                     |
|         | --addons=inspektor-gadget                                                |                      |         |         |                     |                     |
|         | --addons=storage-provisioner-rancher                                     |                      |         |         |                     |                     |
|         | --addons=nvidia-device-plugin                                            |                      |         |         |                     |                     |
|         | --addons=yakd --driver=qemu2                                             |                      |         |         |                     |                     |
|         |  --addons=ingress                                                        |                      |         |         |                     |                     |
|         | --addons=ingress-dns                                                     |                      |         |         |                     |                     |
| delete  | -p addons-379000                                                         | addons-379000        | jenkins | v1.33.0 | 03 May 24 15:03 PDT | 03 May 24 15:03 PDT |
| start   | -p nospam-618000 -n=1 --memory=2250 --wait=false                         | nospam-618000        | jenkins | v1.33.0 | 03 May 24 15:03 PDT |                     |
|         | --log_dir=/var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-618000 |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| start   | nospam-618000 --log_dir                                                  | nospam-618000        | jenkins | v1.33.0 | 03 May 24 15:03 PDT |                     |
|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-618000           |                      |         |         |                     |                     |
|         | start --dry-run                                                          |                      |         |         |                     |                     |
| start   | nospam-618000 --log_dir                                                  | nospam-618000        | jenkins | v1.33.0 | 03 May 24 15:03 PDT |                     |
|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-618000           |                      |         |         |                     |                     |
|         | start --dry-run                                                          |                      |         |         |                     |                     |
| start   | nospam-618000 --log_dir                                                  | nospam-618000        | jenkins | v1.33.0 | 03 May 24 15:03 PDT |                     |
|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-618000           |                      |         |         |                     |                     |
|         | start --dry-run                                                          |                      |         |         |                     |                     |
| pause   | nospam-618000 --log_dir                                                  | nospam-618000        | jenkins | v1.33.0 | 03 May 24 15:03 PDT |                     |
|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-618000           |                      |         |         |                     |                     |
|         | pause                                                                    |                      |         |         |                     |                     |
| pause   | nospam-618000 --log_dir                                                  | nospam-618000        | jenkins | v1.33.0 | 03 May 24 15:03 PDT |                     |
|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-618000           |                      |         |         |                     |                     |
|         | pause                                                                    |                      |         |         |                     |                     |
| pause   | nospam-618000 --log_dir                                                  | nospam-618000        | jenkins | v1.33.0 | 03 May 24 15:03 PDT |                     |
|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-618000           |                      |         |         |                     |                     |
|         | pause                                                                    |                      |         |         |                     |                     |
| unpause | nospam-618000 --log_dir                                                  | nospam-618000        | jenkins | v1.33.0 | 03 May 24 15:03 PDT |                     |
|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-618000           |                      |         |         |                     |                     |
|         | unpause                                                                  |                      |         |         |                     |                     |
| unpause | nospam-618000 --log_dir                                                  | nospam-618000        | jenkins | v1.33.0 | 03 May 24 15:03 PDT |                     |
|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-618000           |                      |         |         |                     |                     |
|         | unpause                                                                  |                      |         |         |                     |                     |
| unpause | nospam-618000 --log_dir                                                  | nospam-618000        | jenkins | v1.33.0 | 03 May 24 15:03 PDT |                     |
|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-618000           |                      |         |         |                     |                     |
|         | unpause                                                                  |                      |         |         |                     |                     |
| stop    | nospam-618000 --log_dir                                                  | nospam-618000        | jenkins | v1.33.0 | 03 May 24 15:03 PDT | 03 May 24 15:03 PDT |
|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-618000           |                      |         |         |                     |                     |
|         | stop                                                                     |                      |         |         |                     |                     |
| stop    | nospam-618000 --log_dir                                                  | nospam-618000        | jenkins | v1.33.0 | 03 May 24 15:03 PDT | 03 May 24 15:03 PDT |
|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-618000           |                      |         |         |                     |                     |
|         | stop                                                                     |                      |         |         |                     |                     |
| stop    | nospam-618000 --log_dir                                                  | nospam-618000        | jenkins | v1.33.0 | 03 May 24 15:03 PDT | 03 May 24 15:03 PDT |
|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-618000           |                      |         |         |                     |                     |
|         | stop                                                                     |                      |         |         |                     |                     |
| delete  | -p nospam-618000                                                         | nospam-618000        | jenkins | v1.33.0 | 03 May 24 15:03 PDT | 03 May 24 15:03 PDT |
| start   | -p functional-353000                                                     | functional-353000    | jenkins | v1.33.0 | 03 May 24 15:03 PDT |                     |
|         | --memory=4000                                                            |                      |         |         |                     |                     |
|         | --apiserver-port=8441                                                    |                      |         |         |                     |                     |
|         | --wait=all --driver=qemu2                                                |                      |         |         |                     |                     |
| start   | -p functional-353000                                                     | functional-353000    | jenkins | v1.33.0 | 03 May 24 15:03 PDT |                     |
|         | --alsologtostderr -v=8                                                   |                      |         |         |                     |                     |
| cache   | functional-353000 cache add                                              | functional-353000    | jenkins | v1.33.0 | 03 May 24 15:03 PDT | 03 May 24 15:03 PDT |
|         | registry.k8s.io/pause:3.1                                                |                      |         |         |                     |                     |
| cache   | functional-353000 cache add                                              | functional-353000    | jenkins | v1.33.0 | 03 May 24 15:03 PDT | 03 May 24 15:03 PDT |
|         | registry.k8s.io/pause:3.3                                                |                      |         |         |                     |                     |
| cache   | functional-353000 cache add                                              | functional-353000    | jenkins | v1.33.0 | 03 May 24 15:03 PDT | 03 May 24 15:03 PDT |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| cache   | functional-353000 cache add                                              | functional-353000    | jenkins | v1.33.0 | 03 May 24 15:03 PDT | 03 May 24 15:03 PDT |
|         | minikube-local-cache-test:functional-353000                              |                      |         |         |                     |                     |
| cache   | functional-353000 cache delete                                           | functional-353000    | jenkins | v1.33.0 | 03 May 24 15:03 PDT | 03 May 24 15:03 PDT |
|         | minikube-local-cache-test:functional-353000                              |                      |         |         |                     |                     |
| cache   | delete                                                                   | minikube             | jenkins | v1.33.0 | 03 May 24 15:03 PDT | 03 May 24 15:03 PDT |
|         | registry.k8s.io/pause:3.3                                                |                      |         |         |                     |                     |
| cache   | list                                                                     | minikube             | jenkins | v1.33.0 | 03 May 24 15:03 PDT | 03 May 24 15:03 PDT |
| ssh     | functional-353000 ssh sudo                                               | functional-353000    | jenkins | v1.33.0 | 03 May 24 15:03 PDT |                     |
|         | crictl images                                                            |                      |         |         |                     |                     |
| ssh     | functional-353000                                                        | functional-353000    | jenkins | v1.33.0 | 03 May 24 15:03 PDT |                     |
|         | ssh sudo docker rmi                                                      |                      |         |         |                     |                     |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| ssh     | functional-353000 ssh                                                    | functional-353000    | jenkins | v1.33.0 | 03 May 24 15:03 PDT |                     |
|         | sudo crictl inspecti                                                     |                      |         |         |                     |                     |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| cache   | functional-353000 cache reload                                           | functional-353000    | jenkins | v1.33.0 | 03 May 24 15:03 PDT | 03 May 24 15:03 PDT |
| ssh     | functional-353000 ssh                                                    | functional-353000    | jenkins | v1.33.0 | 03 May 24 15:03 PDT |                     |
|         | sudo crictl inspecti                                                     |                      |         |         |                     |                     |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| cache   | delete                                                                   | minikube             | jenkins | v1.33.0 | 03 May 24 15:03 PDT | 03 May 24 15:03 PDT |
|         | registry.k8s.io/pause:3.1                                                |                      |         |         |                     |                     |
| cache   | delete                                                                   | minikube             | jenkins | v1.33.0 | 03 May 24 15:03 PDT | 03 May 24 15:03 PDT |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| kubectl | functional-353000 kubectl --                                             | functional-353000    | jenkins | v1.33.0 | 03 May 24 15:03 PDT |                     |
|         | --context functional-353000                                              |                      |         |         |                     |                     |
|         | get pods                                                                 |                      |         |         |                     |                     |
| start   | -p functional-353000                                                     | functional-353000    | jenkins | v1.33.0 | 03 May 24 15:03 PDT |                     |
|         | --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision |                      |         |         |                     |                     |
|         | --wait=all                                                               |                      |         |         |                     |                     |
|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|

                                                
                                                

                                                
                                                
==> Last Start <==
Log file created at: 2024/05/03 15:03:59
Running on machine: MacOS-M1-Agent-2
Binary: Built with gc go1.22.1 for darwin/arm64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I0503 15:03:59.266784    8119 out.go:291] Setting OutFile to fd 1 ...
I0503 15:03:59.266886    8119 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0503 15:03:59.266888    8119 out.go:304] Setting ErrFile to fd 2...
I0503 15:03:59.266898    8119 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0503 15:03:59.267017    8119 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18793-7269/.minikube/bin
I0503 15:03:59.267999    8119 out.go:298] Setting JSON to false
I0503 15:03:59.283998    8119 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3810,"bootTime":1714770029,"procs":496,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
W0503 15:03:59.284066    8119 start.go:137] gopshost.Virtualization returned error: not implemented yet
I0503 15:03:59.291159    8119 out.go:177] * [functional-353000] minikube v1.33.0 on Darwin 14.4.1 (arm64)
I0503 15:03:59.300156    8119 out.go:177]   - MINIKUBE_LOCATION=18793
I0503 15:03:59.304110    8119 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18793-7269/kubeconfig
I0503 15:03:59.300209    8119 notify.go:220] Checking for updates...
I0503 15:03:59.312078    8119 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
I0503 15:03:59.315116    8119 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I0503 15:03:59.318086    8119 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18793-7269/.minikube
I0503 15:03:59.321048    8119 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
I0503 15:03:59.324449    8119 config.go:182] Loaded profile config "functional-353000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
I0503 15:03:59.324508    8119 driver.go:392] Setting default libvirt URI to qemu:///system
I0503 15:03:59.329110    8119 out.go:177] * Using the qemu2 driver based on existing profile
I0503 15:03:59.338169    8119 start.go:297] selected driver: qemu2
I0503 15:03:59.338174    8119 start.go:901] validating driver "qemu2" against &{Name:functional-353000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kube
rnetesVersion:v1.30.0 ClusterName:functional-353000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[]
MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0503 15:03:59.338252    8119 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I0503 15:03:59.340536    8119 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I0503 15:03:59.340593    8119 cni.go:84] Creating CNI manager for ""
I0503 15:03:59.340600    8119 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
I0503 15:03:59.340652    8119 start.go:340] cluster config:
{Name:functional-353000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:functional-353000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0503 15:03:59.345357    8119 iso.go:125] acquiring lock: {Name:mk0ad6b95f1bb51c6112b561369aaae11c537b54 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0503 15:03:59.354105    8119 out.go:177] * Starting "functional-353000" primary control-plane node in "functional-353000" cluster
I0503 15:03:59.361148    8119 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
I0503 15:03:59.361164    8119 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18793-7269/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-arm64.tar.lz4
I0503 15:03:59.361174    8119 cache.go:56] Caching tarball of preloaded images
I0503 15:03:59.361242    8119 preload.go:173] Found /Users/jenkins/minikube-integration/18793-7269/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
I0503 15:03:59.361247    8119 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
I0503 15:03:59.361304    8119 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18793-7269/.minikube/profiles/functional-353000/config.json ...
I0503 15:03:59.361784    8119 start.go:360] acquireMachinesLock for functional-353000: {Name:mk75a6a65f97ac1ce21c567594b284ec6b0a9ff6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0503 15:03:59.361833    8119 start.go:364] duration metric: took 44.416µs to acquireMachinesLock for "functional-353000"
I0503 15:03:59.361840    8119 start.go:96] Skipping create...Using existing machine configuration
I0503 15:03:59.361844    8119 fix.go:54] fixHost starting: 
I0503 15:03:59.361957    8119 fix.go:112] recreateIfNeeded on functional-353000: state=Stopped err=<nil>
W0503 15:03:59.361964    8119 fix.go:138] unexpected machine state, will restart: <nil>
I0503 15:03:59.373132    8119 out.go:177] * Restarting existing qemu2 VM for "functional-353000" ...
I0503 15:03:59.377029    8119 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/functional-353000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18793-7269/.minikube/machines/functional-353000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/functional-353000/qemu.pid -device virtio-net-pci,netdev=net0,mac=c2:7b:c3:60:93:42 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/functional-353000/disk.qcow2
I0503 15:03:59.379176    8119 main.go:141] libmachine: STDOUT: 
I0503 15:03:59.379192    8119 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused

I0503 15:03:59.379220    8119 fix.go:56] duration metric: took 17.37675ms for fixHost
I0503 15:03:59.379223    8119 start.go:83] releasing machines lock for "functional-353000", held for 17.387ms
W0503 15:03:59.379231    8119 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
W0503 15:03:59.379268    8119 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
I0503 15:03:59.379272    8119 start.go:728] Will try again in 5 seconds ...
I0503 15:04:04.381432    8119 start.go:360] acquireMachinesLock for functional-353000: {Name:mk75a6a65f97ac1ce21c567594b284ec6b0a9ff6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0503 15:04:04.381921    8119 start.go:364] duration metric: took 360.292µs to acquireMachinesLock for "functional-353000"
I0503 15:04:04.382053    8119 start.go:96] Skipping create...Using existing machine configuration
I0503 15:04:04.382068    8119 fix.go:54] fixHost starting: 
I0503 15:04:04.382843    8119 fix.go:112] recreateIfNeeded on functional-353000: state=Stopped err=<nil>
W0503 15:04:04.382862    8119 fix.go:138] unexpected machine state, will restart: <nil>
I0503 15:04:04.392226    8119 out.go:177] * Restarting existing qemu2 VM for "functional-353000" ...
I0503 15:04:04.396435    8119 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/functional-353000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18793-7269/.minikube/machines/functional-353000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/functional-353000/qemu.pid -device virtio-net-pci,netdev=net0,mac=c2:7b:c3:60:93:42 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/functional-353000/disk.qcow2
I0503 15:04:04.406288    8119 main.go:141] libmachine: STDOUT: 
I0503 15:04:04.406328    8119 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused

I0503 15:04:04.406402    8119 fix.go:56] duration metric: took 24.338375ms for fixHost
I0503 15:04:04.406414    8119 start.go:83] releasing machines lock for "functional-353000", held for 24.479583ms
W0503 15:04:04.406549    8119 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p functional-353000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
I0503 15:04:04.414230    8119 out.go:177] 
W0503 15:04:04.418312    8119 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
W0503 15:04:04.418345    8119 out.go:239] * 
W0503 15:04:04.421102    8119 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                             │
│    * If the above advice does not help, please let us know:                                 │
│      https://github.com/kubernetes/minikube/issues/new/choose                               │
│                                                                                             │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
│                                                                                             │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
I0503 15:04:04.429083    8119 out.go:177] 

--- FAIL: TestFunctional/serial/LogsFileCmd (0.08s)
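Note: every restart attempt above fails at the same point, Failed to connect to "/var/run/socket_vmnet": Connection refused, so the qemu2 VM never boots and every later functional test inherits a stopped host. A minimal recovery sketch for the CI host, assuming socket_vmnet was installed through Homebrew as the minikube qemu2 driver docs describe (paths and profile name are taken from the log above):

    # Check that the socket the driver dials actually exists.
    ls -l /var/run/socket_vmnet
    # socket_vmnet must run as root; (re)start it via Homebrew services.
    HOMEBREW=$(which brew)
    sudo "${HOMEBREW}" services start socket_vmnet
    # Then recreate the profile, as the error output itself suggests.
    minikube delete -p functional-353000
    minikube start -p functional-353000 --driver=qemu2 --network=socket_vmnet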

TestFunctional/serial/InvalidService (0.03s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-353000 apply -f testdata/invalidsvc.yaml
functional_test.go:2317: (dbg) Non-zero exit: kubectl --context functional-353000 apply -f testdata/invalidsvc.yaml: exit status 1 (27.321375ms)

** stderr ** 
	error: context "functional-353000" does not exist

** /stderr **
functional_test.go:2319: kubectl --context functional-353000 apply -f testdata/invalidsvc.yaml failed: exit status 1
--- FAIL: TestFunctional/serial/InvalidService (0.03s)

TestFunctional/parallel/DashboardCmd (0.2s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-353000 --alsologtostderr -v=1]
functional_test.go:914: output didn't produce a URL
functional_test.go:906: (dbg) stopping [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-353000 --alsologtostderr -v=1] ...
functional_test.go:906: (dbg) [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-353000 --alsologtostderr -v=1] stdout:
functional_test.go:906: (dbg) [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-353000 --alsologtostderr -v=1] stderr:
I0503 15:04:46.838315    8328 out.go:291] Setting OutFile to fd 1 ...
I0503 15:04:46.838713    8328 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0503 15:04:46.838717    8328 out.go:304] Setting ErrFile to fd 2...
I0503 15:04:46.838720    8328 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0503 15:04:46.838893    8328 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18793-7269/.minikube/bin
I0503 15:04:46.839124    8328 mustload.go:65] Loading cluster: functional-353000
I0503 15:04:46.839327    8328 config.go:182] Loaded profile config "functional-353000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
I0503 15:04:46.843503    8328 out.go:177] * The control-plane node functional-353000 host is not running: state=Stopped
I0503 15:04:46.847488    8328 out.go:177]   To start a cluster, run: "minikube start -p functional-353000"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-353000 -n functional-353000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-353000 -n functional-353000: exit status 7 (43.998125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-353000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/DashboardCmd (0.20s)

TestFunctional/parallel/StatusCmd (0.16s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-darwin-arm64 -p functional-353000 status
functional_test.go:850: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-353000 status: exit status 7 (55.507958ms)

-- stdout --
	functional-353000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
functional_test.go:852: failed to run minikube status. args "out/minikube-darwin-arm64 -p functional-353000 status" : exit status 7
functional_test.go:856: (dbg) Run:  out/minikube-darwin-arm64 -p functional-353000 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:856: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-353000 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}: exit status 7 (34.118333ms)

-- stdout --
	host:Stopped,kublet:Stopped,apiserver:Stopped,kubeconfig:Stopped

-- /stdout --
functional_test.go:858: failed to run minikube status with custom format: args "out/minikube-darwin-arm64 -p functional-353000 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}": exit status 7
functional_test.go:868: (dbg) Run:  out/minikube-darwin-arm64 -p functional-353000 status -o json
functional_test.go:868: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-353000 status -o json: exit status 7 (34.450416ms)

-- stdout --
	{"Name":"functional-353000","Host":"Stopped","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Stopped","Worker":false}

-- /stdout --
functional_test.go:870: failed to run minikube status with json output. args "out/minikube-darwin-arm64 -p functional-353000 status -o json" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-353000 -n functional-353000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-353000 -n functional-353000: exit status 7 (33.621583ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-353000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/StatusCmd (0.16s)
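Note: the three invocations above exercise "minikube status" in its default, Go-template, and JSON forms; all fail here only because the host is stopped. A small sketch of consuming the JSON form in a script, assuming jq is installed (a stopped host still prints JSON but exits 7, as shown above):

    # Query status as JSON and pull out the component states.
    minikube -p functional-353000 status -o json | jq -r '.Host, .Kubelet, .APIServer'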

TestFunctional/parallel/ServiceCmdConnect (0.14s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1623: (dbg) Run:  kubectl --context functional-353000 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1623: (dbg) Non-zero exit: kubectl --context functional-353000 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8: exit status 1 (26.164416ms)

** stderr ** 
	error: context "functional-353000" does not exist

** /stderr **
functional_test.go:1629: failed to create hello-node deployment with this command "kubectl --context functional-353000 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8": exit status 1.
functional_test.go:1594: service test failed - dumping debug information
functional_test.go:1595: -----------------------service failure post-mortem--------------------------------
functional_test.go:1598: (dbg) Run:  kubectl --context functional-353000 describe po hello-node-connect
functional_test.go:1598: (dbg) Non-zero exit: kubectl --context functional-353000 describe po hello-node-connect: exit status 1 (26.209792ms)

** stderr ** 
	Error in configuration: context was not found for specified context: functional-353000

** /stderr **
functional_test.go:1600: "kubectl --context functional-353000 describe po hello-node-connect" failed: exit status 1
functional_test.go:1602: hello-node pod describe:
functional_test.go:1604: (dbg) Run:  kubectl --context functional-353000 logs -l app=hello-node-connect
functional_test.go:1604: (dbg) Non-zero exit: kubectl --context functional-353000 logs -l app=hello-node-connect: exit status 1 (26.107666ms)

** stderr ** 
	Error in configuration: context was not found for specified context: functional-353000

** /stderr **
functional_test.go:1606: "kubectl --context functional-353000 logs -l app=hello-node-connect" failed: exit status 1
functional_test.go:1608: hello-node logs:
functional_test.go:1610: (dbg) Run:  kubectl --context functional-353000 describe svc hello-node-connect
functional_test.go:1610: (dbg) Non-zero exit: kubectl --context functional-353000 describe svc hello-node-connect: exit status 1 (26.381375ms)

** stderr ** 
	Error in configuration: context was not found for specified context: functional-353000

** /stderr **
functional_test.go:1612: "kubectl --context functional-353000 describe svc hello-node-connect" failed: exit status 1
functional_test.go:1614: hello-node svc describe:
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-353000 -n functional-353000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-353000 -n functional-353000: exit status 7 (32.232084ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-353000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/ServiceCmdConnect (0.14s)
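Note: the deployment is never created because the kubeconfig context is missing, so every follow-up describe/logs call fails the same way. For reference, a sketch of the flow this test drives once a cluster is up, using the image from the log (the NodePort expose on 8080 is an assumption about the echoserver image, not taken from this log):

    kubectl --context functional-353000 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
    kubectl --context functional-353000 expose deployment hello-node-connect --type=NodePort --port=8080
    minikube -p functional-353000 service hello-node-connect --url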

TestFunctional/parallel/PersistentVolumeClaim (0.04s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:45: failed waiting for storage-provisioner: client config: context "functional-353000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-353000 -n functional-353000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-353000 -n functional-353000: exit status 7 (36.398292ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-353000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/PersistentVolumeClaim (0.04s)
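Note: the test aborts before doing any work because no client config can be built. On a running cluster, the precondition it waits for can be checked by hand; a sketch, assuming the standard minikube storage-provisioner pod name in kube-system:

    kubectl --context functional-353000 -n kube-system get pod storage-provisioner
    kubectl --context functional-353000 get storageclass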

TestFunctional/parallel/SSHCmd (0.14s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1721: (dbg) Run:  out/minikube-darwin-arm64 -p functional-353000 ssh "echo hello"
functional_test.go:1721: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-353000 ssh "echo hello": exit status 83 (57.73325ms)

-- stdout --
	* The control-plane node functional-353000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-353000"

-- /stdout --
functional_test.go:1726: failed to run an ssh command. args "out/minikube-darwin-arm64 -p functional-353000 ssh \"echo hello\"" : exit status 83
functional_test.go:1730: expected minikube ssh command output to be -"hello"- but got *"* The control-plane node functional-353000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-353000\"\n"*. args "out/minikube-darwin-arm64 -p functional-353000 ssh \"echo hello\""
functional_test.go:1738: (dbg) Run:  out/minikube-darwin-arm64 -p functional-353000 ssh "cat /etc/hostname"
functional_test.go:1738: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-353000 ssh "cat /etc/hostname": exit status 83 (43.115959ms)

-- stdout --
	* The control-plane node functional-353000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-353000"

-- /stdout --
functional_test.go:1744: failed to run an ssh command. args "out/minikube-darwin-arm64 -p functional-353000 ssh \"cat /etc/hostname\"" : exit status 83
functional_test.go:1748: expected minikube ssh command output to be -"functional-353000"- but got *"* The control-plane node functional-353000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-353000\"\n"*. args "out/minikube-darwin-arm64 -p functional-353000 ssh \"cat /etc/hostname\""
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-353000 -n functional-353000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-353000 -n functional-353000: exit status 7 (34.513875ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-353000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/SSHCmd (0.14s)

TestFunctional/parallel/CpCmd (0.3s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-353000 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:556: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-353000 cp testdata/cp-test.txt /home/docker/cp-test.txt: exit status 83 (55.221917ms)

-- stdout --
	* The control-plane node functional-353000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-353000"

-- /stdout --
helpers_test.go:561: failed to run an cp command. args "out/minikube-darwin-arm64 -p functional-353000 cp testdata/cp-test.txt /home/docker/cp-test.txt" : exit status 83
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-353000 ssh -n functional-353000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-353000 ssh -n functional-353000 "sudo cat /home/docker/cp-test.txt": exit status 83 (46.077292ms)

-- stdout --
	* The control-plane node functional-353000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-353000"

-- /stdout --
helpers_test.go:539: failed to run an cp command. args "out/minikube-darwin-arm64 -p functional-353000 ssh -n functional-353000 \"sudo cat /home/docker/cp-test.txt\"" : exit status 83
helpers_test.go:573: /testdata/cp-test.txt content mismatch (-want +got):
  strings.Join({
+ 	"* ",
  	"T",
- 	"est file for checking file cp process",
+ 	"he control-plane node functional-353000 host is not running: sta",
+ 	"te=Stopped\n  To start a cluster, run: \"minikube start -p functio",
+ 	"nal-353000\"\n",
  }, "")
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-353000 cp functional-353000:/home/docker/cp-test.txt /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelCpCmd3759504750/001/cp-test.txt
helpers_test.go:556: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-353000 cp functional-353000:/home/docker/cp-test.txt /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelCpCmd3759504750/001/cp-test.txt: exit status 83 (43.420917ms)

-- stdout --
	* The control-plane node functional-353000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-353000"

-- /stdout --
helpers_test.go:561: failed to run an cp command. args "out/minikube-darwin-arm64 -p functional-353000 cp functional-353000:/home/docker/cp-test.txt /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelCpCmd3759504750/001/cp-test.txt" : exit status 83
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-353000 ssh -n functional-353000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-353000 ssh -n functional-353000 "sudo cat /home/docker/cp-test.txt": exit status 83 (43.908542ms)

-- stdout --
	* The control-plane node functional-353000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-353000"

-- /stdout --
helpers_test.go:539: failed to run an cp command. args "out/minikube-darwin-arm64 -p functional-353000 ssh -n functional-353000 \"sudo cat /home/docker/cp-test.txt\"" : exit status 83
helpers_test.go:528: failed to read test file 'testdata/cp-test.txt' : open /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelCpCmd3759504750/001/cp-test.txt: no such file or directory
helpers_test.go:573: /testdata/cp-test.txt content mismatch (-want +got):
  string(
- 	"* The control-plane node functional-353000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-353000\"\n",
+ 	"",
  )
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-353000 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:556: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-353000 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt: exit status 83 (45.988709ms)

-- stdout --
	* The control-plane node functional-353000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-353000"

-- /stdout --
helpers_test.go:561: failed to run an cp command. args "out/minikube-darwin-arm64 -p functional-353000 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt" : exit status 83
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-353000 ssh -n functional-353000 "sudo cat /tmp/does/not/exist/cp-test.txt"
helpers_test.go:534: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-353000 ssh -n functional-353000 "sudo cat /tmp/does/not/exist/cp-test.txt": exit status 83 (59.619ms)

-- stdout --
	* The control-plane node functional-353000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-353000"

-- /stdout --
helpers_test.go:539: failed to run an cp command. args "out/minikube-darwin-arm64 -p functional-353000 ssh -n functional-353000 \"sudo cat /tmp/does/not/exist/cp-test.txt\"" : exit status 83
helpers_test.go:573: /testdata/cp-test.txt content mismatch (-want +got):
  strings.Join({
+ 	"* ",
  	"T",
- 	"est file for checking file cp process",
+ 	"he control-plane node functional-353000 host is not running: sta",
+ 	"te=Stopped\n  To start a cluster, run: \"minikube start -p functio",
+ 	"nal-353000\"\n",
  }, "")
--- FAIL: TestFunctional/parallel/CpCmd (0.30s)
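Note: each cp/ssh pair above returns the driver's "host is not running" advice instead of file contents, which is exactly what the content diffs show. A round-trip sketch of what the test performs, runnable once the profile starts (the /tmp path is a placeholder):

    profile=functional-353000
    echo "Test file for checking file cp process" > /tmp/cp-test.txt
    # Copy into the guest, read it back over ssh, and diff against the source.
    minikube -p "$profile" cp /tmp/cp-test.txt /home/docker/cp-test.txt
    minikube -p "$profile" ssh -n "$profile" "sudo cat /home/docker/cp-test.txt" | diff /tmp/cp-test.txt -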

TestFunctional/parallel/FileSync (0.08s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/7768/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-darwin-arm64 -p functional-353000 ssh "sudo cat /etc/test/nested/copy/7768/hosts"
functional_test.go:1927: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-353000 ssh "sudo cat /etc/test/nested/copy/7768/hosts": exit status 83 (42.583833ms)

-- stdout --
	* The control-plane node functional-353000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-353000"

-- /stdout --
functional_test.go:1929: out/minikube-darwin-arm64 -p functional-353000 ssh "sudo cat /etc/test/nested/copy/7768/hosts" failed: exit status 83
functional_test.go:1932: file sync test content: * The control-plane node functional-353000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-353000"
functional_test.go:1942: /etc/sync.test content mismatch (-want +got):
  strings.Join({
+ 	"* ",
  	"T",
- 	"est file for checking file sync process",
+ 	"he control-plane node functional-353000 host is not running: sta",
+ 	"te=Stopped\n  To start a cluster, run: \"minikube start -p functio",
+ 	"nal-353000\"\n",
  }, "")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-353000 -n functional-353000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-353000 -n functional-353000: exit status 7 (32.461166ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-353000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/FileSync (0.08s)
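Note: FileSync checks that a file staged on the test host appears inside the VM. A sketch of the documented sync mechanism, assuming the standard ~/.minikube/files layout, where anything under it is copied to the matching guest path at start:

    mkdir -p ~/.minikube/files/etc/test
    echo "Test file for checking file sync process" > ~/.minikube/files/etc/test/hello
    minikube start -p functional-353000
    minikube -p functional-353000 ssh "cat /etc/test/hello"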

TestFunctional/parallel/CertSync (0.29s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/7768.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-darwin-arm64 -p functional-353000 ssh "sudo cat /etc/ssl/certs/7768.pem"
functional_test.go:1969: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-353000 ssh "sudo cat /etc/ssl/certs/7768.pem": exit status 83 (46.742334ms)

-- stdout --
	* The control-plane node functional-353000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-353000"

-- /stdout --
functional_test.go:1971: failed to check existence of "/etc/ssl/certs/7768.pem" inside minikube. args "out/minikube-darwin-arm64 -p functional-353000 ssh \"sudo cat /etc/ssl/certs/7768.pem\"": exit status 83
functional_test.go:1977: failed verify pem file. minikube_test.pem -> /etc/ssl/certs/7768.pem mismatch (-want +got):
  (
  	"""
- 	-----BEGIN CERTIFICATE-----
- 	MIIDsDCCApgCCQD5n0OIsOYIjDANBgkqhkiG9w0BAQsFADCBmTELMAkGA1UEBhMC
- 	VVMxEzARBgNVBAgMCkNhbGlmb3JuaWExFjAUBgNVBAcMDVNhbiBGcmFuY2lzY28x
- 	ETAPBgNVBAoMCG1pbmlrdWJlMRYwFAYDVQQLDA1QYXJ0eSBQYXJyb3RzMREwDwYD
- 	VQQDDAhtaW5pa3ViZTEfMB0GCSqGSIb3DQEJARYQbWluaWt1YmVAY25jZi5pbzAe
- 	Fw0yMDAzMDQyMTU2MjZaFw0yMTAzMDQyMTU2MjZaMIGZMQswCQYDVQQGEwJVUzET
- 	MBEGA1UECAwKQ2FsaWZvcm5pYTEWMBQGA1UEBwwNU2FuIEZyYW5jaXNjbzERMA8G
- 	A1UECgwIbWluaWt1YmUxFjAUBgNVBAsMDVBhcnR5IFBhcnJvdHMxETAPBgNVBAMM
- 	CG1pbmlrdWJlMR8wHQYJKoZIhvcNAQkBFhBtaW5pa3ViZUBjbmNmLmlvMIIBIjAN
- 	BgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA/qVMQ/234ul5yWI1yaHvV4pZ5Ffy
- 	M0bSMjzZUwlsvzerXzF3WrdpeZs5GzBNBWL/Db9KziGHCtfX9j5okJqPvB2lxdL5
- 	d5hFIYSORSemYLX2kdlnlykY5fzmFLKIUO9xXs0YNF4JUMEBgGK6n/BdLvXDUULZ
- 	26QOKs6+iH7TAL4RtozxQ8YXKQArdmpeAvxy2PSZGvVk1htKtyuKQsiFqH3oRleK
- 	3mljXfC5LsoIJHqd/8lAsckH87+IfwYnJ1CNJM2gueaCf+HmudVrvXfHaszh1Wh1
- 	9HKPE95Azi6CKoBGlRGFxt8UR72YIcTjC/lYxzbHeCpU7RCiXfsC0iMTlQIDAQAB
- 	MA0GCSqGSIb3DQEBCwUAA4IBAQBhsKnghyBki4NOnK5gHm7ow+7S+xvkjJhXBQ6i
- 	/xQD4/GCZ1tH5iFHXmo+bt4jB9hvKLyN5M5a8TlDwwqTLIoPDQJh37UpSCwbY/6z
- 	nE2aP3N2ue1/DeY60tgAh1c1uJDMeTiFbalJqSkneaHpNfvEQhUORFoN4yQSVEYg
- 	+T9mzTAWQ55TeBgbRevmA25nXHdPAae1MvJWDeG+aJfhq1I2hCwaitJ3iSwgn2ew
- 	637It/aBkMLhsCKTHxlXDGUX401ddbc0ZiC308cyMbis3iBeh4RBjkFxP8eIWFmK
- 	sos/dyqdua742L1cOKYFbLJfjA1VyxJQUxQvWKkbaq0xi7ao
- 	-----END CERTIFICATE-----
+ 	* The control-plane node functional-353000 host is not running: state=Stopped
+ 	  To start a cluster, run: "minikube start -p functional-353000"
  	"""
  )
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/7768.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-darwin-arm64 -p functional-353000 ssh "sudo cat /usr/share/ca-certificates/7768.pem"
functional_test.go:1969: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-353000 ssh "sudo cat /usr/share/ca-certificates/7768.pem": exit status 83 (42.358708ms)

-- stdout --
	* The control-plane node functional-353000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-353000"

-- /stdout --
functional_test.go:1971: failed to check existence of "/usr/share/ca-certificates/7768.pem" inside minikube. args "out/minikube-darwin-arm64 -p functional-353000 ssh \"sudo cat /usr/share/ca-certificates/7768.pem\"": exit status 83
functional_test.go:1977: failed verify pem file. minikube_test.pem -> /usr/share/ca-certificates/7768.pem mismatch (-want +got):
  (
  	"""
- 	-----BEGIN CERTIFICATE-----
- 	MIIDsDCCApgCCQD5n0OIsOYIjDANBgkqhkiG9w0BAQsFADCBmTELMAkGA1UEBhMC
- 	VVMxEzARBgNVBAgMCkNhbGlmb3JuaWExFjAUBgNVBAcMDVNhbiBGcmFuY2lzY28x
- 	ETAPBgNVBAoMCG1pbmlrdWJlMRYwFAYDVQQLDA1QYXJ0eSBQYXJyb3RzMREwDwYD
- 	VQQDDAhtaW5pa3ViZTEfMB0GCSqGSIb3DQEJARYQbWluaWt1YmVAY25jZi5pbzAe
- 	Fw0yMDAzMDQyMTU2MjZaFw0yMTAzMDQyMTU2MjZaMIGZMQswCQYDVQQGEwJVUzET
- 	MBEGA1UECAwKQ2FsaWZvcm5pYTEWMBQGA1UEBwwNU2FuIEZyYW5jaXNjbzERMA8G
- 	A1UECgwIbWluaWt1YmUxFjAUBgNVBAsMDVBhcnR5IFBhcnJvdHMxETAPBgNVBAMM
- 	CG1pbmlrdWJlMR8wHQYJKoZIhvcNAQkBFhBtaW5pa3ViZUBjbmNmLmlvMIIBIjAN
- 	BgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA/qVMQ/234ul5yWI1yaHvV4pZ5Ffy
- 	M0bSMjzZUwlsvzerXzF3WrdpeZs5GzBNBWL/Db9KziGHCtfX9j5okJqPvB2lxdL5
- 	d5hFIYSORSemYLX2kdlnlykY5fzmFLKIUO9xXs0YNF4JUMEBgGK6n/BdLvXDUULZ
- 	26QOKs6+iH7TAL4RtozxQ8YXKQArdmpeAvxy2PSZGvVk1htKtyuKQsiFqH3oRleK
- 	3mljXfC5LsoIJHqd/8lAsckH87+IfwYnJ1CNJM2gueaCf+HmudVrvXfHaszh1Wh1
- 	9HKPE95Azi6CKoBGlRGFxt8UR72YIcTjC/lYxzbHeCpU7RCiXfsC0iMTlQIDAQAB
- 	MA0GCSqGSIb3DQEBCwUAA4IBAQBhsKnghyBki4NOnK5gHm7ow+7S+xvkjJhXBQ6i
- 	/xQD4/GCZ1tH5iFHXmo+bt4jB9hvKLyN5M5a8TlDwwqTLIoPDQJh37UpSCwbY/6z
- 	nE2aP3N2ue1/DeY60tgAh1c1uJDMeTiFbalJqSkneaHpNfvEQhUORFoN4yQSVEYg
- 	+T9mzTAWQ55TeBgbRevmA25nXHdPAae1MvJWDeG+aJfhq1I2hCwaitJ3iSwgn2ew
- 	637It/aBkMLhsCKTHxlXDGUX401ddbc0ZiC308cyMbis3iBeh4RBjkFxP8eIWFmK
- 	sos/dyqdua742L1cOKYFbLJfjA1VyxJQUxQvWKkbaq0xi7ao
- 	-----END CERTIFICATE-----
+ 	* The control-plane node functional-353000 host is not running: state=Stopped
+ 	  To start a cluster, run: "minikube start -p functional-353000"
  	"""
  )
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-darwin-arm64 -p functional-353000 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1969: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-353000 ssh "sudo cat /etc/ssl/certs/51391683.0": exit status 83 (41.902708ms)

-- stdout --
	* The control-plane node functional-353000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-353000"

-- /stdout --
functional_test.go:1971: failed to check existence of "/etc/ssl/certs/51391683.0" inside minikube. args "out/minikube-darwin-arm64 -p functional-353000 ssh \"sudo cat /etc/ssl/certs/51391683.0\"": exit status 83
functional_test.go:1977: failed verify pem file. minikube_test.pem -> /etc/ssl/certs/51391683.0 mismatch (-want +got):
  (
  	"""
- 	-----BEGIN CERTIFICATE-----
- 	MIIDsDCCApgCCQD5n0OIsOYIjDANBgkqhkiG9w0BAQsFADCBmTELMAkGA1UEBhMC
- 	VVMxEzARBgNVBAgMCkNhbGlmb3JuaWExFjAUBgNVBAcMDVNhbiBGcmFuY2lzY28x
- 	ETAPBgNVBAoMCG1pbmlrdWJlMRYwFAYDVQQLDA1QYXJ0eSBQYXJyb3RzMREwDwYD
- 	VQQDDAhtaW5pa3ViZTEfMB0GCSqGSIb3DQEJARYQbWluaWt1YmVAY25jZi5pbzAe
- 	Fw0yMDAzMDQyMTU2MjZaFw0yMTAzMDQyMTU2MjZaMIGZMQswCQYDVQQGEwJVUzET
- 	MBEGA1UECAwKQ2FsaWZvcm5pYTEWMBQGA1UEBwwNU2FuIEZyYW5jaXNjbzERMA8G
- 	A1UECgwIbWluaWt1YmUxFjAUBgNVBAsMDVBhcnR5IFBhcnJvdHMxETAPBgNVBAMM
- 	CG1pbmlrdWJlMR8wHQYJKoZIhvcNAQkBFhBtaW5pa3ViZUBjbmNmLmlvMIIBIjAN
- 	BgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA/qVMQ/234ul5yWI1yaHvV4pZ5Ffy
- 	M0bSMjzZUwlsvzerXzF3WrdpeZs5GzBNBWL/Db9KziGHCtfX9j5okJqPvB2lxdL5
- 	d5hFIYSORSemYLX2kdlnlykY5fzmFLKIUO9xXs0YNF4JUMEBgGK6n/BdLvXDUULZ
- 	26QOKs6+iH7TAL4RtozxQ8YXKQArdmpeAvxy2PSZGvVk1htKtyuKQsiFqH3oRleK
- 	3mljXfC5LsoIJHqd/8lAsckH87+IfwYnJ1CNJM2gueaCf+HmudVrvXfHaszh1Wh1
- 	9HKPE95Azi6CKoBGlRGFxt8UR72YIcTjC/lYxzbHeCpU7RCiXfsC0iMTlQIDAQAB
- 	MA0GCSqGSIb3DQEBCwUAA4IBAQBhsKnghyBki4NOnK5gHm7ow+7S+xvkjJhXBQ6i
- 	/xQD4/GCZ1tH5iFHXmo+bt4jB9hvKLyN5M5a8TlDwwqTLIoPDQJh37UpSCwbY/6z
- 	nE2aP3N2ue1/DeY60tgAh1c1uJDMeTiFbalJqSkneaHpNfvEQhUORFoN4yQSVEYg
- 	+T9mzTAWQ55TeBgbRevmA25nXHdPAae1MvJWDeG+aJfhq1I2hCwaitJ3iSwgn2ew
- 	637It/aBkMLhsCKTHxlXDGUX401ddbc0ZiC308cyMbis3iBeh4RBjkFxP8eIWFmK
- 	sos/dyqdua742L1cOKYFbLJfjA1VyxJQUxQvWKkbaq0xi7ao
- 	-----END CERTIFICATE-----
+ 	* The control-plane node functional-353000 host is not running: state=Stopped
+ 	  To start a cluster, run: "minikube start -p functional-353000"
  	"""
  )
functional_test.go:1995: Checking for existence of /etc/ssl/certs/77682.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-darwin-arm64 -p functional-353000 ssh "sudo cat /etc/ssl/certs/77682.pem"
functional_test.go:1996: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-353000 ssh "sudo cat /etc/ssl/certs/77682.pem": exit status 83 (42.839333ms)

-- stdout --
	* The control-plane node functional-353000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-353000"

-- /stdout --
functional_test.go:1998: failed to check existence of "/etc/ssl/certs/77682.pem" inside minikube. args "out/minikube-darwin-arm64 -p functional-353000 ssh \"sudo cat /etc/ssl/certs/77682.pem\"": exit status 83
functional_test.go:2004: failed verify pem file. minikube_test2.pem -> /etc/ssl/certs/77682.pem mismatch (-want +got):
  (
  	"""
- 	-----BEGIN CERTIFICATE-----
- 	MIIEwDCCAqgCCQCUeXrVemI4eTANBgkqhkiG9w0BAQsFADAiMQswCQYDVQQGEwJV
- 	UzETMBEGA1UECAwKQ2FsaWZvcm5pYTAeFw0yMTA3MjEyMDM4MDdaFw0yMTA4MjAy
- 	MDM4MDdaMCIxCzAJBgNVBAYTAlVTMRMwEQYDVQQIDApDYWxpZm9ybmlhMIICIjAN
- 	BgkqhkiG9w0BAQEFAAOCAg8AMIICCgKCAgEAu1+sCiosrYIy83a+KtPdoGsKun+O
- 	jKhETWQrum5FGxqKyQzhHN8q6iZBI94m1sZb2xIJDcKaAsHpKf1z/5pkrWQW4JeL
- 	riqJ5U0kENy6mWGoFzPLQHkKMXSYUEkRjrgJphh5zLkWDzx6uwxHIrs5yaIwn71Z
- 	enYezHL8NyMd10+up1lNvH+xGqnSJ3ge7Rj+r5XbA3avvaJ9AVdiJMU5ArMw4Snu
- 	dLDSUueWFedmXoQud083EvDbMY61blNKUR6BKBJLPopH+9NnUP7FRU25lvTsA8qT
- 	zz/KertMFhCfwvK4y7a7+GbIPD3fItpm23GORuViEGgeRMH7GGSNVfm4VtK5mI4c
- 	XK9BrCE+FXowHOWU3MTH1qsvTigd3JZc8BKTaePJGMIDT1HIAaoK7SOoZOLka7bi
- 	IiLaojrIWY6tE5IIl46cQWLg09G+kjKVfEEvEHNe0t22I9etGeUUEwlcMITHmEdE
- 	WrXytC8dSNFObSG5B2MX2Ovm+yNblcK7TI7lW/tkbxtKs56J1mNmi4LXXBM8FdF8
- 	w9MpJc+ySVj2+f+eBE08amKjC9VVbBzNSw7MRaI9fFY5AAifJ8g55F3/KCNq5aTd
- 	rBADtAa5kQkTVjfMBhG++0Ow4f55hm73oJAy/qxb09OY7Vk9ky/K3l8GfWv8ozIF
- 	w+Oq6vdsspvtVJ8CAwEAATANBgkqhkiG9w0BAQsFAAOCAgEAGKVxsf13kYGaQJ+J
- 	6eyRZXlV5Bp+9EGtMPGsuVv2HJa4oxMBn7Xc/bUhjY9Is/ZwfOpPUPO/nQtSSPmO
- 	aozQj/27p8HDTW201fwLNiZMcppBdJvIQdDzCh6e2ikg3lqsw2BoLX1vbgc9HPml
- 	P8QCHEz2lricGdTuMRtBgH5x/ZkZGLbADQBeyoPTsPaQceRt5hPYXWifqiHhcJoL
- 	2T+XgbaHJ4lEhCU0IXJG0vlLuAyxQzO3gMeHK8BlLt/h/JCiDndo63a4XCkenmY8
- 	8/6Y9Lgh+O3954YgwdXBIS33CzhY7c+tfpag1hwpDHro/zsyLwdN2JxZqWymfg8T
- 	RyIeJ5VpY+CGm1fVTx84twQbiM241eZDYaW6Ap5/ZuxPbzY++KDMZHMuJYURorAU
- 	JE/SE6WltXpKTvtzyPNKt8fgPQmFzlBosDZCf/5EiyVqbLzideF1l+Rd//5gRBg0
- 	B63fOlCdxGtDeX1gKzekV4sXQIsWKW1BSCEInJt5lOS8Ex4JTXy8crwKo7hv0zPc
- 	sOjAXbtDYlPf/jPFGKzkgFACB87Bx4ZUZMp/ShOzjMt20MLGLJvCGCQIHetUz+GG
- 	/LTwSNhgWCheYkbDwXhCmqbN249xE3fNHC6zQ/IMJ30v/UWN4RldGfFzmoVBRUUX
- 	eQ7g5kXER7H/Lh/2V9FyaRhPpZM=
- 	-----END CERTIFICATE-----
+ 	* The control-plane node functional-353000 host is not running: state=Stopped
+ 	  To start a cluster, run: "minikube start -p functional-353000"
  	"""
  )
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/77682.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-darwin-arm64 -p functional-353000 ssh "sudo cat /usr/share/ca-certificates/77682.pem"
functional_test.go:1996: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-353000 ssh "sudo cat /usr/share/ca-certificates/77682.pem": exit status 83 (42.748417ms)

-- stdout --
	* The control-plane node functional-353000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-353000"

-- /stdout --
functional_test.go:1998: failed to check existence of "/usr/share/ca-certificates/77682.pem" inside minikube. args "out/minikube-darwin-arm64 -p functional-353000 ssh \"sudo cat /usr/share/ca-certificates/77682.pem\"": exit status 83
functional_test.go:2004: failed verify pem file. minikube_test2.pem -> /usr/share/ca-certificates/77682.pem mismatch (-want +got):
  (
  	"""
- 	-----BEGIN CERTIFICATE-----
- 	MIIEwDCCAqgCCQCUeXrVemI4eTANBgkqhkiG9w0BAQsFADAiMQswCQYDVQQGEwJV
- 	UzETMBEGA1UECAwKQ2FsaWZvcm5pYTAeFw0yMTA3MjEyMDM4MDdaFw0yMTA4MjAy
- 	MDM4MDdaMCIxCzAJBgNVBAYTAlVTMRMwEQYDVQQIDApDYWxpZm9ybmlhMIICIjAN
- 	BgkqhkiG9w0BAQEFAAOCAg8AMIICCgKCAgEAu1+sCiosrYIy83a+KtPdoGsKun+O
- 	jKhETWQrum5FGxqKyQzhHN8q6iZBI94m1sZb2xIJDcKaAsHpKf1z/5pkrWQW4JeL
- 	riqJ5U0kENy6mWGoFzPLQHkKMXSYUEkRjrgJphh5zLkWDzx6uwxHIrs5yaIwn71Z
- 	enYezHL8NyMd10+up1lNvH+xGqnSJ3ge7Rj+r5XbA3avvaJ9AVdiJMU5ArMw4Snu
- 	dLDSUueWFedmXoQud083EvDbMY61blNKUR6BKBJLPopH+9NnUP7FRU25lvTsA8qT
- 	zz/KertMFhCfwvK4y7a7+GbIPD3fItpm23GORuViEGgeRMH7GGSNVfm4VtK5mI4c
- 	XK9BrCE+FXowHOWU3MTH1qsvTigd3JZc8BKTaePJGMIDT1HIAaoK7SOoZOLka7bi
- 	IiLaojrIWY6tE5IIl46cQWLg09G+kjKVfEEvEHNe0t22I9etGeUUEwlcMITHmEdE
- 	WrXytC8dSNFObSG5B2MX2Ovm+yNblcK7TI7lW/tkbxtKs56J1mNmi4LXXBM8FdF8
- 	w9MpJc+ySVj2+f+eBE08amKjC9VVbBzNSw7MRaI9fFY5AAifJ8g55F3/KCNq5aTd
- 	rBADtAa5kQkTVjfMBhG++0Ow4f55hm73oJAy/qxb09OY7Vk9ky/K3l8GfWv8ozIF
- 	w+Oq6vdsspvtVJ8CAwEAATANBgkqhkiG9w0BAQsFAAOCAgEAGKVxsf13kYGaQJ+J
- 	6eyRZXlV5Bp+9EGtMPGsuVv2HJa4oxMBn7Xc/bUhjY9Is/ZwfOpPUPO/nQtSSPmO
- 	aozQj/27p8HDTW201fwLNiZMcppBdJvIQdDzCh6e2ikg3lqsw2BoLX1vbgc9HPml
- 	P8QCHEz2lricGdTuMRtBgH5x/ZkZGLbADQBeyoPTsPaQceRt5hPYXWifqiHhcJoL
- 	2T+XgbaHJ4lEhCU0IXJG0vlLuAyxQzO3gMeHK8BlLt/h/JCiDndo63a4XCkenmY8
- 	8/6Y9Lgh+O3954YgwdXBIS33CzhY7c+tfpag1hwpDHro/zsyLwdN2JxZqWymfg8T
- 	RyIeJ5VpY+CGm1fVTx84twQbiM241eZDYaW6Ap5/ZuxPbzY++KDMZHMuJYURorAU
- 	JE/SE6WltXpKTvtzyPNKt8fgPQmFzlBosDZCf/5EiyVqbLzideF1l+Rd//5gRBg0
- 	B63fOlCdxGtDeX1gKzekV4sXQIsWKW1BSCEInJt5lOS8Ex4JTXy8crwKo7hv0zPc
- 	sOjAXbtDYlPf/jPFGKzkgFACB87Bx4ZUZMp/ShOzjMt20MLGLJvCGCQIHetUz+GG
- 	/LTwSNhgWCheYkbDwXhCmqbN249xE3fNHC6zQ/IMJ30v/UWN4RldGfFzmoVBRUUX
- 	eQ7g5kXER7H/Lh/2V9FyaRhPpZM=
- 	-----END CERTIFICATE-----
+ 	* The control-plane node functional-353000 host is not running: state=Stopped
+ 	  To start a cluster, run: "minikube start -p functional-353000"
  	"""
  )
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-darwin-arm64 -p functional-353000 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
functional_test.go:1996: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-353000 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0": exit status 83 (41.708458ms)

-- stdout --
	* The control-plane node functional-353000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-353000"

-- /stdout --
functional_test.go:1998: failed to check existence of "/etc/ssl/certs/3ec20f2e.0" inside minikube. args "out/minikube-darwin-arm64 -p functional-353000 ssh \"sudo cat /etc/ssl/certs/3ec20f2e.0\"": exit status 83
functional_test.go:2004: failed verify pem file. minikube_test2.pem -> /etc/ssl/certs/3ec20f2e.0 mismatch (-want +got):
  (
  	"""
- 	-----BEGIN CERTIFICATE-----
- 	MIIEwDCCAqgCCQCUeXrVemI4eTANBgkqhkiG9w0BAQsFADAiMQswCQYDVQQGEwJV
- 	UzETMBEGA1UECAwKQ2FsaWZvcm5pYTAeFw0yMTA3MjEyMDM4MDdaFw0yMTA4MjAy
- 	MDM4MDdaMCIxCzAJBgNVBAYTAlVTMRMwEQYDVQQIDApDYWxpZm9ybmlhMIICIjAN
- 	BgkqhkiG9w0BAQEFAAOCAg8AMIICCgKCAgEAu1+sCiosrYIy83a+KtPdoGsKun+O
- 	jKhETWQrum5FGxqKyQzhHN8q6iZBI94m1sZb2xIJDcKaAsHpKf1z/5pkrWQW4JeL
- 	riqJ5U0kENy6mWGoFzPLQHkKMXSYUEkRjrgJphh5zLkWDzx6uwxHIrs5yaIwn71Z
- 	enYezHL8NyMd10+up1lNvH+xGqnSJ3ge7Rj+r5XbA3avvaJ9AVdiJMU5ArMw4Snu
- 	dLDSUueWFedmXoQud083EvDbMY61blNKUR6BKBJLPopH+9NnUP7FRU25lvTsA8qT
- 	zz/KertMFhCfwvK4y7a7+GbIPD3fItpm23GORuViEGgeRMH7GGSNVfm4VtK5mI4c
- 	XK9BrCE+FXowHOWU3MTH1qsvTigd3JZc8BKTaePJGMIDT1HIAaoK7SOoZOLka7bi
- 	IiLaojrIWY6tE5IIl46cQWLg09G+kjKVfEEvEHNe0t22I9etGeUUEwlcMITHmEdE
- 	WrXytC8dSNFObSG5B2MX2Ovm+yNblcK7TI7lW/tkbxtKs56J1mNmi4LXXBM8FdF8
- 	w9MpJc+ySVj2+f+eBE08amKjC9VVbBzNSw7MRaI9fFY5AAifJ8g55F3/KCNq5aTd
- 	rBADtAa5kQkTVjfMBhG++0Ow4f55hm73oJAy/qxb09OY7Vk9ky/K3l8GfWv8ozIF
- 	w+Oq6vdsspvtVJ8CAwEAATANBgkqhkiG9w0BAQsFAAOCAgEAGKVxsf13kYGaQJ+J
- 	6eyRZXlV5Bp+9EGtMPGsuVv2HJa4oxMBn7Xc/bUhjY9Is/ZwfOpPUPO/nQtSSPmO
- 	aozQj/27p8HDTW201fwLNiZMcppBdJvIQdDzCh6e2ikg3lqsw2BoLX1vbgc9HPml
- 	P8QCHEz2lricGdTuMRtBgH5x/ZkZGLbADQBeyoPTsPaQceRt5hPYXWifqiHhcJoL
- 	2T+XgbaHJ4lEhCU0IXJG0vlLuAyxQzO3gMeHK8BlLt/h/JCiDndo63a4XCkenmY8
- 	8/6Y9Lgh+O3954YgwdXBIS33CzhY7c+tfpag1hwpDHro/zsyLwdN2JxZqWymfg8T
- 	RyIeJ5VpY+CGm1fVTx84twQbiM241eZDYaW6Ap5/ZuxPbzY++KDMZHMuJYURorAU
- 	JE/SE6WltXpKTvtzyPNKt8fgPQmFzlBosDZCf/5EiyVqbLzideF1l+Rd//5gRBg0
- 	B63fOlCdxGtDeX1gKzekV4sXQIsWKW1BSCEInJt5lOS8Ex4JTXy8crwKo7hv0zPc
- 	sOjAXbtDYlPf/jPFGKzkgFACB87Bx4ZUZMp/ShOzjMt20MLGLJvCGCQIHetUz+GG
- 	/LTwSNhgWCheYkbDwXhCmqbN249xE3fNHC6zQ/IMJ30v/UWN4RldGfFzmoVBRUUX
- 	eQ7g5kXER7H/Lh/2V9FyaRhPpZM=
- 	-----END CERTIFICATE-----
+ 	* The control-plane node functional-353000 host is not running: state=Stopped
+ 	  To start a cluster, run: "minikube start -p functional-353000"
  	"""
  )
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-353000 -n functional-353000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-353000 -n functional-353000: exit status 7 (32.461375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-353000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/CertSync (0.29s)
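Note: CertSync expects the two test PEMs to be installed under /etc/ssl/certs and /usr/share/ca-certificates in the guest. A sketch of the documented way to get a certificate into the VM's trust store, assuming the ~/.minikube/certs plus --embed-certs flow from the minikube handbook:

    # Sanity-check the PEM locally first.
    openssl x509 -in minikube_test.pem -noout -subject -enddate
    # Certs dropped here are installed into the guest when the cluster starts.
    mkdir -p ~/.minikube/certs
    cp minikube_test.pem ~/.minikube/certs/
    minikube start -p functional-353000 --embed-certs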

TestFunctional/parallel/NodeLabels (0.06s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-353000 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
functional_test.go:218: (dbg) Non-zero exit: kubectl --context functional-353000 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'": exit status 1 (26.497125ms)

** stderr ** 
	Error in configuration: context was not found for specified context: functional-353000

** /stderr **
functional_test.go:220: failed to 'kubectl get nodes' with args "kubectl --context functional-353000 get nodes --output=go-template \"--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'\"": exit status 1
functional_test.go:226: expected to have label "minikube.k8s.io/commit" in node labels but got : 
** stderr ** 
	Error in configuration: context was not found for specified context: functional-353000

** /stderr **
functional_test.go:226: expected to have label "minikube.k8s.io/version" in node labels but got : 
** stderr ** 
	Error in configuration: context was not found for specified context: functional-353000

** /stderr **
functional_test.go:226: expected to have label "minikube.k8s.io/updated_at" in node labels but got : 
** stderr ** 
	Error in configuration: context was not found for specified context: functional-353000

** /stderr **
functional_test.go:226: expected to have label "minikube.k8s.io/name" in node labels but got : 
** stderr ** 
	Error in configuration: context was not found for specified context: functional-353000

** /stderr **
functional_test.go:226: expected to have label "minikube.k8s.io/primary" in node labels but got : 
** stderr ** 
	Error in configuration: context was not found for specified context: functional-353000

** /stderr **
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-353000 -n functional-353000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-353000 -n functional-353000: exit status 7 (32.682166ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-353000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/NodeLabels (0.06s)
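For reference, the label assertion at functional_test.go:218/226 reduces to listing the first node's label keys with a go-template and checking for the minikube.k8s.io/* entries. A minimal standalone sketch of that check in Go (profile name taken from this run; the shell quoting around the template is dropped because exec passes it as a single argument):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        // Same go-template the test passes: print every label key on the
        // first node, separated by spaces.
        tmpl := `{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}`
        out, err := exec.Command("kubectl", "--context", "functional-353000",
            "get", "nodes", "--output=go-template", "--template="+tmpl).CombinedOutput()
        if err != nil {
            // With the cluster stopped the context does not exist, so kubectl
            // exits 1, which is exactly the failure recorded above.
            fmt.Printf("kubectl failed: %v\n%s", err, out)
            return
        }
        for _, label := range []string{
            "minikube.k8s.io/commit", "minikube.k8s.io/version",
            "minikube.k8s.io/updated_at", "minikube.k8s.io/name",
            "minikube.k8s.io/primary",
        } {
            if !strings.Contains(string(out), label) {
                fmt.Printf("missing node label %q\n", label)
            }
        }
    }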

                                                
                                    
TestFunctional/parallel/NonActiveRuntimeDisabled (0.05s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-darwin-arm64 -p functional-353000 ssh "sudo systemctl is-active crio"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-353000 ssh "sudo systemctl is-active crio": exit status 83 (52.262833ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-353000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-353000"

                                                
                                                
-- /stdout --
functional_test.go:2026: output of 
-- stdout --
	* The control-plane node functional-353000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-353000"

                                                
                                                
-- /stdout --: exit status 83
functional_test.go:2029: For runtime "docker": expected "crio" to be inactive but got "* The control-plane node functional-353000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-353000\"\n" 
--- FAIL: TestFunctional/parallel/NonActiveRuntimeDisabled (0.05s)
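The runtime check at functional_test.go:2023-2029 asks the node, over minikube ssh, whether the crio unit is active, and expects "inactive" while docker is the configured runtime. A rough standalone equivalent (note that systemctl is-active itself exits non-zero for an inactive unit, so the output text is what matters, not the exit code):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        // On this run minikube exits 83 before the ssh command ever runs,
        // so the advisory text comes back instead of "inactive".
        out, _ := exec.Command("out/minikube-darwin-arm64", "-p", "functional-353000",
            "ssh", "sudo systemctl is-active crio").CombinedOutput()
        if !strings.Contains(string(out), "inactive") {
            fmt.Printf("expected crio to be inactive, got %q\n", out)
        }
    }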

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.08s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-353000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-353000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:190: tunnel command failed with unexpected error: exit code 83. stderr: I0503 15:04:05.101490    8171 out.go:291] Setting OutFile to fd 1 ...
I0503 15:04:05.101645    8171 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0503 15:04:05.101652    8171 out.go:304] Setting ErrFile to fd 2...
I0503 15:04:05.101654    8171 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0503 15:04:05.101797    8171 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18793-7269/.minikube/bin
I0503 15:04:05.102080    8171 mustload.go:65] Loading cluster: functional-353000
I0503 15:04:05.102289    8171 config.go:182] Loaded profile config "functional-353000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
I0503 15:04:05.106547    8171 out.go:177] * The control-plane node functional-353000 host is not running: state=Stopped
I0503 15:04:05.118517    8171 out.go:177]   To start a cluster, run: "minikube start -p functional-353000"

                                                
                                                
stdout: * The control-plane node functional-353000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-353000"
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-arm64 -p functional-353000 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: read stdout failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-darwin-arm64 -p functional-353000 tunnel --alsologtostderr] stdout:
functional_test_tunnel_test.go:194: read stderr failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-darwin-arm64 -p functional-353000 tunnel --alsologtostderr] stderr:
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-arm64 -p functional-353000 tunnel --alsologtostderr] ...
helpers_test.go:502: unable to terminate pid 8170: os: process already finished
functional_test_tunnel_test.go:194: read stdout failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-darwin-arm64 -p functional-353000 tunnel --alsologtostderr] stdout:
functional_test_tunnel_test.go:194: read stderr failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-darwin-arm64 -p functional-353000 tunnel --alsologtostderr] stderr:
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.08s)
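The subtest only verifies that a second tunnel process can run alongside the first. A bare-bones sketch of that setup under the same binary and profile (error handling trimmed; on this run both processes exit immediately with status 83 because the host is stopped):

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    func main() {
        bin := "out/minikube-darwin-arm64"
        args := []string{"-p", "functional-353000", "tunnel", "--alsologtostderr"}
        first := exec.Command(bin, args...)
        second := exec.Command(bin, args...)
        for _, c := range []*exec.Cmd{first, second} {
            if err := c.Start(); err != nil {
                fmt.Println("start failed:", err)
                return
            }
            defer c.Process.Kill() // best-effort cleanup, as the harness does
        }
        // Both tunnels should stay up side by side on a healthy cluster.
        time.Sleep(2 * time.Second)
    }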

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:208: failed to get Kubernetes client for "functional-353000": client config: context "functional-353000" does not exist
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (97.54s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:288: failed to hit nginx at "http://": Temporary Error: Get "http:": http: no Host in request URL
functional_test_tunnel_test.go:290: (dbg) Run:  kubectl --context functional-353000 get svc nginx-svc
functional_test_tunnel_test.go:290: (dbg) Non-zero exit: kubectl --context functional-353000 get svc nginx-svc: exit status 1 (69.291084ms)

                                                
                                                
** stderr ** 
	Error in configuration: context was not found for specified context: functional-353000

                                                
                                                
** /stderr **
functional_test_tunnel_test.go:292: kubectl --context functional-353000 get svc nginx-svc failed: exit status 1
functional_test_tunnel_test.go:294: failed to kubectl get svc nginx-svc:
functional_test_tunnel_test.go:301: expected body to contain "Welcome to nginx!", but got *""*
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (97.54s)
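The access check itself is a plain HTTP GET against the tunnelled service URL, retried until the nginx welcome banner appears; the bare "http://" URL above is what remains when no LoadBalancer IP could be resolved for nginx-svc. A sketch of the polling loop:

    package main

    import (
        "fmt"
        "io"
        "net/http"
        "strings"
        "time"
    )

    // fetchNginx polls url until the body contains the nginx welcome banner.
    // url would normally be "http://" plus the LoadBalancer IP of nginx-svc.
    func fetchNginx(url string) error {
        client := &http.Client{Timeout: 5 * time.Second}
        for i := 0; i < 5; i++ {
            resp, err := client.Get(url)
            if err == nil {
                body, _ := io.ReadAll(resp.Body)
                resp.Body.Close()
                if strings.Contains(string(body), "Welcome to nginx!") {
                    return nil
                }
            }
            time.Sleep(time.Second)
        }
        return fmt.Errorf("no nginx welcome page at %q", url)
    }

    func main() {
        // With no resolvable service IP the URL has no host, reproducing the
        // "no Host in request URL" error above.
        fmt.Println(fetchNginx("http://"))
    }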

                                                
                                    
TestFunctional/parallel/ServiceCmd/DeployApp (0.03s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1433: (dbg) Run:  kubectl --context functional-353000 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1433: (dbg) Non-zero exit: kubectl --context functional-353000 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8: exit status 1 (26.262666ms)

                                                
                                                
** stderr ** 
	error: context "functional-353000" does not exist

                                                
                                                
** /stderr **
functional_test.go:1439: failed to create hello-node deployment with this command "kubectl --context functional-353000 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8": exit status 1.
--- FAIL: TestFunctional/parallel/ServiceCmd/DeployApp (0.03s)
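The deployment step is a single kubectl call; isolated, it looks like this (image and names exactly as in the log):

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        out, err := exec.Command("kubectl", "--context", "functional-353000",
            "create", "deployment", "hello-node",
            "--image=registry.k8s.io/echoserver-arm:1.8").CombinedOutput()
        if err != nil {
            // Fails here because the kubeconfig has no functional-353000
            // context, matching the stderr captured above.
            fmt.Printf("create deployment failed: %v\n%s", err, out)
        }
    }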

                                                
                                    
TestFunctional/parallel/ServiceCmd/List (0.05s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1455: (dbg) Run:  out/minikube-darwin-arm64 -p functional-353000 service list
functional_test.go:1455: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-353000 service list: exit status 83 (45.869041ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-353000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-353000"

                                                
                                                
-- /stdout --
functional_test.go:1457: failed to do service list. args "out/minikube-darwin-arm64 -p functional-353000 service list" : exit status 83
functional_test.go:1460: expected 'service list' to contain *hello-node* but got -"* The control-plane node functional-353000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-353000\"\n"-
--- FAIL: TestFunctional/parallel/ServiceCmd/List (0.05s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/JSONOutput (0.04s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1485: (dbg) Run:  out/minikube-darwin-arm64 -p functional-353000 service list -o json
functional_test.go:1485: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-353000 service list -o json: exit status 83 (44.845958ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-353000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-353000"

                                                
                                                
-- /stdout --
functional_test.go:1487: failed to list services with json format. args "out/minikube-darwin-arm64 -p functional-353000 service list -o json": exit status 83
--- FAIL: TestFunctional/parallel/ServiceCmd/JSONOutput (0.04s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/HTTPS (0.04s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1505: (dbg) Run:  out/minikube-darwin-arm64 -p functional-353000 service --namespace=default --https --url hello-node
functional_test.go:1505: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-353000 service --namespace=default --https --url hello-node: exit status 83 (43.796458ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-353000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-353000"

                                                
                                                
-- /stdout --
functional_test.go:1507: failed to get service url. args "out/minikube-darwin-arm64 -p functional-353000 service --namespace=default --https --url hello-node" : exit status 83
--- FAIL: TestFunctional/parallel/ServiceCmd/HTTPS (0.04s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/Format (0.05s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1536: (dbg) Run:  out/minikube-darwin-arm64 -p functional-353000 service hello-node --url --format={{.IP}}
functional_test.go:1536: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-353000 service hello-node --url --format={{.IP}}: exit status 83 (47.84275ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-353000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-353000"

                                                
                                                
-- /stdout --
functional_test.go:1538: failed to get service url with custom format. args "out/minikube-darwin-arm64 -p functional-353000 service hello-node --url --format={{.IP}}": exit status 83
functional_test.go:1544: "* The control-plane node functional-353000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-353000\"" is not a valid IP
--- FAIL: TestFunctional/parallel/ServiceCmd/Format (0.05s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/URL (0.05s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1555: (dbg) Run:  out/minikube-darwin-arm64 -p functional-353000 service hello-node --url
functional_test.go:1555: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-353000 service hello-node --url: exit status 83 (44.882458ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-353000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-353000"

                                                
                                                
-- /stdout --
functional_test.go:1557: failed to get service url. args: "out/minikube-darwin-arm64 -p functional-353000 service hello-node --url": exit status 83
functional_test.go:1561: found endpoint for hello-node: * The control-plane node functional-353000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-353000"
functional_test.go:1565: failed to parse "* The control-plane node functional-353000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-353000\"": parse "* The control-plane node functional-353000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-353000\"": net/url: invalid control character in URL
--- FAIL: TestFunctional/parallel/ServiceCmd/URL (0.05s)
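The final parse error comes straight from net/url: the test feeds whatever service --url printed into url.Parse, and the advisory text contains an embedded newline, which is a control character. A two-line reproduction:

    package main

    import (
        "fmt"
        "net/url"
    )

    func main() {
        // The stdout captured above, embedded newline included.
        got := "* The control-plane node functional-353000 host is not running: state=Stopped\n" +
            "  To start a cluster, run: \"minikube start -p functional-353000\""
        if _, err := url.Parse(got); err != nil {
            fmt.Println(err) // net/url: invalid control character in URL
        }
    }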

                                                
                                    
TestFunctional/parallel/Version/components (0.04s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-darwin-arm64 -p functional-353000 version -o=json --components
functional_test.go:2266: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-353000 version -o=json --components: exit status 83 (43.990458ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-353000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-353000"

                                                
                                                
-- /stdout --
functional_test.go:2268: error version: exit status 83
functional_test.go:2273: expected to see "buildctl" in the minikube version --components but got:
* The control-plane node functional-353000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-353000"
functional_test.go:2273: expected to see "commit" in the minikube version --components but got:
* The control-plane node functional-353000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-353000"
functional_test.go:2273: expected to see "containerd" in the minikube version --components but got:
* The control-plane node functional-353000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-353000"
functional_test.go:2273: expected to see "crictl" in the minikube version --components but got:
* The control-plane node functional-353000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-353000"
functional_test.go:2273: expected to see "crio" in the minikube version --components but got:
* The control-plane node functional-353000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-353000"
functional_test.go:2273: expected to see "ctr" in the minikube version --components but got:
* The control-plane node functional-353000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-353000"
functional_test.go:2273: expected to see "docker" in the minikube version --components but got:
* The control-plane node functional-353000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-353000"
functional_test.go:2273: expected to see "minikubeVersion" in the minikube version --components but got:
* The control-plane node functional-353000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-353000"
functional_test.go:2273: expected to see "podman" in the minikube version --components but got:
* The control-plane node functional-353000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-353000"
functional_test.go:2273: expected to see "crun" in the minikube version --components but got:
* The control-plane node functional-353000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-353000"
--- FAIL: TestFunctional/parallel/Version/components (0.04s)
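Judging by the failure messages, the components check effectively reduces to looking for each expected component name in the output of version -o=json --components. In outline (binary path and profile as in this run; on a stopped host the command exits 83 before printing any JSON):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        out, err := exec.Command("out/minikube-darwin-arm64", "-p", "functional-353000",
            "version", "-o=json", "--components").Output()
        if err != nil {
            fmt.Println("version failed:", err) // exit status 83 on this run
            return
        }
        for _, want := range []string{"buildctl", "commit", "containerd", "crictl",
            "crio", "ctr", "docker", "minikubeVersion", "podman", "crun"} {
            if !strings.Contains(string(out), want) {
                fmt.Printf("component %q not reported\n", want)
            }
        }
    }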

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListShort (0.04s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-darwin-arm64 -p functional-353000 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-353000 image ls --format short --alsologtostderr:

                                                
                                                
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-353000 image ls --format short --alsologtostderr:
I0503 15:04:55.984491    8460 out.go:291] Setting OutFile to fd 1 ...
I0503 15:04:55.984648    8460 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0503 15:04:55.984651    8460 out.go:304] Setting ErrFile to fd 2...
I0503 15:04:55.984653    8460 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0503 15:04:55.984779    8460 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18793-7269/.minikube/bin
I0503 15:04:55.985210    8460 config.go:182] Loaded profile config "functional-353000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
I0503 15:04:55.985275    8460 config.go:182] Loaded profile config "functional-353000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
functional_test.go:274: expected registry.k8s.io/pause to be listed with minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageListShort (0.04s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListTable (0.04s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-darwin-arm64 -p functional-353000 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-353000 image ls --format table --alsologtostderr:
|-------|-----|----------|------|
| Image | Tag | Image ID | Size |
|-------|-----|----------|------|
|-------|-----|----------|------|
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-353000 image ls --format table --alsologtostderr:
I0503 15:04:56.218404    8472 out.go:291] Setting OutFile to fd 1 ...
I0503 15:04:56.218556    8472 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0503 15:04:56.218559    8472 out.go:304] Setting ErrFile to fd 2...
I0503 15:04:56.218561    8472 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0503 15:04:56.218704    8472 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18793-7269/.minikube/bin
I0503 15:04:56.219096    8472 config.go:182] Loaded profile config "functional-353000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
I0503 15:04:56.219162    8472 config.go:182] Loaded profile config "functional-353000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
functional_test.go:274: expected | registry.k8s.io/pause to be listed with minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageListTable (0.04s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListJson (0.04s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-darwin-arm64 -p functional-353000 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-353000 image ls --format json --alsologtostderr:
[]
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-353000 image ls --format json --alsologtostderr:
I0503 15:04:56.180732    8470 out.go:291] Setting OutFile to fd 1 ...
I0503 15:04:56.180867    8470 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0503 15:04:56.180870    8470 out.go:304] Setting ErrFile to fd 2...
I0503 15:04:56.180873    8470 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0503 15:04:56.180997    8470 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18793-7269/.minikube/bin
I0503 15:04:56.181390    8470 config.go:182] Loaded profile config "functional-353000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
I0503 15:04:56.181454    8470 config.go:182] Loaded profile config "functional-353000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
functional_test.go:274: expected ["registry.k8s.io/pause to be listed with minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageListJson (0.04s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (0.04s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-darwin-arm64 -p functional-353000 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-353000 image ls --format yaml --alsologtostderr:
[]

                                                
                                                
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-353000 image ls --format yaml --alsologtostderr:
I0503 15:04:56.023003    8462 out.go:291] Setting OutFile to fd 1 ...
I0503 15:04:56.023142    8462 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0503 15:04:56.023146    8462 out.go:304] Setting ErrFile to fd 2...
I0503 15:04:56.023148    8462 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0503 15:04:56.023271    8462 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18793-7269/.minikube/bin
I0503 15:04:56.023676    8462 config.go:182] Loaded profile config "functional-353000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
I0503 15:04:56.023738    8462 config.go:182] Loaded profile config "functional-353000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
functional_test.go:274: expected - registry.k8s.io/pause to be listed with minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageListYaml (0.04s)
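All four list formats fail identically: a stopped cluster yields an empty image list (a bare [] for JSON and YAML, an empty table) where registry.k8s.io/pause is expected. A sketch of the JSON variant (the array element shape is left opaque, since the check only needs to know whether anything was listed at all):

    package main

    import (
        "encoding/json"
        "fmt"
        "os/exec"
    )

    func main() {
        out, err := exec.Command("out/minikube-darwin-arm64", "-p", "functional-353000",
            "image", "ls", "--format", "json").Output()
        if err != nil {
            fmt.Println("image ls failed:", err)
            return
        }
        var images []json.RawMessage
        if err := json.Unmarshal(out, &images); err != nil {
            fmt.Println("unexpected output:", err)
            return
        }
        fmt.Printf("%d images listed\n", len(images)) // 0 here, hence the failure
    }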

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageBuild (0.12s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-darwin-arm64 -p functional-353000 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-353000 ssh pgrep buildkitd: exit status 83 (44.749875ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-353000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-353000"

                                                
                                                
-- /stdout --
functional_test.go:314: (dbg) Run:  out/minikube-darwin-arm64 -p functional-353000 image build -t localhost/my-image:functional-353000 testdata/build --alsologtostderr
functional_test.go:322: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-353000 image build -t localhost/my-image:functional-353000 testdata/build --alsologtostderr:
I0503 15:04:56.104600    8466 out.go:291] Setting OutFile to fd 1 ...
I0503 15:04:56.105109    8466 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0503 15:04:56.105113    8466 out.go:304] Setting ErrFile to fd 2...
I0503 15:04:56.105116    8466 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0503 15:04:56.105274    8466 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18793-7269/.minikube/bin
I0503 15:04:56.105755    8466 config.go:182] Loaded profile config "functional-353000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
I0503 15:04:56.106204    8466 config.go:182] Loaded profile config "functional-353000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
I0503 15:04:56.106435    8466 build_images.go:133] succeeded building to: 
I0503 15:04:56.106439    8466 build_images.go:134] failed building to: 
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-353000 image ls
functional_test.go:442: expected "localhost/my-image:functional-353000" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageBuild (0.12s)
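ImageBuild is a three-step sequence: confirm buildkitd is reachable over ssh, build testdata/build as localhost/my-image:functional-353000, then expect that tag in image ls. Condensed into a sketch with a small helper:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // run invokes the minikube binary under test against this profile.
    func run(args ...string) (string, error) {
        out, err := exec.Command("out/minikube-darwin-arm64",
            append([]string{"-p", "functional-353000"}, args...)...).CombinedOutput()
        return string(out), err
    }

    func main() {
        if _, err := run("ssh", "pgrep buildkitd"); err != nil {
            fmt.Println("buildkitd not reachable:", err) // exit status 83 here
        }
        if _, err := run("image", "build", "-t",
            "localhost/my-image:functional-353000", "testdata/build"); err != nil {
            fmt.Println("build failed:", err)
        }
        out, _ := run("image", "ls")
        if !strings.Contains(out, "localhost/my-image:functional-353000") {
            fmt.Println("built image not listed")
        }
    }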

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.32s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-darwin-arm64 -p functional-353000 image load --daemon gcr.io/google-containers/addon-resizer:functional-353000 --alsologtostderr
functional_test.go:354: (dbg) Done: out/minikube-darwin-arm64 -p functional-353000 image load --daemon gcr.io/google-containers/addon-resizer:functional-353000 --alsologtostderr: (1.280390167s)
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-353000 image ls
functional_test.go:442: expected "gcr.io/google-containers/addon-resizer:functional-353000" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.32s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.34s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-darwin-arm64 -p functional-353000 image load --daemon gcr.io/google-containers/addon-resizer:functional-353000 --alsologtostderr
functional_test.go:364: (dbg) Done: out/minikube-darwin-arm64 -p functional-353000 image load --daemon gcr.io/google-containers/addon-resizer:functional-353000 --alsologtostderr: (1.307853417s)
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-353000 image ls
functional_test.go:442: expected "gcr.io/google-containers/addon-resizer:functional-353000" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.34s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (3.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:234: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.9: (1.866348334s)
functional_test.go:239: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-353000
functional_test.go:244: (dbg) Run:  out/minikube-darwin-arm64 -p functional-353000 image load --daemon gcr.io/google-containers/addon-resizer:functional-353000 --alsologtostderr
functional_test.go:244: (dbg) Done: out/minikube-darwin-arm64 -p functional-353000 image load --daemon gcr.io/google-containers/addon-resizer:functional-353000 --alsologtostderr: (1.169126417s)
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-353000 image ls
functional_test.go:442: expected "gcr.io/google-containers/addon-resizer:functional-353000" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (3.11s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.04s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-darwin-arm64 -p functional-353000 image save gcr.io/google-containers/addon-resizer:functional-353000 /Users/jenkins/workspace/addon-resizer-save.tar --alsologtostderr
functional_test.go:385: expected "/Users/jenkins/workspace/addon-resizer-save.tar" to exist after `image save`, but doesn't exist
--- FAIL: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.04s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.07s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-darwin-arm64 -p functional-353000 image load /Users/jenkins/workspace/addon-resizer-save.tar --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-353000 image ls
functional_test.go:442: expected "gcr.io/google-containers/addon-resizer:functional-353000" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.07s)
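ImageSaveToFile and ImageLoadFromFile are the two halves of a round trip: save a tagged image to a tarball, verify the file exists, load it back, and expect the tag in image ls. A condensed sketch (the tarball path below is illustrative, not the path from this run; here image save wrote nothing, so the load half had nothing to import):

    package main

    import (
        "fmt"
        "os"
        "os/exec"
    )

    func main() {
        const (
            img = "gcr.io/google-containers/addon-resizer:functional-353000"
            tar = "/tmp/addon-resizer-save.tar" // hypothetical path for this sketch
        )
        mk := func(args ...string) error {
            cmd := exec.Command("out/minikube-darwin-arm64",
                append([]string{"-p", "functional-353000"}, args...)...)
            cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
            return cmd.Run()
        }
        _ = mk("image", "save", img, tar)
        if _, err := os.Stat(tar); err != nil {
            // The failure above: no tarball on disk after image save.
            fmt.Println("tarball missing after image save:", err)
            return
        }
        _ = mk("image", "load", tar)
    }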

                                                
                                    
TestFunctional/parallel/DockerEnv/bash (0.05s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:495: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-arm64 -p functional-353000 docker-env) && out/minikube-darwin-arm64 status -p functional-353000"
functional_test.go:495: (dbg) Non-zero exit: /bin/bash -c "eval $(out/minikube-darwin-arm64 -p functional-353000 docker-env) && out/minikube-darwin-arm64 status -p functional-353000": exit status 1 (45.372708ms)
functional_test.go:501: failed to do status after eval-ing docker-env. error: exit status 1
--- FAIL: TestFunctional/parallel/DockerEnv/bash (0.05s)
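The docker-env test wraps the eval and the follow-up status call in one bash -c string so both run in the same shell; with the host stopped, docker-env errors and the whole pipeline exits 1. As a sketch:

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // eval must run in the same shell as the status check, hence one
        // bash -c string rather than two separate commands.
        script := `eval $(out/minikube-darwin-arm64 -p functional-353000 docker-env) && ` +
            `out/minikube-darwin-arm64 status -p functional-353000`
        out, err := exec.Command("/bin/bash", "-c", script).CombinedOutput()
        fmt.Printf("%s(err: %v)\n", out, err)
    }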

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (0.04s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-353000 update-context --alsologtostderr -v=2
functional_test.go:2115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-353000 update-context --alsologtostderr -v=2: exit status 83 (44.715708ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-353000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-353000"

                                                
                                                
-- /stdout --
** stderr ** 
	I0503 15:04:56.255621    8474 out.go:291] Setting OutFile to fd 1 ...
	I0503 15:04:56.256026    8474 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0503 15:04:56.256030    8474 out.go:304] Setting ErrFile to fd 2...
	I0503 15:04:56.256032    8474 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0503 15:04:56.256224    8474 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18793-7269/.minikube/bin
	I0503 15:04:56.256434    8474 mustload.go:65] Loading cluster: functional-353000
	I0503 15:04:56.256624    8474 config.go:182] Loaded profile config "functional-353000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0503 15:04:56.261015    8474 out.go:177] * The control-plane node functional-353000 host is not running: state=Stopped
	I0503 15:04:56.265009    8474 out.go:177]   To start a cluster, run: "minikube start -p functional-353000"

                                                
                                                
** /stderr **
functional_test.go:2117: failed to run minikube update-context: args "out/minikube-darwin-arm64 -p functional-353000 update-context --alsologtostderr -v=2": exit status 83
functional_test.go:2122: update-context: got="* The control-plane node functional-353000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-353000\"\n", want=*"No changes"*
--- FAIL: TestFunctional/parallel/UpdateContextCmd/no_changes (0.04s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.04s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-353000 update-context --alsologtostderr -v=2
functional_test.go:2115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-353000 update-context --alsologtostderr -v=2: exit status 83 (44.651125ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-353000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-353000"

                                                
                                                
-- /stdout --
** stderr ** 
	I0503 15:04:56.345807    8478 out.go:291] Setting OutFile to fd 1 ...
	I0503 15:04:56.345976    8478 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0503 15:04:56.345979    8478 out.go:304] Setting ErrFile to fd 2...
	I0503 15:04:56.345982    8478 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0503 15:04:56.346108    8478 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18793-7269/.minikube/bin
	I0503 15:04:56.346339    8478 mustload.go:65] Loading cluster: functional-353000
	I0503 15:04:56.346515    8478 config.go:182] Loaded profile config "functional-353000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0503 15:04:56.351013    8478 out.go:177] * The control-plane node functional-353000 host is not running: state=Stopped
	I0503 15:04:56.355007    8478 out.go:177]   To start a cluster, run: "minikube start -p functional-353000"

                                                
                                                
** /stderr **
functional_test.go:2117: failed to run minikube update-context: args "out/minikube-darwin-arm64 -p functional-353000 update-context --alsologtostderr -v=2": exit status 83
functional_test.go:2122: update-context: got="* The control-plane node functional-353000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-353000\"\n", want=*"context has been updated"*
--- FAIL: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.04s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.05s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-353000 update-context --alsologtostderr -v=2
functional_test.go:2115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-353000 update-context --alsologtostderr -v=2: exit status 83 (44.519292ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-353000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-353000"

                                                
                                                
-- /stdout --
** stderr ** 
	I0503 15:04:56.300925    8476 out.go:291] Setting OutFile to fd 1 ...
	I0503 15:04:56.301076    8476 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0503 15:04:56.301080    8476 out.go:304] Setting ErrFile to fd 2...
	I0503 15:04:56.301082    8476 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0503 15:04:56.301210    8476 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18793-7269/.minikube/bin
	I0503 15:04:56.301437    8476 mustload.go:65] Loading cluster: functional-353000
	I0503 15:04:56.301625    8476 config.go:182] Loaded profile config "functional-353000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0503 15:04:56.306045    8476 out.go:177] * The control-plane node functional-353000 host is not running: state=Stopped
	I0503 15:04:56.309963    8476 out.go:177]   To start a cluster, run: "minikube start -p functional-353000"

                                                
                                                
** /stderr **
functional_test.go:2117: failed to run minikube update-context: args "out/minikube-darwin-arm64 -p functional-353000 update-context --alsologtostderr -v=2": exit status 83
functional_test.go:2122: update-context: got="* The control-plane node functional-353000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-353000\"\n", want=*"context has been updated"*
--- FAIL: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.05s)
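All three update-context variants run the identical command and differ only in which message they accept ("No changes" or "context has been updated"). The shared check, sketched:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        out, _ := exec.Command("out/minikube-darwin-arm64", "-p", "functional-353000",
            "update-context", "--alsologtostderr", "-v=2").CombinedOutput()
        s := string(out)
        if !strings.Contains(s, "No changes") &&
            !strings.Contains(s, "context has been updated") {
            // On this run the stopped-host advisory comes back instead.
            fmt.Printf("unexpected update-context output: %q\n", s)
        }
    }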

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (15.07s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:319: (dbg) Run:  dig +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A
functional_test_tunnel_test.go:319: (dbg) Non-zero exit: dig +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A: exit status 9 (15.035538541s)

                                                
                                                
-- stdout --
	
	; <<>> DiG 9.10.6 <<>> +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A
	; (1 server found)
	;; global options: +cmd
	;; connection timed out; no servers could be reached

                                                
                                                
-- /stdout --
functional_test_tunnel_test.go:322: failed to resolve DNS name: exit status 9
functional_test_tunnel_test.go:329: expected body to contain "ANSWER: 1", but got *"\n; <<>> DiG 9.10.6 <<>> +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A\n; (1 server found)\n;; global options: +cmd\n;; connection timed out; no servers could be reached\n"*
functional_test_tunnel_test.go:332: (dbg) Run:  scutil --dns
functional_test_tunnel_test.go:336: debug for DNS configuration:
DNS configuration

                                                
                                                
resolver #1
nameserver[0] : 207.254.72.253
nameserver[1] : 207.254.72.254
nameserver[2] : 8.8.8.8
flags    : Request A records
reach    : 0x00000002 (Reachable)

                                                
                                                
resolver #2
domain   : local
options  : mdns
timeout  : 5
flags    : Request A records
reach    : 0x00000000 (Not Reachable)
order    : 300000

                                                
                                                
resolver #3
domain   : 254.169.in-addr.arpa
options  : mdns
timeout  : 5
flags    : Request A records
reach    : 0x00000000 (Not Reachable)
order    : 300200

                                                
                                                
resolver #4
domain   : 8.e.f.ip6.arpa
options  : mdns
timeout  : 5
flags    : Request A records
reach    : 0x00000000 (Not Reachable)
order    : 300400

                                                
                                                
resolver #5
domain   : 9.e.f.ip6.arpa
options  : mdns
timeout  : 5
flags    : Request A records
reach    : 0x00000000 (Not Reachable)
order    : 300600

                                                
                                                
resolver #6
domain   : a.e.f.ip6.arpa
options  : mdns
timeout  : 5
flags    : Request A records
reach    : 0x00000000 (Not Reachable)
order    : 300800

                                                
                                                
resolver #7
domain   : b.e.f.ip6.arpa
options  : mdns
timeout  : 5
flags    : Request A records
reach    : 0x00000000 (Not Reachable)
order    : 301000

                                                
                                                
resolver #8
domain   : cluster.local
nameserver[0] : 10.96.0.10
flags    : Request A records
reach    : 0x00000002 (Reachable)
order    : 1

                                                
                                                
DNS configuration (for scoped queries)

                                                
                                                
resolver #1
nameserver[0] : 207.254.72.253
nameserver[1] : 207.254.72.254
nameserver[2] : 8.8.8.8
if_index : 15 (en0)
flags    : Scoped, Request A records
reach    : 0x00000002 (Reachable)
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (15.07s)
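dig here queries the in-cluster DNS service directly at 10.96.0.10, which is only reachable while the tunnel routes the service CIDR; hence the 15-second silence. The same lookup in Go, with the resolver pinned to that server (a sketch; it times out on this run just as dig does):

    package main

    import (
        "context"
        "fmt"
        "net"
        "time"
    )

    func main() {
        r := &net.Resolver{
            PreferGo: true,
            // Send every query to the cluster DNS service instead of the
            // system resolvers listed by scutil above.
            Dial: func(ctx context.Context, network, _ string) (net.Conn, error) {
                d := net.Dialer{Timeout: 5 * time.Second}
                return d.DialContext(ctx, network, "10.96.0.10:53")
            },
        }
        ctx, cancel := context.WithTimeout(context.Background(), 15*time.Second)
        defer cancel()
        addrs, err := r.LookupHost(ctx, "nginx-svc.default.svc.cluster.local.")
        fmt.Println(addrs, err) // times out unless the tunnel is routing
    }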

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (26.09s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:419: failed to hit nginx with DNS forwarded "http://nginx-svc.default.svc.cluster.local.": Temporary Error: Get "http://nginx-svc.default.svc.cluster.local.": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
functional_test_tunnel_test.go:426: expected body to contain "Welcome to nginx!", but got *""*
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (26.09s)

                                                
                                    
TestMultiControlPlane/serial/StartCluster (10.08s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-darwin-arm64 start -p ha-688000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=qemu2 
ha_test.go:101: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p ha-688000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=qemu2 : exit status 80 (10.010400375s)

                                                
                                                
-- stdout --
	* [ha-688000] minikube v1.33.0 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18793
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18793-7269/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18793-7269/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "ha-688000" primary control-plane node in "ha-688000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "ha-688000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0503 15:06:34.437447    8520 out.go:291] Setting OutFile to fd 1 ...
	I0503 15:06:34.437828    8520 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0503 15:06:34.437833    8520 out.go:304] Setting ErrFile to fd 2...
	I0503 15:06:34.437836    8520 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0503 15:06:34.438019    8520 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18793-7269/.minikube/bin
	I0503 15:06:34.439536    8520 out.go:298] Setting JSON to false
	I0503 15:06:34.455717    8520 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3965,"bootTime":1714770029,"procs":494,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0503 15:06:34.455777    8520 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0503 15:06:34.461685    8520 out.go:177] * [ha-688000] minikube v1.33.0 on Darwin 14.4.1 (arm64)
	I0503 15:06:34.469664    8520 out.go:177]   - MINIKUBE_LOCATION=18793
	I0503 15:06:34.469702    8520 notify.go:220] Checking for updates...
	I0503 15:06:34.475556    8520 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18793-7269/kubeconfig
	I0503 15:06:34.478582    8520 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0503 15:06:34.481621    8520 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0503 15:06:34.484590    8520 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18793-7269/.minikube
	I0503 15:06:34.487603    8520 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0503 15:06:34.490887    8520 driver.go:392] Setting default libvirt URI to qemu:///system
	I0503 15:06:34.494563    8520 out.go:177] * Using the qemu2 driver based on user configuration
	I0503 15:06:34.501606    8520 start.go:297] selected driver: qemu2
	I0503 15:06:34.501614    8520 start.go:901] validating driver "qemu2" against <nil>
	I0503 15:06:34.501620    8520 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0503 15:06:34.504017    8520 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0503 15:06:34.506654    8520 out.go:177] * Automatically selected the socket_vmnet network
	I0503 15:06:34.509702    8520 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0503 15:06:34.509736    8520 cni.go:84] Creating CNI manager for ""
	I0503 15:06:34.509741    8520 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0503 15:06:34.509744    8520 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0503 15:06:34.509771    8520 start.go:340] cluster config:
	{Name:ha-688000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:ha-688000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0503 15:06:34.514326    8520 iso.go:125] acquiring lock: {Name:mk0ad6b95f1bb51c6112b561369aaae11c537b54 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0503 15:06:34.521600    8520 out.go:177] * Starting "ha-688000" primary control-plane node in "ha-688000" cluster
	I0503 15:06:34.525579    8520 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0503 15:06:34.525592    8520 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18793-7269/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-arm64.tar.lz4
	I0503 15:06:34.525598    8520 cache.go:56] Caching tarball of preloaded images
	I0503 15:06:34.525664    8520 preload.go:173] Found /Users/jenkins/minikube-integration/18793-7269/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0503 15:06:34.525669    8520 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
	I0503 15:06:34.525853    8520 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18793-7269/.minikube/profiles/ha-688000/config.json ...
	I0503 15:06:34.525867    8520 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18793-7269/.minikube/profiles/ha-688000/config.json: {Name:mke8883ccbfc6e20aca0ec214232c4fd7d2b1341 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0503 15:06:34.526240    8520 start.go:360] acquireMachinesLock for ha-688000: {Name:mk75a6a65f97ac1ce21c567594b284ec6b0a9ff6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0503 15:06:34.526274    8520 start.go:364] duration metric: took 28.292µs to acquireMachinesLock for "ha-688000"
	I0503 15:06:34.526285    8520 start.go:93] Provisioning new machine with config: &{Name:ha-688000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:ha-688000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0503 15:06:34.526314    8520 start.go:125] createHost starting for "" (driver="qemu2")
	I0503 15:06:34.531618    8520 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0503 15:06:34.548454    8520 start.go:159] libmachine.API.Create for "ha-688000" (driver="qemu2")
	I0503 15:06:34.548485    8520 client.go:168] LocalClient.Create starting
	I0503 15:06:34.548554    8520 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18793-7269/.minikube/certs/ca.pem
	I0503 15:06:34.548582    8520 main.go:141] libmachine: Decoding PEM data...
	I0503 15:06:34.548592    8520 main.go:141] libmachine: Parsing certificate...
	I0503 15:06:34.548635    8520 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18793-7269/.minikube/certs/cert.pem
	I0503 15:06:34.548663    8520 main.go:141] libmachine: Decoding PEM data...
	I0503 15:06:34.548669    8520 main.go:141] libmachine: Parsing certificate...
	I0503 15:06:34.549119    8520 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18793-7269/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18793-7269/.minikube/cache/iso/arm64/minikube-v1.33.0-1714498396-18779-arm64.iso...
	I0503 15:06:34.694029    8520 main.go:141] libmachine: Creating SSH key...
	I0503 15:06:34.772688    8520 main.go:141] libmachine: Creating Disk image...
	I0503 15:06:34.772693    8520 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0503 15:06:34.772857    8520 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/ha-688000/disk.qcow2.raw /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/ha-688000/disk.qcow2
	I0503 15:06:34.785675    8520 main.go:141] libmachine: STDOUT: 
	I0503 15:06:34.785701    8520 main.go:141] libmachine: STDERR: 
	I0503 15:06:34.785752    8520 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/ha-688000/disk.qcow2 +20000M
	I0503 15:06:34.796650    8520 main.go:141] libmachine: STDOUT: Image resized.
	
	I0503 15:06:34.796667    8520 main.go:141] libmachine: STDERR: 
	I0503 15:06:34.796680    8520 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/ha-688000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/ha-688000/disk.qcow2
	I0503 15:06:34.796684    8520 main.go:141] libmachine: Starting QEMU VM...
	I0503 15:06:34.796714    8520 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/ha-688000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18793-7269/.minikube/machines/ha-688000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/ha-688000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ae:30:d5:62:23:ae -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/ha-688000/disk.qcow2
	I0503 15:06:34.798474    8520 main.go:141] libmachine: STDOUT: 
	I0503 15:06:34.798490    8520 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0503 15:06:34.798506    8520 client.go:171] duration metric: took 250.019542ms to LocalClient.Create
	I0503 15:06:36.800648    8520 start.go:128] duration metric: took 2.274346583s to createHost
	I0503 15:06:36.800703    8520 start.go:83] releasing machines lock for "ha-688000", held for 2.27445575s
	W0503 15:06:36.800793    8520 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0503 15:06:36.816884    8520 out.go:177] * Deleting "ha-688000" in qemu2 ...
	W0503 15:06:36.850109    8520 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0503 15:06:36.850143    8520 start.go:728] Will try again in 5 seconds ...
	I0503 15:06:41.852279    8520 start.go:360] acquireMachinesLock for ha-688000: {Name:mk75a6a65f97ac1ce21c567594b284ec6b0a9ff6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0503 15:06:41.852701    8520 start.go:364] duration metric: took 318.708µs to acquireMachinesLock for "ha-688000"
	I0503 15:06:41.852854    8520 start.go:93] Provisioning new machine with config: &{Name:ha-688000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:ha-688000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0503 15:06:41.853169    8520 start.go:125] createHost starting for "" (driver="qemu2")
	I0503 15:06:41.869028    8520 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0503 15:06:41.919379    8520 start.go:159] libmachine.API.Create for "ha-688000" (driver="qemu2")
	I0503 15:06:41.919433    8520 client.go:168] LocalClient.Create starting
	I0503 15:06:41.919541    8520 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18793-7269/.minikube/certs/ca.pem
	I0503 15:06:41.919605    8520 main.go:141] libmachine: Decoding PEM data...
	I0503 15:06:41.919625    8520 main.go:141] libmachine: Parsing certificate...
	I0503 15:06:41.919710    8520 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18793-7269/.minikube/certs/cert.pem
	I0503 15:06:41.919763    8520 main.go:141] libmachine: Decoding PEM data...
	I0503 15:06:41.919774    8520 main.go:141] libmachine: Parsing certificate...
	I0503 15:06:41.920287    8520 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18793-7269/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18793-7269/.minikube/cache/iso/arm64/minikube-v1.33.0-1714498396-18779-arm64.iso...
	I0503 15:06:42.078959    8520 main.go:141] libmachine: Creating SSH key...
	I0503 15:06:42.341119    8520 main.go:141] libmachine: Creating Disk image...
	I0503 15:06:42.341130    8520 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0503 15:06:42.341356    8520 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/ha-688000/disk.qcow2.raw /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/ha-688000/disk.qcow2
	I0503 15:06:42.354684    8520 main.go:141] libmachine: STDOUT: 
	I0503 15:06:42.354709    8520 main.go:141] libmachine: STDERR: 
	I0503 15:06:42.354778    8520 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/ha-688000/disk.qcow2 +20000M
	I0503 15:06:42.366005    8520 main.go:141] libmachine: STDOUT: Image resized.
	
	I0503 15:06:42.366034    8520 main.go:141] libmachine: STDERR: 
	I0503 15:06:42.366048    8520 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/ha-688000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/ha-688000/disk.qcow2
	I0503 15:06:42.366054    8520 main.go:141] libmachine: Starting QEMU VM...
	I0503 15:06:42.366092    8520 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/ha-688000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18793-7269/.minikube/machines/ha-688000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/ha-688000/qemu.pid -device virtio-net-pci,netdev=net0,mac=da:57:92:9a:a7:39 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/ha-688000/disk.qcow2
	I0503 15:06:42.367756    8520 main.go:141] libmachine: STDOUT: 
	I0503 15:06:42.367769    8520 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0503 15:06:42.367782    8520 client.go:171] duration metric: took 448.350167ms to LocalClient.Create
	I0503 15:06:44.370015    8520 start.go:128] duration metric: took 2.516817s to createHost
	I0503 15:06:44.370087    8520 start.go:83] releasing machines lock for "ha-688000", held for 2.517401916s
	W0503 15:06:44.370482    8520 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p ha-688000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p ha-688000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0503 15:06:44.382092    8520 out.go:177] 
	W0503 15:06:44.389240    8520 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0503 15:06:44.389266    8520 out.go:239] * 
	* 
	W0503 15:06:44.392028    8520 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0503 15:06:44.402179    8520 out.go:177] 

                                                
                                                
** /stderr **
ha_test.go:103: failed to fresh-start ha (multi-control plane) cluster. args "out/minikube-darwin-arm64 start -p ha-688000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-688000 -n ha-688000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-688000 -n ha-688000: exit status 7 (70.892916ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-688000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/StartCluster (10.08s)
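Analysis: every TestMultiControlPlane subtest below fails downstream of this one. The qemu2 driver launches the VM through socket_vmnet_client, and the client could not reach the daemon socket ("Failed to connect to "/var/run/socket_vmnet": Connection refused"), so no host was ever created and the profile was left in state "Stopped". A minimal check on a similar runner, assuming the install layout shown in the qemu command line above (the gateway address below is only an example):

    # Is the socket_vmnet daemon socket present?
    ls -l /var/run/socket_vmnet
    # If not, start the daemon by hand before re-running the suite:
    sudo /opt/socket_vmnet/bin/socket_vmnet --vmnet-gateway=192.168.105.1 /var/run/socket_vmnet &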

                                                
                                    
TestMultiControlPlane/serial/DeployApp (106.84s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-688000 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:128: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-688000 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml: exit status 1 (60.72625ms)

                                                
                                                
** stderr ** 
	error: cluster "ha-688000" does not exist

                                                
                                                
** /stderr **
ha_test.go:130: failed to create busybox deployment to ha (multi-control plane) cluster
ha_test.go:133: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-688000 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-688000 -- rollout status deployment/busybox: exit status 1 (59.419959ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-688000"

                                                
                                                
** /stderr **
ha_test.go:135: failed to deploy busybox to ha (multi-control plane) cluster
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-688000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-688000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (58.651ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-688000"

                                                
                                                
** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-688000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-688000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (106.188916ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-688000"

                                                
                                                
** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-688000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-688000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (106.003208ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-688000"

                                                
                                                
** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-688000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-688000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (105.656917ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-688000"

                                                
                                                
** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-688000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-688000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (106.299042ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-688000"

                                                
                                                
** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-688000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-688000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (105.248ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-688000"

                                                
                                                
** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-688000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-688000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (80.097292ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-688000"

                                                
                                                
** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-688000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-688000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (104.770292ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-688000"

                                                
                                                
** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-688000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-688000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (106.433542ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-688000"

                                                
                                                
** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-688000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-688000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (105.429666ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-688000"

                                                
                                                
** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-688000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-688000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (105.440917ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-688000"

                                                
                                                
** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:159: failed to resolve pod IPs: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:163: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-688000 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:163: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-688000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (59.276167ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-688000"

                                                
                                                
** /stderr **
ha_test.go:165: failed get Pod names
ha_test.go:171: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-688000 -- exec  -- nslookup kubernetes.io
ha_test.go:171: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-688000 -- exec  -- nslookup kubernetes.io: exit status 1 (59.023292ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-688000"

                                                
                                                
** /stderr **
ha_test.go:173: Pod  could not resolve 'kubernetes.io': exit status 1
ha_test.go:181: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-688000 -- exec  -- nslookup kubernetes.default
ha_test.go:181: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-688000 -- exec  -- nslookup kubernetes.default: exit status 1 (58.931458ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-688000"

                                                
                                                
** /stderr **
ha_test.go:183: Pod  could not resolve 'kubernetes.default': exit status 1
ha_test.go:189: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-688000 -- exec  -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-688000 -- exec  -- nslookup kubernetes.default.svc.cluster.local: exit status 1 (58.625125ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-688000"

                                                
                                                
** /stderr **
ha_test.go:191: Pod  could not resolve local service (kubernetes.default.svc.cluster.local): exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-688000 -n ha-688000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-688000 -n ha-688000: exit status 7 (32.365458ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-688000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DeployApp (106.84s)
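Analysis: because StartCluster never created the cluster, no kubeconfig entry exists for the profile, so every kubectl invocation above fails client-side ("cluster "ha-688000" does not exist" / "no server found for cluster "ha-688000"") without ever reaching an API server. One way to confirm on the runner, using only stock kubectl subcommands:

    kubectl config get-contexts
    kubectl config view -o jsonpath='{.clusters[*].name}'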

                                                
                                    
TestMultiControlPlane/serial/PingHostFromPods (0.09s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-688000 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:199: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-688000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (58.201208ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-688000"

                                                
                                                
** /stderr **
ha_test.go:201: failed to get Pod names: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-688000 -n ha-688000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-688000 -n ha-688000: exit status 7 (32.183417ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-688000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/PingHostFromPods (0.09s)

                                                
                                    
TestMultiControlPlane/serial/AddWorkerNode (0.08s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 node add -p ha-688000 -v=7 --alsologtostderr
ha_test.go:228: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p ha-688000 -v=7 --alsologtostderr: exit status 83 (44.612542ms)

                                                
                                                
-- stdout --
	* The control-plane node ha-688000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p ha-688000"

                                                
                                                
-- /stdout --
** stderr ** 
	I0503 15:08:31.416189    8613 out.go:291] Setting OutFile to fd 1 ...
	I0503 15:08:31.416751    8613 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0503 15:08:31.416755    8613 out.go:304] Setting ErrFile to fd 2...
	I0503 15:08:31.416757    8613 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0503 15:08:31.416902    8613 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18793-7269/.minikube/bin
	I0503 15:08:31.417150    8613 mustload.go:65] Loading cluster: ha-688000
	I0503 15:08:31.417338    8613 config.go:182] Loaded profile config "ha-688000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0503 15:08:31.421571    8613 out.go:177] * The control-plane node ha-688000 host is not running: state=Stopped
	I0503 15:08:31.425407    8613 out.go:177]   To start a cluster, run: "minikube start -p ha-688000"

                                                
                                                
** /stderr **
ha_test.go:230: failed to add worker node to current ha (multi-control plane) cluster. args "out/minikube-darwin-arm64 node add -p ha-688000 -v=7 --alsologtostderr" : exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-688000 -n ha-688000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-688000 -n ha-688000: exit status 7 (32.125625ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-688000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/AddWorkerNode (0.08s)
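Analysis: "minikube node add" checks the control-plane host first and bails out when it is stopped, which is why the command exits immediately with only the advisory text above. The recovery path is the one the tool itself prints:

    out/minikube-darwin-arm64 start -p ha-688000
    out/minikube-darwin-arm64 node add -p ha-688000 -v=7 --alsologtostderr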

                                                
                                    
TestMultiControlPlane/serial/NodeLabels (0.06s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-688000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
ha_test.go:255: (dbg) Non-zero exit: kubectl --context ha-688000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]": exit status 1 (26.601708ms)

                                                
                                                
** stderr ** 
	Error in configuration: context was not found for specified context: ha-688000

                                                
                                                
** /stderr **
ha_test.go:257: failed to 'kubectl get nodes' with args "kubectl --context ha-688000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": exit status 1
ha_test.go:264: failed to decode json from label list: args "kubectl --context ha-688000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": unexpected end of JSON input
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-688000 -n ha-688000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-688000 -n ha-688000: exit status 7 (32.365417ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-688000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/NodeLabels (0.06s)
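Analysis: unlike the earlier subtests, NodeLabels shells out to the system kubectl with --context ha-688000; since the failed start never wrote that context, kubectl rejects the request while loading its configuration, before any network I/O. Listing the contexts that do exist makes this quick to verify:

    kubectl config get-contexts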

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterClusterStart (0.1s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:304: expected profile "ha-688000" in json of 'profile list' to include 4 nodes but have 1 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-688000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-688000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.30.0\",\"ClusterName\":\"ha-688000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.0\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
ha_test.go:307: expected profile "ha-688000" in json of 'profile list' to have "HAppy" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-688000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-688000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.30.0\",\"ClusterName\":\"ha-688000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.0\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-688000 -n ha-688000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-688000 -n ha-688000: exit status 7 (32.485167ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-688000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.10s)
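Analysis: the assertion message embeds the entire escaped "profile list" payload, which is hard to read inline. To inspect the same data directly on the runner (assuming python3 is available; any JSON pretty-printer works):

    out/minikube-darwin-arm64 profile list --output json | python3 -m json.tool

The decoded config shows a single stopped node, so both the 4-node count check and the "HAppy" status check necessarily fail.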

                                                
                                    
TestMultiControlPlane/serial/CopyFile (0.06s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-darwin-arm64 -p ha-688000 status --output json -v=7 --alsologtostderr
ha_test.go:326: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-688000 status --output json -v=7 --alsologtostderr: exit status 7 (32.061167ms)

                                                
                                                
-- stdout --
	{"Name":"ha-688000","Host":"Stopped","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Stopped","Worker":false}

                                                
                                                
-- /stdout --
** stderr ** 
	I0503 15:08:31.657193    8626 out.go:291] Setting OutFile to fd 1 ...
	I0503 15:08:31.657364    8626 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0503 15:08:31.657367    8626 out.go:304] Setting ErrFile to fd 2...
	I0503 15:08:31.657369    8626 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0503 15:08:31.657495    8626 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18793-7269/.minikube/bin
	I0503 15:08:31.657615    8626 out.go:298] Setting JSON to true
	I0503 15:08:31.657626    8626 mustload.go:65] Loading cluster: ha-688000
	I0503 15:08:31.657709    8626 notify.go:220] Checking for updates...
	I0503 15:08:31.657822    8626 config.go:182] Loaded profile config "ha-688000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0503 15:08:31.657827    8626 status.go:255] checking status of ha-688000 ...
	I0503 15:08:31.658044    8626 status.go:330] ha-688000 host status = "Stopped" (err=<nil>)
	I0503 15:08:31.658048    8626 status.go:343] host is not running, skipping remaining checks
	I0503 15:08:31.658050    8626 status.go:257] ha-688000 status: &{Name:ha-688000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:333: failed to decode json from status: args "out/minikube-darwin-arm64 -p ha-688000 status --output json -v=7 --alsologtostderr": json: cannot unmarshal object into Go value of type []cmd.Status
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-688000 -n ha-688000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-688000 -n ha-688000: exit status 7 (32.227375ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-688000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/CopyFile (0.06s)
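Analysis: the unmarshal error is a shape mismatch, not corrupt output. With only one node, "status --output json" emits a single JSON object (see the stdout block above), while the HA test decodes into a []cmd.Status slice, i.e. one array element per node. A quick shape check, assuming jq is installed (illustrative only):

    out/minikube-darwin-arm64 -p ha-688000 status --output json | jq type
    # prints "object"; the test expects "array"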

                                                
                                    
TestMultiControlPlane/serial/StopSecondaryNode (0.11s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-darwin-arm64 -p ha-688000 node stop m02 -v=7 --alsologtostderr
ha_test.go:363: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-688000 node stop m02 -v=7 --alsologtostderr: exit status 85 (50.053375ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0503 15:08:31.722165    8630 out.go:291] Setting OutFile to fd 1 ...
	I0503 15:08:31.722762    8630 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0503 15:08:31.722765    8630 out.go:304] Setting ErrFile to fd 2...
	I0503 15:08:31.722767    8630 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0503 15:08:31.722963    8630 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18793-7269/.minikube/bin
	I0503 15:08:31.723203    8630 mustload.go:65] Loading cluster: ha-688000
	I0503 15:08:31.723399    8630 config.go:182] Loaded profile config "ha-688000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0503 15:08:31.727918    8630 out.go:177] 
	W0503 15:08:31.730919    8630 out.go:239] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
	W0503 15:08:31.730927    8630 out.go:239] * 
	* 
	W0503 15:08:31.732795    8630 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_8ce24bb09be8aab84076d51946735f62cbf80299_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_8ce24bb09be8aab84076d51946735f62cbf80299_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0503 15:08:31.736907    8630 out.go:177] 

                                                
                                                
** /stderr **
ha_test.go:365: secondary control-plane node stop returned an error. args "out/minikube-darwin-arm64 -p ha-688000 node stop m02 -v=7 --alsologtostderr": exit status 85
ha_test.go:369: (dbg) Run:  out/minikube-darwin-arm64 -p ha-688000 status -v=7 --alsologtostderr
ha_test.go:369: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-688000 status -v=7 --alsologtostderr: exit status 7 (32.211416ms)

                                                
                                                
-- stdout --
	ha-688000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0503 15:08:31.772489    8632 out.go:291] Setting OutFile to fd 1 ...
	I0503 15:08:31.772639    8632 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0503 15:08:31.772642    8632 out.go:304] Setting ErrFile to fd 2...
	I0503 15:08:31.772645    8632 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0503 15:08:31.772777    8632 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18793-7269/.minikube/bin
	I0503 15:08:31.772891    8632 out.go:298] Setting JSON to false
	I0503 15:08:31.772902    8632 mustload.go:65] Loading cluster: ha-688000
	I0503 15:08:31.772968    8632 notify.go:220] Checking for updates...
	I0503 15:08:31.773097    8632 config.go:182] Loaded profile config "ha-688000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0503 15:08:31.773102    8632 status.go:255] checking status of ha-688000 ...
	I0503 15:08:31.773308    8632 status.go:330] ha-688000 host status = "Stopped" (err=<nil>)
	I0503 15:08:31.773312    8632 status.go:343] host is not running, skipping remaining checks
	I0503 15:08:31.773314    8632 status.go:257] ha-688000 status: &{Name:ha-688000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:375: status says not all three control-plane nodes are present: args "out/minikube-darwin-arm64 -p ha-688000 status -v=7 --alsologtostderr": ha-688000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
ha_test.go:378: status says not three hosts are running: args "out/minikube-darwin-arm64 -p ha-688000 status -v=7 --alsologtostderr": ha-688000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
ha_test.go:381: status says not three kubelets are running: args "out/minikube-darwin-arm64 -p ha-688000 status -v=7 --alsologtostderr": ha-688000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
ha_test.go:384: status says not two apiservers are running: args "out/minikube-darwin-arm64 -p ha-688000 status -v=7 --alsologtostderr": ha-688000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-688000 -n ha-688000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-688000 -n ha-688000: exit status 7 (32.214875ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-688000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/StopSecondaryNode (0.11s)
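Analysis: GUEST_NODE_RETRIEVE fires because the profile only ever contained the primary control-plane node, so there is no m02 to stop. The profile's node list shows this directly:

    out/minikube-darwin-arm64 node list -p ha-688000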

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.11s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:413: expected profile "ha-688000" in json of 'profile list' to have "Degraded" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-688000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-688000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.30.0\",\"ClusterName\":\"ha-688000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.0\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-688000 -n ha-688000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-688000 -n ha-688000: exit status 7 (32.647458ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-688000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.11s)
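Note: the assertion at ha_test.go:413 keys off the "Status" field of the profile-list JSON shown above. A quick way to pull just that field, assuming jq is installed (it is not part of the test harness):

    out/minikube-darwin-arm64 profile list --output json \
      | jq -r '.valid[] | select(.Name == "ha-688000") | .Status'
    # prints "Stopped" on this run; the test expects "Degraded"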

TestMultiControlPlane/serial/RestartSecondaryNode (44.78s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-darwin-arm64 -p ha-688000 node start m02 -v=7 --alsologtostderr
ha_test.go:420: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-688000 node start m02 -v=7 --alsologtostderr: exit status 85 (50.222375ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I0503 15:08:31.943728    8642 out.go:291] Setting OutFile to fd 1 ...
	I0503 15:08:31.944225    8642 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0503 15:08:31.944230    8642 out.go:304] Setting ErrFile to fd 2...
	I0503 15:08:31.944233    8642 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0503 15:08:31.944395    8642 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18793-7269/.minikube/bin
	I0503 15:08:31.944666    8642 mustload.go:65] Loading cluster: ha-688000
	I0503 15:08:31.944865    8642 config.go:182] Loaded profile config "ha-688000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0503 15:08:31.949271    8642 out.go:177] 
	W0503 15:08:31.952268    8642 out.go:239] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
	W0503 15:08:31.952273    8642 out.go:239] * 
	* 
	W0503 15:08:31.954160    8642 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0503 15:08:31.958291    8642 out.go:177] 

** /stderr **
ha_test.go:422: I0503 15:08:31.943728    8642 out.go:291] Setting OutFile to fd 1 ...
I0503 15:08:31.944225    8642 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0503 15:08:31.944230    8642 out.go:304] Setting ErrFile to fd 2...
I0503 15:08:31.944233    8642 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0503 15:08:31.944395    8642 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18793-7269/.minikube/bin
I0503 15:08:31.944666    8642 mustload.go:65] Loading cluster: ha-688000
I0503 15:08:31.944865    8642 config.go:182] Loaded profile config "ha-688000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
I0503 15:08:31.949271    8642 out.go:177] 
W0503 15:08:31.952268    8642 out.go:239] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
W0503 15:08:31.952273    8642 out.go:239] * 
* 
W0503 15:08:31.954160    8642 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
I0503 15:08:31.958291    8642 out.go:177] 
ha_test.go:423: secondary control-plane node start returned an error. args "out/minikube-darwin-arm64 -p ha-688000 node start m02 -v=7 --alsologtostderr": exit status 85
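Note: exit status 85 (GUEST_NODE_RETRIEVE) follows from the profile containing only its primary node, so there is no m02 to start. This can be confirmed with the node list command that RestartClusterKeepsNodes runs later:

    out/minikube-darwin-arm64 node list -p ha-688000
    # on this run the profile lists a single control-plane node and no m02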
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-688000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-688000 status -v=7 --alsologtostderr: exit status 7 (32.379375ms)

-- stdout --
	ha-688000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0503 15:08:31.994054    8644 out.go:291] Setting OutFile to fd 1 ...
	I0503 15:08:31.994221    8644 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0503 15:08:31.994224    8644 out.go:304] Setting ErrFile to fd 2...
	I0503 15:08:31.994227    8644 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0503 15:08:31.994352    8644 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18793-7269/.minikube/bin
	I0503 15:08:31.994466    8644 out.go:298] Setting JSON to false
	I0503 15:08:31.994479    8644 mustload.go:65] Loading cluster: ha-688000
	I0503 15:08:31.994535    8644 notify.go:220] Checking for updates...
	I0503 15:08:31.994694    8644 config.go:182] Loaded profile config "ha-688000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0503 15:08:31.994700    8644 status.go:255] checking status of ha-688000 ...
	I0503 15:08:31.994889    8644 status.go:330] ha-688000 host status = "Stopped" (err=<nil>)
	I0503 15:08:31.994893    8644 status.go:343] host is not running, skipping remaining checks
	I0503 15:08:31.994896    8644 status.go:257] ha-688000 status: &{Name:ha-688000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-688000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-688000 status -v=7 --alsologtostderr: exit status 7 (76.272792ms)

-- stdout --
	ha-688000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0503 15:08:33.271084    8646 out.go:291] Setting OutFile to fd 1 ...
	I0503 15:08:33.271273    8646 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0503 15:08:33.271277    8646 out.go:304] Setting ErrFile to fd 2...
	I0503 15:08:33.271281    8646 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0503 15:08:33.271447    8646 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18793-7269/.minikube/bin
	I0503 15:08:33.271603    8646 out.go:298] Setting JSON to false
	I0503 15:08:33.271617    8646 mustload.go:65] Loading cluster: ha-688000
	I0503 15:08:33.271657    8646 notify.go:220] Checking for updates...
	I0503 15:08:33.271859    8646 config.go:182] Loaded profile config "ha-688000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0503 15:08:33.271866    8646 status.go:255] checking status of ha-688000 ...
	I0503 15:08:33.272114    8646 status.go:330] ha-688000 host status = "Stopped" (err=<nil>)
	I0503 15:08:33.272118    8646 status.go:343] host is not running, skipping remaining checks
	I0503 15:08:33.272121    8646 status.go:257] ha-688000 status: &{Name:ha-688000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-688000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-688000 status -v=7 --alsologtostderr: exit status 7 (77.335833ms)

-- stdout --
	ha-688000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0503 15:08:35.371188    8648 out.go:291] Setting OutFile to fd 1 ...
	I0503 15:08:35.371361    8648 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0503 15:08:35.371365    8648 out.go:304] Setting ErrFile to fd 2...
	I0503 15:08:35.371369    8648 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0503 15:08:35.371537    8648 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18793-7269/.minikube/bin
	I0503 15:08:35.371694    8648 out.go:298] Setting JSON to false
	I0503 15:08:35.371709    8648 mustload.go:65] Loading cluster: ha-688000
	I0503 15:08:35.371746    8648 notify.go:220] Checking for updates...
	I0503 15:08:35.372013    8648 config.go:182] Loaded profile config "ha-688000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0503 15:08:35.372020    8648 status.go:255] checking status of ha-688000 ...
	I0503 15:08:35.372284    8648 status.go:330] ha-688000 host status = "Stopped" (err=<nil>)
	I0503 15:08:35.372289    8648 status.go:343] host is not running, skipping remaining checks
	I0503 15:08:35.372292    8648 status.go:257] ha-688000 status: &{Name:ha-688000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-688000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-688000 status -v=7 --alsologtostderr: exit status 7 (75.768125ms)

-- stdout --
	ha-688000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0503 15:08:38.097400    8656 out.go:291] Setting OutFile to fd 1 ...
	I0503 15:08:38.097875    8656 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0503 15:08:38.097882    8656 out.go:304] Setting ErrFile to fd 2...
	I0503 15:08:38.097886    8656 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0503 15:08:38.098137    8656 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18793-7269/.minikube/bin
	I0503 15:08:38.098390    8656 out.go:298] Setting JSON to false
	I0503 15:08:38.098411    8656 mustload.go:65] Loading cluster: ha-688000
	I0503 15:08:38.098522    8656 notify.go:220] Checking for updates...
	I0503 15:08:38.098983    8656 config.go:182] Loaded profile config "ha-688000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0503 15:08:38.099005    8656 status.go:255] checking status of ha-688000 ...
	I0503 15:08:38.099271    8656 status.go:330] ha-688000 host status = "Stopped" (err=<nil>)
	I0503 15:08:38.099277    8656 status.go:343] host is not running, skipping remaining checks
	I0503 15:08:38.099280    8656 status.go:257] ha-688000 status: &{Name:ha-688000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-688000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-688000 status -v=7 --alsologtostderr: exit status 7 (76.627834ms)

-- stdout --
	ha-688000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0503 15:08:41.927744    8658 out.go:291] Setting OutFile to fd 1 ...
	I0503 15:08:41.927931    8658 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0503 15:08:41.927935    8658 out.go:304] Setting ErrFile to fd 2...
	I0503 15:08:41.927937    8658 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0503 15:08:41.928090    8658 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18793-7269/.minikube/bin
	I0503 15:08:41.928226    8658 out.go:298] Setting JSON to false
	I0503 15:08:41.928239    8658 mustload.go:65] Loading cluster: ha-688000
	I0503 15:08:41.928275    8658 notify.go:220] Checking for updates...
	I0503 15:08:41.928476    8658 config.go:182] Loaded profile config "ha-688000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0503 15:08:41.928482    8658 status.go:255] checking status of ha-688000 ...
	I0503 15:08:41.928711    8658 status.go:330] ha-688000 host status = "Stopped" (err=<nil>)
	I0503 15:08:41.928715    8658 status.go:343] host is not running, skipping remaining checks
	I0503 15:08:41.928718    8658 status.go:257] ha-688000 status: &{Name:ha-688000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-688000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-688000 status -v=7 --alsologtostderr: exit status 7 (75.794833ms)

-- stdout --
	ha-688000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0503 15:08:45.188201    8660 out.go:291] Setting OutFile to fd 1 ...
	I0503 15:08:45.188385    8660 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0503 15:08:45.188389    8660 out.go:304] Setting ErrFile to fd 2...
	I0503 15:08:45.188392    8660 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0503 15:08:45.188545    8660 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18793-7269/.minikube/bin
	I0503 15:08:45.188697    8660 out.go:298] Setting JSON to false
	I0503 15:08:45.188711    8660 mustload.go:65] Loading cluster: ha-688000
	I0503 15:08:45.188747    8660 notify.go:220] Checking for updates...
	I0503 15:08:45.188972    8660 config.go:182] Loaded profile config "ha-688000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0503 15:08:45.188979    8660 status.go:255] checking status of ha-688000 ...
	I0503 15:08:45.189254    8660 status.go:330] ha-688000 host status = "Stopped" (err=<nil>)
	I0503 15:08:45.189259    8660 status.go:343] host is not running, skipping remaining checks
	I0503 15:08:45.189262    8660 status.go:257] ha-688000 status: &{Name:ha-688000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-688000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-688000 status -v=7 --alsologtostderr: exit status 7 (75.87625ms)

-- stdout --
	ha-688000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0503 15:08:49.206179    8662 out.go:291] Setting OutFile to fd 1 ...
	I0503 15:08:49.206335    8662 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0503 15:08:49.206339    8662 out.go:304] Setting ErrFile to fd 2...
	I0503 15:08:49.206343    8662 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0503 15:08:49.206495    8662 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18793-7269/.minikube/bin
	I0503 15:08:49.206641    8662 out.go:298] Setting JSON to false
	I0503 15:08:49.206654    8662 mustload.go:65] Loading cluster: ha-688000
	I0503 15:08:49.206689    8662 notify.go:220] Checking for updates...
	I0503 15:08:49.206911    8662 config.go:182] Loaded profile config "ha-688000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0503 15:08:49.206917    8662 status.go:255] checking status of ha-688000 ...
	I0503 15:08:49.207158    8662 status.go:330] ha-688000 host status = "Stopped" (err=<nil>)
	I0503 15:08:49.207162    8662 status.go:343] host is not running, skipping remaining checks
	I0503 15:08:49.207165    8662 status.go:257] ha-688000 status: &{Name:ha-688000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-688000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-688000 status -v=7 --alsologtostderr: exit status 7 (75.612416ms)

-- stdout --
	ha-688000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0503 15:08:56.847690    8664 out.go:291] Setting OutFile to fd 1 ...
	I0503 15:08:56.847876    8664 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0503 15:08:56.847880    8664 out.go:304] Setting ErrFile to fd 2...
	I0503 15:08:56.847883    8664 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0503 15:08:56.848051    8664 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18793-7269/.minikube/bin
	I0503 15:08:56.848197    8664 out.go:298] Setting JSON to false
	I0503 15:08:56.848210    8664 mustload.go:65] Loading cluster: ha-688000
	I0503 15:08:56.848241    8664 notify.go:220] Checking for updates...
	I0503 15:08:56.848440    8664 config.go:182] Loaded profile config "ha-688000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0503 15:08:56.848446    8664 status.go:255] checking status of ha-688000 ...
	I0503 15:08:56.848722    8664 status.go:330] ha-688000 host status = "Stopped" (err=<nil>)
	I0503 15:08:56.848726    8664 status.go:343] host is not running, skipping remaining checks
	I0503 15:08:56.848729    8664 status.go:257] ha-688000 status: &{Name:ha-688000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-688000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-688000 status -v=7 --alsologtostderr: exit status 7 (78.129083ms)

-- stdout --
	ha-688000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0503 15:09:16.652478    8672 out.go:291] Setting OutFile to fd 1 ...
	I0503 15:09:16.652646    8672 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0503 15:09:16.652650    8672 out.go:304] Setting ErrFile to fd 2...
	I0503 15:09:16.652654    8672 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0503 15:09:16.652815    8672 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18793-7269/.minikube/bin
	I0503 15:09:16.652986    8672 out.go:298] Setting JSON to false
	I0503 15:09:16.653000    8672 mustload.go:65] Loading cluster: ha-688000
	I0503 15:09:16.653035    8672 notify.go:220] Checking for updates...
	I0503 15:09:16.653271    8672 config.go:182] Loaded profile config "ha-688000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0503 15:09:16.653278    8672 status.go:255] checking status of ha-688000 ...
	I0503 15:09:16.653533    8672 status.go:330] ha-688000 host status = "Stopped" (err=<nil>)
	I0503 15:09:16.653537    8672 status.go:343] host is not running, skipping remaining checks
	I0503 15:09:16.653540    8672 status.go:257] ha-688000 status: &{Name:ha-688000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:432: failed to run minikube status. args "out/minikube-darwin-arm64 -p ha-688000 status -v=7 --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-688000 -n ha-688000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-688000 -n ha-688000: exit status 7 (34.55275ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-688000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/RestartSecondaryNode (44.78s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.11s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:304: expected profile "ha-688000" in json of 'profile list' to include 4 nodes but have 1 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-688000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-688000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.30.0\",\"ClusterName\":\"ha-688000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.0\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
ha_test.go:307: expected profile "ha-688000" in json of 'profile list' to have "HAppy" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-688000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-688000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.30.0\",\"ClusterName\":\"ha-688000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.0\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-688000 -n ha-688000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-688000 -n ha-688000: exit status 7 (32.486167ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-688000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.11s)
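Note: the node-count half of this failure (ha_test.go:304) reads the "Nodes" array under "Config" in the JSON above. The same jq approach as before, again assuming jq is available:

    out/minikube-darwin-arm64 profile list --output json \
      | jq '.valid[] | select(.Name == "ha-688000") | .Config.Nodes | length'
    # 1 on this run; the test expects 4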

TestMultiControlPlane/serial/RestartClusterKeepsNodes (8.31s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-darwin-arm64 node list -p ha-688000 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-darwin-arm64 stop -p ha-688000 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Done: out/minikube-darwin-arm64 stop -p ha-688000 -v=7 --alsologtostderr: (2.94546225s)
ha_test.go:467: (dbg) Run:  out/minikube-darwin-arm64 start -p ha-688000 --wait=true -v=7 --alsologtostderr
ha_test.go:467: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p ha-688000 --wait=true -v=7 --alsologtostderr: exit status 80 (5.218395334s)

-- stdout --
	* [ha-688000] minikube v1.33.0 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18793
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18793-7269/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18793-7269/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "ha-688000" primary control-plane node in "ha-688000" cluster
	* Restarting existing qemu2 VM for "ha-688000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "ha-688000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0503 15:09:19.837035    8702 out.go:291] Setting OutFile to fd 1 ...
	I0503 15:09:19.837211    8702 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0503 15:09:19.837215    8702 out.go:304] Setting ErrFile to fd 2...
	I0503 15:09:19.837217    8702 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0503 15:09:19.837358    8702 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18793-7269/.minikube/bin
	I0503 15:09:19.838578    8702 out.go:298] Setting JSON to false
	I0503 15:09:19.857994    8702 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4130,"bootTime":1714770029,"procs":494,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0503 15:09:19.858060    8702 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0503 15:09:19.862614    8702 out.go:177] * [ha-688000] minikube v1.33.0 on Darwin 14.4.1 (arm64)
	I0503 15:09:19.871600    8702 out.go:177]   - MINIKUBE_LOCATION=18793
	I0503 15:09:19.875523    8702 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18793-7269/kubeconfig
	I0503 15:09:19.871695    8702 notify.go:220] Checking for updates...
	I0503 15:09:19.881531    8702 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0503 15:09:19.884496    8702 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0503 15:09:19.887484    8702 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18793-7269/.minikube
	I0503 15:09:19.890544    8702 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0503 15:09:19.892110    8702 config.go:182] Loaded profile config "ha-688000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0503 15:09:19.892160    8702 driver.go:392] Setting default libvirt URI to qemu:///system
	I0503 15:09:19.896521    8702 out.go:177] * Using the qemu2 driver based on existing profile
	I0503 15:09:19.903355    8702 start.go:297] selected driver: qemu2
	I0503 15:09:19.903362    8702 start.go:901] validating driver "qemu2" against &{Name:ha-688000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:ha-688000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0503 15:09:19.903408    8702 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0503 15:09:19.905576    8702 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0503 15:09:19.905611    8702 cni.go:84] Creating CNI manager for ""
	I0503 15:09:19.905616    8702 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0503 15:09:19.905657    8702 start.go:340] cluster config:
	{Name:ha-688000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:ha-688000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0503 15:09:19.909875    8702 iso.go:125] acquiring lock: {Name:mk0ad6b95f1bb51c6112b561369aaae11c537b54 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0503 15:09:19.916514    8702 out.go:177] * Starting "ha-688000" primary control-plane node in "ha-688000" cluster
	I0503 15:09:19.920481    8702 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0503 15:09:19.920493    8702 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18793-7269/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-arm64.tar.lz4
	I0503 15:09:19.920502    8702 cache.go:56] Caching tarball of preloaded images
	I0503 15:09:19.920551    8702 preload.go:173] Found /Users/jenkins/minikube-integration/18793-7269/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0503 15:09:19.920557    8702 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
	I0503 15:09:19.920606    8702 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18793-7269/.minikube/profiles/ha-688000/config.json ...
	I0503 15:09:19.921023    8702 start.go:360] acquireMachinesLock for ha-688000: {Name:mk75a6a65f97ac1ce21c567594b284ec6b0a9ff6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0503 15:09:19.921057    8702 start.go:364] duration metric: took 27.375µs to acquireMachinesLock for "ha-688000"
	I0503 15:09:19.921070    8702 start.go:96] Skipping create...Using existing machine configuration
	I0503 15:09:19.921075    8702 fix.go:54] fixHost starting: 
	I0503 15:09:19.921187    8702 fix.go:112] recreateIfNeeded on ha-688000: state=Stopped err=<nil>
	W0503 15:09:19.921196    8702 fix.go:138] unexpected machine state, will restart: <nil>
	I0503 15:09:19.929479    8702 out.go:177] * Restarting existing qemu2 VM for "ha-688000" ...
	I0503 15:09:19.933531    8702 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/ha-688000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18793-7269/.minikube/machines/ha-688000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/ha-688000/qemu.pid -device virtio-net-pci,netdev=net0,mac=da:57:92:9a:a7:39 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/ha-688000/disk.qcow2
	I0503 15:09:19.935569    8702 main.go:141] libmachine: STDOUT: 
	I0503 15:09:19.935588    8702 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0503 15:09:19.935617    8702 fix.go:56] duration metric: took 14.540792ms for fixHost
	I0503 15:09:19.935621    8702 start.go:83] releasing machines lock for "ha-688000", held for 14.5565ms
	W0503 15:09:19.935628    8702 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0503 15:09:19.935667    8702 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0503 15:09:19.935671    8702 start.go:728] Will try again in 5 seconds ...
	I0503 15:09:24.937715    8702 start.go:360] acquireMachinesLock for ha-688000: {Name:mk75a6a65f97ac1ce21c567594b284ec6b0a9ff6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0503 15:09:24.938189    8702 start.go:364] duration metric: took 368.333µs to acquireMachinesLock for "ha-688000"
	I0503 15:09:24.938314    8702 start.go:96] Skipping create...Using existing machine configuration
	I0503 15:09:24.938338    8702 fix.go:54] fixHost starting: 
	I0503 15:09:24.939035    8702 fix.go:112] recreateIfNeeded on ha-688000: state=Stopped err=<nil>
	W0503 15:09:24.939063    8702 fix.go:138] unexpected machine state, will restart: <nil>
	I0503 15:09:24.943400    8702 out.go:177] * Restarting existing qemu2 VM for "ha-688000" ...
	I0503 15:09:24.947539    8702 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/ha-688000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18793-7269/.minikube/machines/ha-688000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/ha-688000/qemu.pid -device virtio-net-pci,netdev=net0,mac=da:57:92:9a:a7:39 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/ha-688000/disk.qcow2
	I0503 15:09:24.956480    8702 main.go:141] libmachine: STDOUT: 
	I0503 15:09:24.956542    8702 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0503 15:09:24.956612    8702 fix.go:56] duration metric: took 18.277208ms for fixHost
	I0503 15:09:24.956624    8702 start.go:83] releasing machines lock for "ha-688000", held for 18.401333ms
	W0503 15:09:24.956771    8702 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p ha-688000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p ha-688000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0503 15:09:24.964367    8702 out.go:177] 
	W0503 15:09:24.968441    8702 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0503 15:09:24.968466    8702 out.go:239] * 
	* 
	W0503 15:09:24.970914    8702 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0503 15:09:24.977429    8702 out.go:177] 

** /stderr **
ha_test.go:469: failed to run minikube start. args "out/minikube-darwin-arm64 node list -p ha-688000 -v=7 --alsologtostderr" : exit status 80
ha_test.go:472: (dbg) Run:  out/minikube-darwin-arm64 node list -p ha-688000
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-688000 -n ha-688000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-688000 -n ha-688000: exit status 7 (35.118917ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-688000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/RestartClusterKeepsNodes (8.31s)
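
Every qemu2 failure in this run reduces to the same root cause: nothing is listening on /var/run/socket_vmnet, so the driver's socket_vmnet_client handshake is refused before the VM can boot. As a minimal sketch (not part of the test suite; it assumes shell access to the affected Jenkins host), the driver's connectivity precondition can be reproduced by dialling the unix socket directly:

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		// Dial the socket_vmnet control socket that the qemu2 driver hands to
		// /opt/socket_vmnet/bin/socket_vmnet_client. If no daemon is listening,
		// this fails with "connection refused", matching the errors above.
		conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
		if err != nil {
			fmt.Println("socket_vmnet unreachable:", err)
			return
		}
		defer conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}

If this probe fails the same way, the `minikube delete -p ha-688000` advice printed above is unlikely to help on its own: the refused socket belongs to a host-level socket_vmnet daemon, not to the minikube profile.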

                                                
                                    
TestMultiControlPlane/serial/DeleteSecondaryNode (0.11s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-darwin-arm64 -p ha-688000 node delete m03 -v=7 --alsologtostderr
ha_test.go:487: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-688000 node delete m03 -v=7 --alsologtostderr: exit status 83 (41.177583ms)

                                                
                                                
-- stdout --
	* The control-plane node ha-688000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p ha-688000"

                                                
                                                
-- /stdout --
** stderr ** 
	I0503 15:09:25.132032    8714 out.go:291] Setting OutFile to fd 1 ...
	I0503 15:09:25.132458    8714 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0503 15:09:25.132462    8714 out.go:304] Setting ErrFile to fd 2...
	I0503 15:09:25.132465    8714 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0503 15:09:25.132631    8714 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18793-7269/.minikube/bin
	I0503 15:09:25.132829    8714 mustload.go:65] Loading cluster: ha-688000
	I0503 15:09:25.133028    8714 config.go:182] Loaded profile config "ha-688000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0503 15:09:25.135883    8714 out.go:177] * The control-plane node ha-688000 host is not running: state=Stopped
	I0503 15:09:25.138963    8714 out.go:177]   To start a cluster, run: "minikube start -p ha-688000"

                                                
                                                
** /stderr **
ha_test.go:489: node delete returned an error. args "out/minikube-darwin-arm64 -p ha-688000 node delete m03 -v=7 --alsologtostderr": exit status 83
ha_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 -p ha-688000 status -v=7 --alsologtostderr
ha_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-688000 status -v=7 --alsologtostderr: exit status 7 (31.922625ms)

                                                
                                                
-- stdout --
	ha-688000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0503 15:09:25.173111    8716 out.go:291] Setting OutFile to fd 1 ...
	I0503 15:09:25.173253    8716 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0503 15:09:25.173257    8716 out.go:304] Setting ErrFile to fd 2...
	I0503 15:09:25.173259    8716 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0503 15:09:25.173386    8716 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18793-7269/.minikube/bin
	I0503 15:09:25.173504    8716 out.go:298] Setting JSON to false
	I0503 15:09:25.173515    8716 mustload.go:65] Loading cluster: ha-688000
	I0503 15:09:25.173578    8716 notify.go:220] Checking for updates...
	I0503 15:09:25.173703    8716 config.go:182] Loaded profile config "ha-688000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0503 15:09:25.173709    8716 status.go:255] checking status of ha-688000 ...
	I0503 15:09:25.173911    8716 status.go:330] ha-688000 host status = "Stopped" (err=<nil>)
	I0503 15:09:25.173915    8716 status.go:343] host is not running, skipping remaining checks
	I0503 15:09:25.173917    8716 status.go:257] ha-688000 status: &{Name:ha-688000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:495: failed to run minikube status. args "out/minikube-darwin-arm64 -p ha-688000 status -v=7 --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-688000 -n ha-688000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-688000 -n ha-688000: exit status 7 (32.542875ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-688000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DeleteSecondaryNode (0.11s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.11s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:413: expected profile "ha-688000" in json of 'profile list' to have "Degraded" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-688000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-688000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\"
:1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.30.0\",\"ClusterName\":\"ha-688000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.0
\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSo
ck\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-688000 -n ha-688000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-688000 -n ha-688000: exit status 7 (32.304208ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-688000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.11s)
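
For reference, the assertion failing here parses `minikube profile list --output json` and compares the profile's Status field against "Degraded". A trimmed-down decoder (field names taken from the JSON captured above; the real structs in ha_test.go are richer) shows the shape of that check:

	package main

	import (
		"encoding/json"
		"fmt"
	)

	// profileList mirrors just the fields the assertion reads from the
	// JSON logged in the failure above.
	type profileList struct {
		Valid []struct {
			Name   string `json:"Name"`
			Status string `json:"Status"`
		} `json:"valid"`
	}

	func main() {
		raw := []byte(`{"invalid":[],"valid":[{"Name":"ha-688000","Status":"Stopped"}]}`)
		var pl profileList
		if err := json.Unmarshal(raw, &pl); err != nil {
			panic(err)
		}
		for _, p := range pl.Valid {
			// Prints "ha-688000: Stopped" - hence the failed "Degraded" assertion.
			fmt.Printf("%s: %s\n", p.Name, p.Status)
		}
	}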

                                                
                                    
TestMultiControlPlane/serial/StopCluster (2.13s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-darwin-arm64 -p ha-688000 stop -v=7 --alsologtostderr
ha_test.go:531: (dbg) Done: out/minikube-darwin-arm64 -p ha-688000 stop -v=7 --alsologtostderr: (2.026868792s)
ha_test.go:537: (dbg) Run:  out/minikube-darwin-arm64 -p ha-688000 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-688000 status -v=7 --alsologtostderr: exit status 7 (70.049667ms)

                                                
                                                
-- stdout --
	ha-688000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0503 15:09:27.409635    8738 out.go:291] Setting OutFile to fd 1 ...
	I0503 15:09:27.409817    8738 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0503 15:09:27.409821    8738 out.go:304] Setting ErrFile to fd 2...
	I0503 15:09:27.409824    8738 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0503 15:09:27.409982    8738 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18793-7269/.minikube/bin
	I0503 15:09:27.410124    8738 out.go:298] Setting JSON to false
	I0503 15:09:27.410137    8738 mustload.go:65] Loading cluster: ha-688000
	I0503 15:09:27.410175    8738 notify.go:220] Checking for updates...
	I0503 15:09:27.410395    8738 config.go:182] Loaded profile config "ha-688000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0503 15:09:27.410404    8738 status.go:255] checking status of ha-688000 ...
	I0503 15:09:27.410647    8738 status.go:330] ha-688000 host status = "Stopped" (err=<nil>)
	I0503 15:09:27.410651    8738 status.go:343] host is not running, skipping remaining checks
	I0503 15:09:27.410654    8738 status.go:257] ha-688000 status: &{Name:ha-688000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:543: status says not two control-plane nodes are present: args "out/minikube-darwin-arm64 -p ha-688000 status -v=7 --alsologtostderr": ha-688000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
ha_test.go:549: status says not three kubelets are stopped: args "out/minikube-darwin-arm64 -p ha-688000 status -v=7 --alsologtostderr": ha-688000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
ha_test.go:552: status says not two apiservers are stopped: args "out/minikube-darwin-arm64 -p ha-688000 status -v=7 --alsologtostderr": ha-688000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-688000 -n ha-688000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-688000 -n ha-688000: exit status 7 (34.007833ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-688000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/StopCluster (2.13s)

                                                
                                    
TestMultiControlPlane/serial/RestartCluster (5.25s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-darwin-arm64 start -p ha-688000 --wait=true -v=7 --alsologtostderr --driver=qemu2 
ha_test.go:560: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p ha-688000 --wait=true -v=7 --alsologtostderr --driver=qemu2 : exit status 80 (5.18121775s)

                                                
                                                
-- stdout --
	* [ha-688000] minikube v1.33.0 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18793
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18793-7269/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18793-7269/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "ha-688000" primary control-plane node in "ha-688000" cluster
	* Restarting existing qemu2 VM for "ha-688000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "ha-688000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0503 15:09:27.475837    8742 out.go:291] Setting OutFile to fd 1 ...
	I0503 15:09:27.475961    8742 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0503 15:09:27.475964    8742 out.go:304] Setting ErrFile to fd 2...
	I0503 15:09:27.475967    8742 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0503 15:09:27.476084    8742 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18793-7269/.minikube/bin
	I0503 15:09:27.477078    8742 out.go:298] Setting JSON to false
	I0503 15:09:27.493254    8742 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4138,"bootTime":1714770029,"procs":493,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0503 15:09:27.493312    8742 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0503 15:09:27.498658    8742 out.go:177] * [ha-688000] minikube v1.33.0 on Darwin 14.4.1 (arm64)
	I0503 15:09:27.504242    8742 out.go:177]   - MINIKUBE_LOCATION=18793
	I0503 15:09:27.508659    8742 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18793-7269/kubeconfig
	I0503 15:09:27.504280    8742 notify.go:220] Checking for updates...
	I0503 15:09:27.511677    8742 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0503 15:09:27.513064    8742 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0503 15:09:27.515636    8742 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18793-7269/.minikube
	I0503 15:09:27.518643    8742 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0503 15:09:27.521927    8742 config.go:182] Loaded profile config "ha-688000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0503 15:09:27.522176    8742 driver.go:392] Setting default libvirt URI to qemu:///system
	I0503 15:09:27.526577    8742 out.go:177] * Using the qemu2 driver based on existing profile
	I0503 15:09:27.533626    8742 start.go:297] selected driver: qemu2
	I0503 15:09:27.533632    8742 start.go:901] validating driver "qemu2" against &{Name:ha-688000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesV
ersion:v1.30.0 ClusterName:ha-688000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Mou
ntType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0503 15:09:27.533681    8742 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0503 15:09:27.535901    8742 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0503 15:09:27.535934    8742 cni.go:84] Creating CNI manager for ""
	I0503 15:09:27.535939    8742 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0503 15:09:27.535980    8742 start.go:340] cluster config:
	{Name:ha-688000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:ha-688000 Namespace:default APIServerHAVIP: APIServe
rName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetri
cs:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0503 15:09:27.540059    8742 iso.go:125] acquiring lock: {Name:mk0ad6b95f1bb51c6112b561369aaae11c537b54 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0503 15:09:27.547637    8742 out.go:177] * Starting "ha-688000" primary control-plane node in "ha-688000" cluster
	I0503 15:09:27.551592    8742 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0503 15:09:27.551605    8742 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18793-7269/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-arm64.tar.lz4
	I0503 15:09:27.551619    8742 cache.go:56] Caching tarball of preloaded images
	I0503 15:09:27.551663    8742 preload.go:173] Found /Users/jenkins/minikube-integration/18793-7269/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0503 15:09:27.551668    8742 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
	I0503 15:09:27.551714    8742 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18793-7269/.minikube/profiles/ha-688000/config.json ...
	I0503 15:09:27.552115    8742 start.go:360] acquireMachinesLock for ha-688000: {Name:mk75a6a65f97ac1ce21c567594b284ec6b0a9ff6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0503 15:09:27.552142    8742 start.go:364] duration metric: took 21µs to acquireMachinesLock for "ha-688000"
	I0503 15:09:27.552151    8742 start.go:96] Skipping create...Using existing machine configuration
	I0503 15:09:27.552157    8742 fix.go:54] fixHost starting: 
	I0503 15:09:27.552270    8742 fix.go:112] recreateIfNeeded on ha-688000: state=Stopped err=<nil>
	W0503 15:09:27.552279    8742 fix.go:138] unexpected machine state, will restart: <nil>
	I0503 15:09:27.556671    8742 out.go:177] * Restarting existing qemu2 VM for "ha-688000" ...
	I0503 15:09:27.564635    8742 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/ha-688000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18793-7269/.minikube/machines/ha-688000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/ha-688000/qemu.pid -device virtio-net-pci,netdev=net0,mac=da:57:92:9a:a7:39 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/ha-688000/disk.qcow2
	I0503 15:09:27.566649    8742 main.go:141] libmachine: STDOUT: 
	I0503 15:09:27.566669    8742 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0503 15:09:27.566695    8742 fix.go:56] duration metric: took 14.538834ms for fixHost
	I0503 15:09:27.566697    8742 start.go:83] releasing machines lock for "ha-688000", held for 14.551542ms
	W0503 15:09:27.566705    8742 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0503 15:09:27.566735    8742 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0503 15:09:27.566739    8742 start.go:728] Will try again in 5 seconds ...
	I0503 15:09:32.568791    8742 start.go:360] acquireMachinesLock for ha-688000: {Name:mk75a6a65f97ac1ce21c567594b284ec6b0a9ff6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0503 15:09:32.569291    8742 start.go:364] duration metric: took 396.25µs to acquireMachinesLock for "ha-688000"
	I0503 15:09:32.569416    8742 start.go:96] Skipping create...Using existing machine configuration
	I0503 15:09:32.569437    8742 fix.go:54] fixHost starting: 
	I0503 15:09:32.570101    8742 fix.go:112] recreateIfNeeded on ha-688000: state=Stopped err=<nil>
	W0503 15:09:32.570132    8742 fix.go:138] unexpected machine state, will restart: <nil>
	I0503 15:09:32.578480    8742 out.go:177] * Restarting existing qemu2 VM for "ha-688000" ...
	I0503 15:09:32.582648    8742 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/ha-688000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18793-7269/.minikube/machines/ha-688000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/ha-688000/qemu.pid -device virtio-net-pci,netdev=net0,mac=da:57:92:9a:a7:39 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/ha-688000/disk.qcow2
	I0503 15:09:32.591699    8742 main.go:141] libmachine: STDOUT: 
	I0503 15:09:32.591774    8742 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0503 15:09:32.591847    8742 fix.go:56] duration metric: took 22.411041ms for fixHost
	I0503 15:09:32.591863    8742 start.go:83] releasing machines lock for "ha-688000", held for 22.5475ms
	W0503 15:09:32.592012    8742 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p ha-688000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p ha-688000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0503 15:09:32.598537    8742 out.go:177] 
	W0503 15:09:32.602493    8742 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0503 15:09:32.602519    8742 out.go:239] * 
	* 
	W0503 15:09:32.605472    8742 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0503 15:09:32.613433    8742 out.go:177] 

                                                
                                                
** /stderr **
ha_test.go:562: failed to start cluster. args "out/minikube-darwin-arm64 start -p ha-688000 --wait=true -v=7 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-688000 -n ha-688000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-688000 -n ha-688000: exit status 7 (70.761583ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-688000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/RestartCluster (5.25s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.11s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:413: expected profile "ha-688000" in json of 'profile list' to have "Degraded" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-688000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-688000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\"
:1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.30.0\",\"ClusterName\":\"ha-688000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.0
\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSo
ck\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-688000 -n ha-688000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-688000 -n ha-688000: exit status 7 (32.702458ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-688000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.11s)

                                                
                                    
TestMultiControlPlane/serial/AddSecondaryNode (0.08s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-darwin-arm64 node add -p ha-688000 --control-plane -v=7 --alsologtostderr
ha_test.go:605: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p ha-688000 --control-plane -v=7 --alsologtostderr: exit status 83 (43.443375ms)

                                                
                                                
-- stdout --
	* The control-plane node ha-688000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p ha-688000"

                                                
                                                
-- /stdout --
** stderr ** 
	I0503 15:09:32.839899    8761 out.go:291] Setting OutFile to fd 1 ...
	I0503 15:09:32.840054    8761 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0503 15:09:32.840057    8761 out.go:304] Setting ErrFile to fd 2...
	I0503 15:09:32.840060    8761 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0503 15:09:32.840190    8761 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18793-7269/.minikube/bin
	I0503 15:09:32.840429    8761 mustload.go:65] Loading cluster: ha-688000
	I0503 15:09:32.840633    8761 config.go:182] Loaded profile config "ha-688000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0503 15:09:32.843923    8761 out.go:177] * The control-plane node ha-688000 host is not running: state=Stopped
	I0503 15:09:32.847901    8761 out.go:177]   To start a cluster, run: "minikube start -p ha-688000"

                                                
                                                
** /stderr **
ha_test.go:607: failed to add control-plane node to current ha (multi-control plane) cluster. args "out/minikube-darwin-arm64 node add -p ha-688000 --control-plane -v=7 --alsologtostderr" : exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-688000 -n ha-688000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-688000 -n ha-688000: exit status 7 (34.618542ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-688000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/AddSecondaryNode (0.08s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.11s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:304: expected profile "ha-688000" in json of 'profile list' to include 4 nodes but have 1 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-688000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-688000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPo
rt\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.30.0\",\"ClusterName\":\"ha-688000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.0\",\"ContainerRu
ntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHA
gentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
ha_test.go:307: expected profile "ha-688000" in json of 'profile list' to have "HAppy" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-688000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-688000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,
\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.30.0\",\"ClusterName\":\"ha-688000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.0\",
\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\
":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-688000 -n ha-688000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-688000 -n ha-688000: exit status 7 (31.391459ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-688000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.11s)

                                                
                                    
TestImageBuild/serial/Setup (9.92s)

                                                
                                                
=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-darwin-arm64 start -p image-955000 --driver=qemu2 
image_test.go:69: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p image-955000 --driver=qemu2 : exit status 80 (9.845272208s)

                                                
                                                
-- stdout --
	* [image-955000] minikube v1.33.0 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18793
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18793-7269/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18793-7269/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "image-955000" primary control-plane node in "image-955000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "image-955000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p image-955000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
image_test.go:70: failed to start minikube with args: "out/minikube-darwin-arm64 start -p image-955000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p image-955000 -n image-955000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p image-955000 -n image-955000: exit status 7 (69.914666ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "image-955000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestImageBuild/serial/Setup (9.92s)

                                                
                                    
TestJSONOutput/start/Command (9.87s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 start -p json-output-666000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p json-output-666000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 : exit status 80 (9.872595542s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"c6c61747-3cc3-486d-a089-48118af25d6b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-666000] minikube v1.33.0 on Darwin 14.4.1 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"f63f1577-cb96-4eb1-8288-1351cb3bb08b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=18793"}}
	{"specversion":"1.0","id":"e71bdc28-92fd-49a9-8762-2dff3a5ec73e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/18793-7269/kubeconfig"}}
	{"specversion":"1.0","id":"d680396e-b936-4721-a903-62cbc1cb1e7b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"823a2bfe-ee28-4291-8972-41e696953fd1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"38efb577-5305-40a1-94bb-c0eb9b25872c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/18793-7269/.minikube"}}
	{"specversion":"1.0","id":"f164abd5-8756-4d4d-b0f8-f6313b1f1794","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"f366d454-6bca-413d-bcee-3e74b7821431","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the qemu2 driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"65f0bacf-248d-496a-a78e-6176093b1866","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Automatically selected the socket_vmnet network"}}
	{"specversion":"1.0","id":"dd17305d-8444-4c07-986b-a474d97c08fe","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"json-output-666000\" primary control-plane node in \"json-output-666000\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"978593a6-0d4a-4b1a-9432-9d1342f544f5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"9","message":"Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...","name":"Creating VM","totalsteps":"19"}}
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	{"specversion":"1.0","id":"bbbf70c0-634c-43c8-b765-0b79defcc95e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"9","message":"Deleting \"json-output-666000\" in qemu2 ...","name":"Creating VM","totalsteps":"19"}}
	{"specversion":"1.0","id":"c6b5196a-67e6-4284-b6f5-20d136515dc0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"StartHost failed, but will try again: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"}}
	{"specversion":"1.0","id":"0f704035-0627-4903-84df-80e22ded23a4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"9","message":"Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...","name":"Creating VM","totalsteps":"19"}}
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	{"specversion":"1.0","id":"c0ddcd1c-c053-48b1-afbc-c3b730a6ec91","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"Failed to start qemu2 VM. Running \"minikube delete -p json-output-666000\" may fix it: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"}}
	{"specversion":"1.0","id":"810d9a78-6ee3-4a61-85da-07a5ce59a727","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1","name":"GUEST_PROVISION","url":""}}
	{"specversion":"1.0","id":"d3b54381-ad17-4724-a9fc-2db7cde7673e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│                                                                                           │\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-darwin-arm64 start -p json-output-666000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 ": exit status 80
json_output_test.go:213: unable to marshal output: OUTPUT: 
json_output_test.go:70: converting to cloud events: invalid character 'O' looking for beginning of value
--- FAIL: TestJSONOutput/start/Command (9.87s)
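Note: the decode failures at json_output_test.go:213 and json_output_test.go:70 stem from the raw "OUTPUT:" / "ERROR:" lines that socket_vmnet_client writes into the middle of the CloudEvents stream; the first byte of "OUTPUT:" is not valid JSON, so line-by-line decoding aborts. A minimal Go sketch of that failure mode, assuming each stdout line is expected to be a single JSON event (the abbreviated cloudEvent struct is illustrative, not minikube's actual type):

package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"strings"
)

// cloudEvent keeps only the fields visible in the log above; illustrative only.
type cloudEvent struct {
	SpecVersion string          `json:"specversion"`
	Type        string          `json:"type"`
	Data        json.RawMessage `json:"data"`
}

func main() {
	// A valid event followed by the stray "OUTPUT:" line seen in the log.
	stdout := `{"specversion":"1.0","type":"io.k8s.sigs.minikube.info","data":{"message":"ok"}}
OUTPUT: `

	sc := bufio.NewScanner(strings.NewReader(stdout))
	for sc.Scan() {
		var ev cloudEvent
		if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
			// Prints: invalid character 'O' looking for beginning of value
			fmt.Println("converting to cloud events:", err)
			return
		}
		fmt.Println("event:", ev.Type)
	}
}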

TestJSONOutput/pause/Command (0.08s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 pause -p json-output-666000 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p json-output-666000 --output=json --user=testUser: exit status 83 (82.581ms)

-- stdout --
	{"specversion":"1.0","id":"d652ff87-a6a7-4740-90b9-a43e196ae05b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"The control-plane node json-output-666000 host is not running: state=Stopped"}}
	{"specversion":"1.0","id":"e32fcc16-f62c-44be-80f4-5035f60b3fb4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"To start a cluster, run: \"minikube start -p json-output-666000\""}}

-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-darwin-arm64 pause -p json-output-666000 --output=json --user=testUser": exit status 83
--- FAIL: TestJSONOutput/pause/Command (0.08s)

TestJSONOutput/unpause/Command (0.05s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 unpause -p json-output-666000 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-darwin-arm64 unpause -p json-output-666000 --output=json --user=testUser: exit status 83 (45.255625ms)

-- stdout --
	* The control-plane node json-output-666000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p json-output-666000"

-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-darwin-arm64 unpause -p json-output-666000 --output=json --user=testUser": exit status 83
json_output_test.go:213: unable to marshal output: * The control-plane node json-output-666000 host is not running: state=Stopped
json_output_test.go:70: converting to cloud events: invalid character '*' looking for beginning of value
--- FAIL: TestJSONOutput/unpause/Command (0.05s)

TestMinikubeProfile (10.34s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-darwin-arm64 start -p first-012000 --driver=qemu2 
minikube_profile_test.go:44: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p first-012000 --driver=qemu2 : exit status 80 (9.896193875s)

-- stdout --
	* [first-012000] minikube v1.33.0 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18793
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18793-7269/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18793-7269/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "first-012000" primary control-plane node in "first-012000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "first-012000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p first-012000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
minikube_profile_test.go:46: test pre-condition failed. args "out/minikube-darwin-arm64 start -p first-012000 --driver=qemu2 ": exit status 80
panic.go:626: *** TestMinikubeProfile FAILED at 2024-05-03 15:10:06.982449 -0700 PDT m=+441.218328084
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p second-014000 -n second-014000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p second-014000 -n second-014000: exit status 85 (82.334709ms)

-- stdout --
	* Profile "second-014000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p second-014000"

-- /stdout --
helpers_test.go:239: status error: exit status 85 (may be ok)
helpers_test.go:241: "second-014000" host is not running, skipping log retrieval (state="* Profile \"second-014000\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p second-014000\"")
helpers_test.go:175: Cleaning up "second-014000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p second-014000
panic.go:626: *** TestMinikubeProfile FAILED at 2024-05-03 15:10:07.295548 -0700 PDT m=+441.531434751
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p first-012000 -n first-012000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p first-012000 -n first-012000: exit status 7 (32.1505ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "first-012000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "first-012000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p first-012000
--- FAIL: TestMinikubeProfile (10.34s)
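Note: every start failure in this report shares the same root cause — nothing is accepting connections on /var/run/socket_vmnet. The daemon can be checked independently of minikube by dialing the unix socket; the sketch below assumes only the socket path shown in the logs and is illustrative rather than part of the test suite:

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Dial the control socket the same way socket_vmnet_client would.
	conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
	if err != nil {
		// With no daemon listening this would typically print:
		//   dial unix /var/run/socket_vmnet: connect: connection refused
		// (or "no such file or directory" if the socket file is absent).
		fmt.Println("socket_vmnet unreachable:", err)
		return
	}
	conn.Close()
	fmt.Println("socket_vmnet is accepting connections")
}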

TestMountStart/serial/StartWithMountFirst (10.21s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-darwin-arm64 start -p mount-start-1-499000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 
mount_start_test.go:98: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p mount-start-1-499000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 : exit status 80 (10.134598083s)

-- stdout --
	* [mount-start-1-499000] minikube v1.33.0 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18793
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18793-7269/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18793-7269/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting minikube without Kubernetes in cluster mount-start-1-499000
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "mount-start-1-499000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p mount-start-1-499000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
mount_start_test.go:100: failed to start minikube with args: "out/minikube-darwin-arm64 start -p mount-start-1-499000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p mount-start-1-499000 -n mount-start-1-499000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p mount-start-1-499000 -n mount-start-1-499000: exit status 7 (70.153041ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "mount-start-1-499000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMountStart/serial/StartWithMountFirst (10.21s)

TestMultiNode/serial/FreshStart2Nodes (10.24s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-952000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 
multinode_test.go:96: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-952000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 : exit status 80 (10.170425459s)

-- stdout --
	* [multinode-952000] minikube v1.33.0 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18793
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18793-7269/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18793-7269/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "multinode-952000" primary control-plane node in "multinode-952000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-952000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0503 15:10:17.987294    8926 out.go:291] Setting OutFile to fd 1 ...
	I0503 15:10:17.987429    8926 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0503 15:10:17.987433    8926 out.go:304] Setting ErrFile to fd 2...
	I0503 15:10:17.987439    8926 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0503 15:10:17.987553    8926 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18793-7269/.minikube/bin
	I0503 15:10:17.988617    8926 out.go:298] Setting JSON to false
	I0503 15:10:18.004612    8926 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4188,"bootTime":1714770029,"procs":492,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0503 15:10:18.004686    8926 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0503 15:10:18.009964    8926 out.go:177] * [multinode-952000] minikube v1.33.0 on Darwin 14.4.1 (arm64)
	I0503 15:10:18.017875    8926 out.go:177]   - MINIKUBE_LOCATION=18793
	I0503 15:10:18.017916    8926 notify.go:220] Checking for updates...
	I0503 15:10:18.023319    8926 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18793-7269/kubeconfig
	I0503 15:10:18.026853    8926 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0503 15:10:18.029905    8926 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0503 15:10:18.032927    8926 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18793-7269/.minikube
	I0503 15:10:18.035915    8926 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0503 15:10:18.039069    8926 driver.go:392] Setting default libvirt URI to qemu:///system
	I0503 15:10:18.043868    8926 out.go:177] * Using the qemu2 driver based on user configuration
	I0503 15:10:18.050882    8926 start.go:297] selected driver: qemu2
	I0503 15:10:18.050892    8926 start.go:901] validating driver "qemu2" against <nil>
	I0503 15:10:18.050900    8926 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0503 15:10:18.053126    8926 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0503 15:10:18.055872    8926 out.go:177] * Automatically selected the socket_vmnet network
	I0503 15:10:18.058897    8926 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0503 15:10:18.058937    8926 cni.go:84] Creating CNI manager for ""
	I0503 15:10:18.058942    8926 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0503 15:10:18.058948    8926 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0503 15:10:18.058989    8926 start.go:340] cluster config:
	{Name:multinode-952000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:multinode-952000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0503 15:10:18.063544    8926 iso.go:125] acquiring lock: {Name:mk0ad6b95f1bb51c6112b561369aaae11c537b54 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0503 15:10:18.070743    8926 out.go:177] * Starting "multinode-952000" primary control-plane node in "multinode-952000" cluster
	I0503 15:10:18.074833    8926 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0503 15:10:18.074858    8926 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18793-7269/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-arm64.tar.lz4
	I0503 15:10:18.074868    8926 cache.go:56] Caching tarball of preloaded images
	I0503 15:10:18.074937    8926 preload.go:173] Found /Users/jenkins/minikube-integration/18793-7269/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0503 15:10:18.074943    8926 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
	I0503 15:10:18.075140    8926 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18793-7269/.minikube/profiles/multinode-952000/config.json ...
	I0503 15:10:18.075160    8926 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18793-7269/.minikube/profiles/multinode-952000/config.json: {Name:mkf8f59fd820ec9e445d20c1edd940eb129365ef Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0503 15:10:18.075403    8926 start.go:360] acquireMachinesLock for multinode-952000: {Name:mk75a6a65f97ac1ce21c567594b284ec6b0a9ff6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0503 15:10:18.075440    8926 start.go:364] duration metric: took 31.25µs to acquireMachinesLock for "multinode-952000"
	I0503 15:10:18.075461    8926 start.go:93] Provisioning new machine with config: &{Name:multinode-952000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:multinode-952000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0503 15:10:18.075486    8926 start.go:125] createHost starting for "" (driver="qemu2")
	I0503 15:10:18.083837    8926 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0503 15:10:18.101009    8926 start.go:159] libmachine.API.Create for "multinode-952000" (driver="qemu2")
	I0503 15:10:18.101043    8926 client.go:168] LocalClient.Create starting
	I0503 15:10:18.101109    8926 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18793-7269/.minikube/certs/ca.pem
	I0503 15:10:18.101139    8926 main.go:141] libmachine: Decoding PEM data...
	I0503 15:10:18.101150    8926 main.go:141] libmachine: Parsing certificate...
	I0503 15:10:18.101191    8926 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18793-7269/.minikube/certs/cert.pem
	I0503 15:10:18.101213    8926 main.go:141] libmachine: Decoding PEM data...
	I0503 15:10:18.101219    8926 main.go:141] libmachine: Parsing certificate...
	I0503 15:10:18.101588    8926 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18793-7269/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18793-7269/.minikube/cache/iso/arm64/minikube-v1.33.0-1714498396-18779-arm64.iso...
	I0503 15:10:18.245567    8926 main.go:141] libmachine: Creating SSH key...
	I0503 15:10:18.424824    8926 main.go:141] libmachine: Creating Disk image...
	I0503 15:10:18.424830    8926 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0503 15:10:18.425023    8926 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/multinode-952000/disk.qcow2.raw /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/multinode-952000/disk.qcow2
	I0503 15:10:18.438119    8926 main.go:141] libmachine: STDOUT: 
	I0503 15:10:18.438136    8926 main.go:141] libmachine: STDERR: 
	I0503 15:10:18.438192    8926 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/multinode-952000/disk.qcow2 +20000M
	I0503 15:10:18.449191    8926 main.go:141] libmachine: STDOUT: Image resized.
	
	I0503 15:10:18.449219    8926 main.go:141] libmachine: STDERR: 
	I0503 15:10:18.449240    8926 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/multinode-952000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/multinode-952000/disk.qcow2
	I0503 15:10:18.449246    8926 main.go:141] libmachine: Starting QEMU VM...
	I0503 15:10:18.449272    8926 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/multinode-952000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18793-7269/.minikube/machines/multinode-952000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/multinode-952000/qemu.pid -device virtio-net-pci,netdev=net0,mac=66:87:83:e4:2a:4a -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/multinode-952000/disk.qcow2
	I0503 15:10:18.451057    8926 main.go:141] libmachine: STDOUT: 
	I0503 15:10:18.451074    8926 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0503 15:10:18.451095    8926 client.go:171] duration metric: took 350.05575ms to LocalClient.Create
	I0503 15:10:20.453342    8926 start.go:128] duration metric: took 2.377804792s to createHost
	I0503 15:10:20.453422    8926 start.go:83] releasing machines lock for "multinode-952000", held for 2.378027459s
	W0503 15:10:20.453480    8926 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0503 15:10:20.468950    8926 out.go:177] * Deleting "multinode-952000" in qemu2 ...
	W0503 15:10:20.496103    8926 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0503 15:10:20.496137    8926 start.go:728] Will try again in 5 seconds ...
	I0503 15:10:25.498255    8926 start.go:360] acquireMachinesLock for multinode-952000: {Name:mk75a6a65f97ac1ce21c567594b284ec6b0a9ff6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0503 15:10:25.498688    8926 start.go:364] duration metric: took 329.542µs to acquireMachinesLock for "multinode-952000"
	I0503 15:10:25.498781    8926 start.go:93] Provisioning new machine with config: &{Name:multinode-952000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:multinode-952000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0503 15:10:25.499228    8926 start.go:125] createHost starting for "" (driver="qemu2")
	I0503 15:10:25.509717    8926 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0503 15:10:25.560220    8926 start.go:159] libmachine.API.Create for "multinode-952000" (driver="qemu2")
	I0503 15:10:25.560269    8926 client.go:168] LocalClient.Create starting
	I0503 15:10:25.560397    8926 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18793-7269/.minikube/certs/ca.pem
	I0503 15:10:25.560460    8926 main.go:141] libmachine: Decoding PEM data...
	I0503 15:10:25.560480    8926 main.go:141] libmachine: Parsing certificate...
	I0503 15:10:25.560546    8926 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18793-7269/.minikube/certs/cert.pem
	I0503 15:10:25.560588    8926 main.go:141] libmachine: Decoding PEM data...
	I0503 15:10:25.560599    8926 main.go:141] libmachine: Parsing certificate...
	I0503 15:10:25.561091    8926 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18793-7269/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18793-7269/.minikube/cache/iso/arm64/minikube-v1.33.0-1714498396-18779-arm64.iso...
	I0503 15:10:25.714238    8926 main.go:141] libmachine: Creating SSH key...
	I0503 15:10:26.054781    8926 main.go:141] libmachine: Creating Disk image...
	I0503 15:10:26.054791    8926 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0503 15:10:26.055051    8926 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/multinode-952000/disk.qcow2.raw /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/multinode-952000/disk.qcow2
	I0503 15:10:26.068571    8926 main.go:141] libmachine: STDOUT: 
	I0503 15:10:26.068589    8926 main.go:141] libmachine: STDERR: 
	I0503 15:10:26.068654    8926 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/multinode-952000/disk.qcow2 +20000M
	I0503 15:10:26.079767    8926 main.go:141] libmachine: STDOUT: Image resized.
	
	I0503 15:10:26.079785    8926 main.go:141] libmachine: STDERR: 
	I0503 15:10:26.079797    8926 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/multinode-952000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/multinode-952000/disk.qcow2
	I0503 15:10:26.079801    8926 main.go:141] libmachine: Starting QEMU VM...
	I0503 15:10:26.079835    8926 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/multinode-952000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18793-7269/.minikube/machines/multinode-952000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/multinode-952000/qemu.pid -device virtio-net-pci,netdev=net0,mac=da:10:c3:10:46:5d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/multinode-952000/disk.qcow2
	I0503 15:10:26.081535    8926 main.go:141] libmachine: STDOUT: 
	I0503 15:10:26.081550    8926 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0503 15:10:26.081562    8926 client.go:171] duration metric: took 521.300542ms to LocalClient.Create
	I0503 15:10:28.083024    8926 start.go:128] duration metric: took 2.583780167s to createHost
	I0503 15:10:28.083114    8926 start.go:83] releasing machines lock for "multinode-952000", held for 2.584459916s
	W0503 15:10:28.083564    8926 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p multinode-952000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-952000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0503 15:10:28.096066    8926 out.go:177] 
	W0503 15:10:28.101042    8926 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0503 15:10:28.101067    8926 out.go:239] * 
	* 
	W0503 15:10:28.104160    8926 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0503 15:10:28.111980    8926 out.go:177] 

** /stderr **
multinode_test.go:98: failed to start cluster. args "out/minikube-darwin-arm64 start -p multinode-952000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-952000 -n multinode-952000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-952000 -n multinode-952000: exit status 7 (68.85525ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-952000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/FreshStart2Nodes (10.24s)
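Note: the verbose log above also records the host-creation steps that succeed before networking fails: libmachine converts the raw boot image to qcow2, grows it by 20000 MB with qemu-img resize, and only then hands the qemu-system-aarch64 command line to socket_vmnet_client, which is where the run dies. The disk-image steps can be reproduced standalone; this sketch mirrors the two qemu-img invocations from the log, with placeholder paths (illustrative, not minikube source):

package main

import (
	"fmt"
	"os/exec"
)

// run shells out and echoes combined output, like the "executing:" lines above.
func run(name string, args ...string) error {
	out, err := exec.Command(name, args...).CombinedOutput()
	fmt.Printf("executing: %s %v\n%s", name, args, out)
	return err
}

func main() {
	// Placeholder paths; the log uses the profile's machines directory.
	raw, qcow := "disk.qcow2.raw", "disk.qcow2"
	if err := run("qemu-img", "convert", "-f", "raw", "-O", "qcow2", raw, qcow); err != nil {
		fmt.Println("convert failed:", err)
		return
	}
	// Matches "qemu-img resize ... +20000M" in the log.
	if err := run("qemu-img", "resize", qcow, "+20000M"); err != nil {
		fmt.Println("resize failed:", err)
	}
}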

TestMultiNode/serial/DeployApp2Nodes (95.35s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-952000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-952000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml: exit status 1 (60.169458ms)

** stderr ** 
	error: cluster "multinode-952000" does not exist

** /stderr **
multinode_test.go:495: failed to create busybox deployment to multinode cluster
multinode_test.go:498: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-952000 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-952000 -- rollout status deployment/busybox: exit status 1 (59.548958ms)

** stderr ** 
	error: no server found for cluster "multinode-952000"

** /stderr **
multinode_test.go:500: failed to deploy busybox to multinode cluster
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-952000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-952000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (58.588917ms)

** stderr ** 
	error: no server found for cluster "multinode-952000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-952000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-952000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (105.150667ms)

** stderr ** 
	error: no server found for cluster "multinode-952000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-952000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-952000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (105.785125ms)

** stderr ** 
	error: no server found for cluster "multinode-952000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-952000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-952000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (105.666042ms)

** stderr ** 
	error: no server found for cluster "multinode-952000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-952000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-952000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (107.511083ms)

** stderr ** 
	error: no server found for cluster "multinode-952000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-952000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-952000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (105.525125ms)

** stderr ** 
	error: no server found for cluster "multinode-952000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-952000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-952000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (105.762792ms)

** stderr ** 
	error: no server found for cluster "multinode-952000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-952000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-952000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (104.921916ms)

** stderr ** 
	error: no server found for cluster "multinode-952000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-952000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-952000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (105.390541ms)

** stderr ** 
	error: no server found for cluster "multinode-952000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-952000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-952000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (102.549791ms)

** stderr ** 
	error: no server found for cluster "multinode-952000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-952000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-952000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.096875ms)

** stderr ** 
	error: no server found for cluster "multinode-952000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:524: failed to resolve pod IPs: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:528: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-952000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:528: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-952000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (58.708375ms)

** stderr ** 
	error: no server found for cluster "multinode-952000"

** /stderr **
multinode_test.go:530: failed get Pod names
multinode_test.go:536: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-952000 -- exec  -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-952000 -- exec  -- nslookup kubernetes.io: exit status 1 (58.96225ms)

** stderr ** 
	error: no server found for cluster "multinode-952000"

** /stderr **
multinode_test.go:538: Pod  could not resolve 'kubernetes.io': exit status 1
multinode_test.go:546: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-952000 -- exec  -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-952000 -- exec  -- nslookup kubernetes.default: exit status 1 (58.62575ms)

** stderr ** 
	error: no server found for cluster "multinode-952000"

** /stderr **
multinode_test.go:548: Pod  could not resolve 'kubernetes.default': exit status 1
multinode_test.go:554: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-952000 -- exec  -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-952000 -- exec  -- nslookup kubernetes.default.svc.cluster.local: exit status 1 (58.726917ms)

** stderr ** 
	error: no server found for cluster "multinode-952000"

** /stderr **
multinode_test.go:556: Pod  could not resolve local service (kubernetes.default.svc.cluster.local): exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-952000 -n multinode-952000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-952000 -n multinode-952000: exit status 7 (32.375209ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-952000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/DeployApp2Nodes (95.35s)
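Note: DeployApp2Nodes burns 95 s against a cluster that never started because the test re-runs the jsonpath query until a deadline, and each attempt fails fast with "no server found". A rough sketch of that poll-until-deadline pattern follows; the profile name and binary path are taken from the log, while the 8-second interval and 90-second budget are assumptions, not the test's actual constants:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// podIPs shells out to minikube's kubectl passthrough, as the test does.
func podIPs(profile string) (string, error) {
	out, err := exec.Command("out/minikube-darwin-arm64", "kubectl", "-p", profile,
		"--", "get", "pods", "-o", "jsonpath={.items[*].status.podIP}").Output()
	return string(out), err
}

func main() {
	deadline := time.Now().Add(90 * time.Second) // assumed budget
	for time.Now().Before(deadline) {
		if ips, err := podIPs("multinode-952000"); err == nil && ips != "" {
			fmt.Println("pod IPs:", ips)
			return
		}
		// "failed to retrieve Pod IPs (may be temporary)" -> back off, retry.
		time.Sleep(8 * time.Second) // assumed interval
	}
	fmt.Println("failed to resolve pod IPs before deadline")
}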

TestMultiNode/serial/PingHostFrom2Pods (0.09s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-952000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:564: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-952000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (58.490083ms)

** stderr ** 
	error: no server found for cluster "multinode-952000"

** /stderr **
multinode_test.go:566: failed to get Pod names: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-952000 -n multinode-952000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-952000 -n multinode-952000: exit status 7 (32.530792ms)

-- stdout --
	Stopped

                                                
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-952000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/PingHostFrom2Pods (0.09s)

TestMultiNode/serial/AddNode (0.08s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-darwin-arm64 node add -p multinode-952000 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p multinode-952000 -v 3 --alsologtostderr: exit status 83 (44.25475ms)

-- stdout --
	* The control-plane node multinode-952000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p multinode-952000"

-- /stdout --
** stderr ** 
	I0503 15:12:03.672385    9042 out.go:291] Setting OutFile to fd 1 ...
	I0503 15:12:03.672538    9042 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0503 15:12:03.672541    9042 out.go:304] Setting ErrFile to fd 2...
	I0503 15:12:03.672543    9042 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0503 15:12:03.672659    9042 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18793-7269/.minikube/bin
	I0503 15:12:03.672898    9042 mustload.go:65] Loading cluster: multinode-952000
	I0503 15:12:03.673085    9042 config.go:182] Loaded profile config "multinode-952000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0503 15:12:03.676545    9042 out.go:177] * The control-plane node multinode-952000 host is not running: state=Stopped
	I0503 15:12:03.680500    9042 out.go:177]   To start a cluster, run: "minikube start -p multinode-952000"

** /stderr **
multinode_test.go:123: failed to add node to current cluster. args "out/minikube-darwin-arm64 node add -p multinode-952000 -v 3 --alsologtostderr" : exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-952000 -n multinode-952000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-952000 -n multinode-952000: exit status 7 (32.70575ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-952000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/AddNode (0.08s)
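
Exit status 83 is the code minikube returned alongside the "host is not running" guidance above, distinct from the exit status 7 of the post-mortem status check. A harness recovers such codes by unwrapping the process error; a sketch of that unwrapping (the binary name and args mirror the failing invocation above, and actually running it requires a minikube binary on PATH):

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("minikube", "node", "add", "-p", "multinode-952000")
	if err := cmd.Run(); err != nil {
		var ee *exec.ExitError
		if errors.As(err, &ee) {
			// 83 is the code observed above when the control plane is stopped.
			fmt.Println("exit code:", ee.ExitCode())
		}
	}
}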

TestMultiNode/serial/MultiNodeLabels (0.06s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-952000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
multinode_test.go:221: (dbg) Non-zero exit: kubectl --context multinode-952000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]": exit status 1 (26.161083ms)

                                                
                                                
** stderr ** 
	Error in configuration: context was not found for specified context: multinode-952000

                                                
                                                
** /stderr **
multinode_test.go:223: failed to 'kubectl get nodes' with args "kubectl --context multinode-952000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": exit status 1
multinode_test.go:230: failed to decode json from label list: args "kubectl --context multinode-952000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": unexpected end of JSON input
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-952000 -n multinode-952000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-952000 -n multinode-952000: exit status 7 (32.506959ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-952000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/MultiNodeLabels (0.06s)
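
The two-step failure above is worth unpacking: kubectl exited 1 because the kubeconfig has no multinode-952000 context, so the jsonpath template produced no output at all, and decoding that empty string is what surfaces as "unexpected end of JSON input". A minimal reproduction of the decode half:

package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	// kubectl printed nothing to stdout, so the harness hands zero
	// bytes to the JSON decoder.
	var labels []map[string]string
	err := json.Unmarshal([]byte(""), &labels)
	fmt.Println(err) // unexpected end of JSON input
}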

TestMultiNode/serial/ProfileList (0.1s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
multinode_test.go:166: expected profile "multinode-952000" in json of 'profile list' include 3 nodes but have 1 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"multinode-952000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"multinode-952000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.30.0\",\"ClusterName\":\"multinode-952000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.0\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-952000 -n multinode-952000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-952000 -n multinode-952000: exit status 7 (31.893792ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-952000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/ProfileList (0.10s)
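
The assertion above counts entries in the profile's Config.Nodes array, which the JSON dump shows holding a single control-plane node even though MultiNodeRequested is true. A sketch of that count, assuming a pared-down struct that mirrors only the fields visible in the dump (the real config type is far larger; encoding/json ignores everything not declared):

package main

import (
	"encoding/json"
	"fmt"
)

type profileList struct {
	Valid []struct {
		Name   string
		Config struct {
			Nodes []struct {
				Name         string
				ControlPlane bool
				Worker       bool
			}
		}
	} `json:"valid"`
}

func main() {
	// Trimmed from the profile list output above.
	raw := []byte(`{"invalid":[],"valid":[{"Name":"multinode-952000","Config":{"Nodes":[{"Name":"","ControlPlane":true,"Worker":true}]}}]}`)
	var pl profileList
	if err := json.Unmarshal(raw, &pl); err != nil {
		panic(err)
	}
	fmt.Println("nodes:", len(pl.Valid[0].Config.Nodes)) // 1; the test wanted 3
}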

TestMultiNode/serial/CopyFile (0.07s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-952000 status --output json --alsologtostderr
multinode_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-952000 status --output json --alsologtostderr: exit status 7 (32.290334ms)

                                                
                                                
-- stdout --
	{"Name":"multinode-952000","Host":"Stopped","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Stopped","Worker":false}

                                                
                                                
-- /stdout --
** stderr ** 
	I0503 15:12:03.913367    9055 out.go:291] Setting OutFile to fd 1 ...
	I0503 15:12:03.913498    9055 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0503 15:12:03.913501    9055 out.go:304] Setting ErrFile to fd 2...
	I0503 15:12:03.913503    9055 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0503 15:12:03.913629    9055 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18793-7269/.minikube/bin
	I0503 15:12:03.913755    9055 out.go:298] Setting JSON to true
	I0503 15:12:03.913767    9055 mustload.go:65] Loading cluster: multinode-952000
	I0503 15:12:03.913835    9055 notify.go:220] Checking for updates...
	I0503 15:12:03.913973    9055 config.go:182] Loaded profile config "multinode-952000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0503 15:12:03.913979    9055 status.go:255] checking status of multinode-952000 ...
	I0503 15:12:03.914185    9055 status.go:330] multinode-952000 host status = "Stopped" (err=<nil>)
	I0503 15:12:03.914189    9055 status.go:343] host is not running, skipping remaining checks
	I0503 15:12:03.914191    9055 status.go:257] multinode-952000 status: &{Name:multinode-952000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:191: failed to decode json from status: args "out/minikube-darwin-arm64 -p multinode-952000 status --output json --alsologtostderr": json: cannot unmarshal object into Go value of type []cmd.Status
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-952000 -n multinode-952000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-952000 -n multinode-952000: exit status 7 (32.403916ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-952000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/CopyFile (0.07s)
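
The decode error above ("cannot unmarshal object into Go value of type []cmd.Status") is a shape mismatch: with only one node, `status --output json` printed a single JSON object, while the test decodes into a slice. A tolerant decoder would try both shapes; sketched below with a local stand-in for cmd.Status built from the fields in the stdout above:

package main

import (
	"encoding/json"
	"fmt"
)

// Stand-in with the fields shown in the stdout above, not the real cmd.Status.
type Status struct {
	Name, Host, Kubelet, APIServer, Kubeconfig string
	Worker                                     bool
}

// decodeStatuses accepts either a JSON array (multi-node) or a single
// object (one node) and normalizes both to a slice.
func decodeStatuses(raw []byte) ([]Status, error) {
	var many []Status
	if err := json.Unmarshal(raw, &many); err == nil {
		return many, nil
	}
	var one Status
	if err := json.Unmarshal(raw, &one); err != nil {
		return nil, err
	}
	return []Status{one}, nil
}

func main() {
	raw := []byte(`{"Name":"multinode-952000","Host":"Stopped","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Stopped","Worker":false}`)
	sts, err := decodeStatuses(raw)
	fmt.Println(len(sts), err) // 1 <nil>
}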

TestMultiNode/serial/StopNode (0.15s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-952000 node stop m03
multinode_test.go:248: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-952000 node stop m03: exit status 85 (50.27125ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_295f67d8757edd996fe5c1e7ccde72c355ccf4dc_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:250: node stop returned an error. args "out/minikube-darwin-arm64 -p multinode-952000 node stop m03": exit status 85
multinode_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-952000 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-952000 status: exit status 7 (32.691166ms)

                                                
                                                
-- stdout --
	multinode-952000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-952000 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-952000 status --alsologtostderr: exit status 7 (32.088084ms)

                                                
                                                
-- stdout --
	multinode-952000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0503 15:12:04.061544    9063 out.go:291] Setting OutFile to fd 1 ...
	I0503 15:12:04.061706    9063 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0503 15:12:04.061709    9063 out.go:304] Setting ErrFile to fd 2...
	I0503 15:12:04.061712    9063 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0503 15:12:04.061846    9063 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18793-7269/.minikube/bin
	I0503 15:12:04.061976    9063 out.go:298] Setting JSON to false
	I0503 15:12:04.061990    9063 mustload.go:65] Loading cluster: multinode-952000
	I0503 15:12:04.062056    9063 notify.go:220] Checking for updates...
	I0503 15:12:04.062209    9063 config.go:182] Loaded profile config "multinode-952000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0503 15:12:04.062215    9063 status.go:255] checking status of multinode-952000 ...
	I0503 15:12:04.062430    9063 status.go:330] multinode-952000 host status = "Stopped" (err=<nil>)
	I0503 15:12:04.062434    9063 status.go:343] host is not running, skipping remaining checks
	I0503 15:12:04.062436    9063 status.go:257] multinode-952000 status: &{Name:multinode-952000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:267: incorrect number of running kubelets: args "out/minikube-darwin-arm64 -p multinode-952000 status --alsologtostderr": multinode-952000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-952000 -n multinode-952000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-952000 -n multinode-952000: exit status 7 (32.67625ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-952000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StopNode (0.15s)
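
"incorrect number of running kubelets" above suggests the check is a tally over the plain-text status output: after stopping one of three nodes the test presumably wants the remaining kubelets reported as Running, but the whole cluster is down, so the tally is zero. A sketch of such a tally (the substring-counting approach is an assumption for illustration, not a quote of the test's code):

package main

import (
	"fmt"
	"strings"
)

func main() {
	// Status text copied from the stdout above.
	out := "multinode-952000\ntype: Control Plane\nhost: Stopped\nkubelet: Stopped\napiserver: Stopped\nkubeconfig: Stopped\n"
	fmt.Println("running kubelets:", strings.Count(out, "kubelet: Running")) // 0
}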

TestMultiNode/serial/StartAfterStop (57.51s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-952000 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-952000 node start m03 -v=7 --alsologtostderr: exit status 85 (49.979792ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0503 15:12:04.126826    9067 out.go:291] Setting OutFile to fd 1 ...
	I0503 15:12:04.127215    9067 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0503 15:12:04.127219    9067 out.go:304] Setting ErrFile to fd 2...
	I0503 15:12:04.127221    9067 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0503 15:12:04.127356    9067 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18793-7269/.minikube/bin
	I0503 15:12:04.127572    9067 mustload.go:65] Loading cluster: multinode-952000
	I0503 15:12:04.127755    9067 config.go:182] Loaded profile config "multinode-952000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0503 15:12:04.132442    9067 out.go:177] 
	W0503 15:12:04.135450    9067 out.go:239] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	W0503 15:12:04.135455    9067 out.go:239] * 
	* 
	W0503 15:12:04.137386    9067 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0503 15:12:04.141466    9067 out.go:177] 

                                                
                                                
** /stderr **
multinode_test.go:284: I0503 15:12:04.126826    9067 out.go:291] Setting OutFile to fd 1 ...
I0503 15:12:04.127215    9067 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0503 15:12:04.127219    9067 out.go:304] Setting ErrFile to fd 2...
I0503 15:12:04.127221    9067 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0503 15:12:04.127356    9067 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18793-7269/.minikube/bin
I0503 15:12:04.127572    9067 mustload.go:65] Loading cluster: multinode-952000
I0503 15:12:04.127755    9067 config.go:182] Loaded profile config "multinode-952000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
I0503 15:12:04.132442    9067 out.go:177] 
W0503 15:12:04.135450    9067 out.go:239] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
W0503 15:12:04.135455    9067 out.go:239] * 
* 
W0503 15:12:04.137386    9067 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
I0503 15:12:04.141466    9067 out.go:177] 
multinode_test.go:285: node start returned an error. args "out/minikube-darwin-arm64 -p multinode-952000 node start m03 -v=7 --alsologtostderr": exit status 85
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-952000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-952000 status -v=7 --alsologtostderr: exit status 7 (32.351292ms)

                                                
                                                
-- stdout --
	multinode-952000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0503 15:12:04.177036    9069 out.go:291] Setting OutFile to fd 1 ...
	I0503 15:12:04.177184    9069 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0503 15:12:04.177188    9069 out.go:304] Setting ErrFile to fd 2...
	I0503 15:12:04.177190    9069 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0503 15:12:04.177320    9069 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18793-7269/.minikube/bin
	I0503 15:12:04.177445    9069 out.go:298] Setting JSON to false
	I0503 15:12:04.177459    9069 mustload.go:65] Loading cluster: multinode-952000
	I0503 15:12:04.177505    9069 notify.go:220] Checking for updates...
	I0503 15:12:04.177665    9069 config.go:182] Loaded profile config "multinode-952000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0503 15:12:04.177671    9069 status.go:255] checking status of multinode-952000 ...
	I0503 15:12:04.177886    9069 status.go:330] multinode-952000 host status = "Stopped" (err=<nil>)
	I0503 15:12:04.177889    9069 status.go:343] host is not running, skipping remaining checks
	I0503 15:12:04.177891    9069 status.go:257] multinode-952000 status: &{Name:multinode-952000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-952000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-952000 status -v=7 --alsologtostderr: exit status 7 (76.504334ms)

                                                
                                                
-- stdout --
	multinode-952000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0503 15:12:05.552350    9071 out.go:291] Setting OutFile to fd 1 ...
	I0503 15:12:05.552560    9071 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0503 15:12:05.552565    9071 out.go:304] Setting ErrFile to fd 2...
	I0503 15:12:05.552568    9071 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0503 15:12:05.552742    9071 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18793-7269/.minikube/bin
	I0503 15:12:05.552893    9071 out.go:298] Setting JSON to false
	I0503 15:12:05.552908    9071 mustload.go:65] Loading cluster: multinode-952000
	I0503 15:12:05.552941    9071 notify.go:220] Checking for updates...
	I0503 15:12:05.553188    9071 config.go:182] Loaded profile config "multinode-952000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0503 15:12:05.553196    9071 status.go:255] checking status of multinode-952000 ...
	I0503 15:12:05.553449    9071 status.go:330] multinode-952000 host status = "Stopped" (err=<nil>)
	I0503 15:12:05.553453    9071 status.go:343] host is not running, skipping remaining checks
	I0503 15:12:05.553456    9071 status.go:257] multinode-952000 status: &{Name:multinode-952000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-952000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-952000 status -v=7 --alsologtostderr: exit status 7 (76.316625ms)

                                                
                                                
-- stdout --
	multinode-952000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0503 15:12:06.889604    9075 out.go:291] Setting OutFile to fd 1 ...
	I0503 15:12:06.889815    9075 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0503 15:12:06.889819    9075 out.go:304] Setting ErrFile to fd 2...
	I0503 15:12:06.889822    9075 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0503 15:12:06.889999    9075 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18793-7269/.minikube/bin
	I0503 15:12:06.890152    9075 out.go:298] Setting JSON to false
	I0503 15:12:06.890167    9075 mustload.go:65] Loading cluster: multinode-952000
	I0503 15:12:06.890201    9075 notify.go:220] Checking for updates...
	I0503 15:12:06.890423    9075 config.go:182] Loaded profile config "multinode-952000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0503 15:12:06.890430    9075 status.go:255] checking status of multinode-952000 ...
	I0503 15:12:06.890697    9075 status.go:330] multinode-952000 host status = "Stopped" (err=<nil>)
	I0503 15:12:06.890701    9075 status.go:343] host is not running, skipping remaining checks
	I0503 15:12:06.890704    9075 status.go:257] multinode-952000 status: &{Name:multinode-952000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-952000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-952000 status -v=7 --alsologtostderr: exit status 7 (78.075458ms)

                                                
                                                
-- stdout --
	multinode-952000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0503 15:12:09.629057    9077 out.go:291] Setting OutFile to fd 1 ...
	I0503 15:12:09.629231    9077 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0503 15:12:09.629236    9077 out.go:304] Setting ErrFile to fd 2...
	I0503 15:12:09.629238    9077 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0503 15:12:09.629403    9077 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18793-7269/.minikube/bin
	I0503 15:12:09.629553    9077 out.go:298] Setting JSON to false
	I0503 15:12:09.629567    9077 mustload.go:65] Loading cluster: multinode-952000
	I0503 15:12:09.629606    9077 notify.go:220] Checking for updates...
	I0503 15:12:09.629804    9077 config.go:182] Loaded profile config "multinode-952000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0503 15:12:09.629810    9077 status.go:255] checking status of multinode-952000 ...
	I0503 15:12:09.630101    9077 status.go:330] multinode-952000 host status = "Stopped" (err=<nil>)
	I0503 15:12:09.630106    9077 status.go:343] host is not running, skipping remaining checks
	I0503 15:12:09.630109    9077 status.go:257] multinode-952000 status: &{Name:multinode-952000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-952000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-952000 status -v=7 --alsologtostderr: exit status 7 (76.700916ms)

                                                
                                                
-- stdout --
	multinode-952000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0503 15:12:14.417871    9083 out.go:291] Setting OutFile to fd 1 ...
	I0503 15:12:14.418072    9083 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0503 15:12:14.418076    9083 out.go:304] Setting ErrFile to fd 2...
	I0503 15:12:14.418079    9083 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0503 15:12:14.418222    9083 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18793-7269/.minikube/bin
	I0503 15:12:14.418379    9083 out.go:298] Setting JSON to false
	I0503 15:12:14.418393    9083 mustload.go:65] Loading cluster: multinode-952000
	I0503 15:12:14.418428    9083 notify.go:220] Checking for updates...
	I0503 15:12:14.418654    9083 config.go:182] Loaded profile config "multinode-952000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0503 15:12:14.418661    9083 status.go:255] checking status of multinode-952000 ...
	I0503 15:12:14.418918    9083 status.go:330] multinode-952000 host status = "Stopped" (err=<nil>)
	I0503 15:12:14.418922    9083 status.go:343] host is not running, skipping remaining checks
	I0503 15:12:14.418925    9083 status.go:257] multinode-952000 status: &{Name:multinode-952000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-952000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-952000 status -v=7 --alsologtostderr: exit status 7 (79.609583ms)

                                                
                                                
-- stdout --
	multinode-952000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0503 15:12:20.247488    9088 out.go:291] Setting OutFile to fd 1 ...
	I0503 15:12:20.247688    9088 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0503 15:12:20.247693    9088 out.go:304] Setting ErrFile to fd 2...
	I0503 15:12:20.247696    9088 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0503 15:12:20.247862    9088 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18793-7269/.minikube/bin
	I0503 15:12:20.248033    9088 out.go:298] Setting JSON to false
	I0503 15:12:20.248047    9088 mustload.go:65] Loading cluster: multinode-952000
	I0503 15:12:20.248078    9088 notify.go:220] Checking for updates...
	I0503 15:12:20.248302    9088 config.go:182] Loaded profile config "multinode-952000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0503 15:12:20.248309    9088 status.go:255] checking status of multinode-952000 ...
	I0503 15:12:20.248612    9088 status.go:330] multinode-952000 host status = "Stopped" (err=<nil>)
	I0503 15:12:20.248616    9088 status.go:343] host is not running, skipping remaining checks
	I0503 15:12:20.248619    9088 status.go:257] multinode-952000 status: &{Name:multinode-952000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-952000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-952000 status -v=7 --alsologtostderr: exit status 7 (78.304916ms)

                                                
                                                
-- stdout --
	multinode-952000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0503 15:12:28.897486    9090 out.go:291] Setting OutFile to fd 1 ...
	I0503 15:12:28.897667    9090 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0503 15:12:28.897671    9090 out.go:304] Setting ErrFile to fd 2...
	I0503 15:12:28.897674    9090 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0503 15:12:28.897840    9090 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18793-7269/.minikube/bin
	I0503 15:12:28.897999    9090 out.go:298] Setting JSON to false
	I0503 15:12:28.898012    9090 mustload.go:65] Loading cluster: multinode-952000
	I0503 15:12:28.898044    9090 notify.go:220] Checking for updates...
	I0503 15:12:28.898245    9090 config.go:182] Loaded profile config "multinode-952000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0503 15:12:28.898251    9090 status.go:255] checking status of multinode-952000 ...
	I0503 15:12:28.898484    9090 status.go:330] multinode-952000 host status = "Stopped" (err=<nil>)
	I0503 15:12:28.898488    9090 status.go:343] host is not running, skipping remaining checks
	I0503 15:12:28.898490    9090 status.go:257] multinode-952000 status: &{Name:multinode-952000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-952000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-952000 status -v=7 --alsologtostderr: exit status 7 (77.523667ms)

                                                
                                                
-- stdout --
	multinode-952000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0503 15:12:39.944969    9097 out.go:291] Setting OutFile to fd 1 ...
	I0503 15:12:39.945183    9097 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0503 15:12:39.945187    9097 out.go:304] Setting ErrFile to fd 2...
	I0503 15:12:39.945190    9097 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0503 15:12:39.945348    9097 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18793-7269/.minikube/bin
	I0503 15:12:39.945536    9097 out.go:298] Setting JSON to false
	I0503 15:12:39.945551    9097 mustload.go:65] Loading cluster: multinode-952000
	I0503 15:12:39.945596    9097 notify.go:220] Checking for updates...
	I0503 15:12:39.945815    9097 config.go:182] Loaded profile config "multinode-952000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0503 15:12:39.945822    9097 status.go:255] checking status of multinode-952000 ...
	I0503 15:12:39.946141    9097 status.go:330] multinode-952000 host status = "Stopped" (err=<nil>)
	I0503 15:12:39.946146    9097 status.go:343] host is not running, skipping remaining checks
	I0503 15:12:39.946149    9097 status.go:257] multinode-952000 status: &{Name:multinode-952000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-952000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-952000 status -v=7 --alsologtostderr: exit status 7 (75.388166ms)

                                                
                                                
-- stdout --
	multinode-952000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0503 15:13:01.570076    9104 out.go:291] Setting OutFile to fd 1 ...
	I0503 15:13:01.570274    9104 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0503 15:13:01.570278    9104 out.go:304] Setting ErrFile to fd 2...
	I0503 15:13:01.570281    9104 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0503 15:13:01.570446    9104 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18793-7269/.minikube/bin
	I0503 15:13:01.570618    9104 out.go:298] Setting JSON to false
	I0503 15:13:01.570633    9104 mustload.go:65] Loading cluster: multinode-952000
	I0503 15:13:01.570676    9104 notify.go:220] Checking for updates...
	I0503 15:13:01.570883    9104 config.go:182] Loaded profile config "multinode-952000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0503 15:13:01.570890    9104 status.go:255] checking status of multinode-952000 ...
	I0503 15:13:01.571170    9104 status.go:330] multinode-952000 host status = "Stopped" (err=<nil>)
	I0503 15:13:01.571175    9104 status.go:343] host is not running, skipping remaining checks
	I0503 15:13:01.571178    9104 status.go:257] multinode-952000 status: &{Name:multinode-952000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:294: failed to run minikube status. args "out/minikube-darwin-arm64 -p multinode-952000 status -v=7 --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-952000 -n multinode-952000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-952000 -n multinode-952000: exit status 7 (35.250083ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-952000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StartAfterStop (57.51s)
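
Note the timestamps on the repeated status runs above (15:12:04, :05, :06, :09, :14, :20, :28, :39, then 15:13:01): the harness re-polls at growing intervals for nearly a minute before giving up, which is why this subtest charges 57.51s to a cluster that never starts. A sketch of that style of backoff polling; the intervals, cap, and budget below are illustrative, not the harness's actual values:

package main

import (
	"fmt"
	"time"
)

// pollUntilRunning re-runs check at growing intervals until it reports
// "Running" or the time budget is exhausted.
func pollUntilRunning(check func() (string, error), budget time.Duration) error {
	deadline := time.Now().Add(budget)
	delay := time.Second
	for time.Now().Before(deadline) {
		if state, err := check(); err == nil && state == "Running" {
			return nil
		}
		time.Sleep(delay)
		if delay < 20*time.Second {
			delay *= 2 // widen the gap between retries, as in the log above
		}
	}
	return fmt.Errorf("node never reached Running within %s", budget)
}

func main() {
	err := pollUntilRunning(func() (string, error) { return "Stopped", nil }, 5*time.Second)
	fmt.Println(err)
}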

TestMultiNode/serial/RestartKeepsNodes (8.32s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-952000
multinode_test.go:321: (dbg) Run:  out/minikube-darwin-arm64 stop -p multinode-952000
multinode_test.go:321: (dbg) Done: out/minikube-darwin-arm64 stop -p multinode-952000: (2.955892542s)
multinode_test.go:326: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-952000 --wait=true -v=8 --alsologtostderr
multinode_test.go:326: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-952000 --wait=true -v=8 --alsologtostderr: exit status 80 (5.229581292s)

                                                
                                                
-- stdout --
	* [multinode-952000] minikube v1.33.0 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18793
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18793-7269/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18793-7269/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "multinode-952000" primary control-plane node in "multinode-952000" cluster
	* Restarting existing qemu2 VM for "multinode-952000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "multinode-952000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0503 15:13:04.665004    9130 out.go:291] Setting OutFile to fd 1 ...
	I0503 15:13:04.665182    9130 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0503 15:13:04.665187    9130 out.go:304] Setting ErrFile to fd 2...
	I0503 15:13:04.665191    9130 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0503 15:13:04.665360    9130 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18793-7269/.minikube/bin
	I0503 15:13:04.666541    9130 out.go:298] Setting JSON to false
	I0503 15:13:04.685528    9130 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4355,"bootTime":1714770029,"procs":496,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0503 15:13:04.685602    9130 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0503 15:13:04.690420    9130 out.go:177] * [multinode-952000] minikube v1.33.0 on Darwin 14.4.1 (arm64)
	I0503 15:13:04.698568    9130 out.go:177]   - MINIKUBE_LOCATION=18793
	I0503 15:13:04.698616    9130 notify.go:220] Checking for updates...
	I0503 15:13:04.704478    9130 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18793-7269/kubeconfig
	I0503 15:13:04.707477    9130 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0503 15:13:04.710425    9130 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0503 15:13:04.713531    9130 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18793-7269/.minikube
	I0503 15:13:04.716516    9130 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0503 15:13:04.719874    9130 config.go:182] Loaded profile config "multinode-952000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0503 15:13:04.719941    9130 driver.go:392] Setting default libvirt URI to qemu:///system
	I0503 15:13:04.724375    9130 out.go:177] * Using the qemu2 driver based on existing profile
	I0503 15:13:04.731375    9130 start.go:297] selected driver: qemu2
	I0503 15:13:04.731383    9130 start.go:901] validating driver "qemu2" against &{Name:multinode-952000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:multinode-952000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0503 15:13:04.731431    9130 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0503 15:13:04.733880    9130 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0503 15:13:04.733930    9130 cni.go:84] Creating CNI manager for ""
	I0503 15:13:04.733936    9130 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0503 15:13:04.733989    9130 start.go:340] cluster config:
	{Name:multinode-952000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:multinode-952000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0503 15:13:04.738604    9130 iso.go:125] acquiring lock: {Name:mk0ad6b95f1bb51c6112b561369aaae11c537b54 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0503 15:13:04.745255    9130 out.go:177] * Starting "multinode-952000" primary control-plane node in "multinode-952000" cluster
	I0503 15:13:04.749401    9130 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0503 15:13:04.749418    9130 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18793-7269/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-arm64.tar.lz4
	I0503 15:13:04.749424    9130 cache.go:56] Caching tarball of preloaded images
	I0503 15:13:04.749484    9130 preload.go:173] Found /Users/jenkins/minikube-integration/18793-7269/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0503 15:13:04.749489    9130 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
	I0503 15:13:04.749534    9130 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18793-7269/.minikube/profiles/multinode-952000/config.json ...
	I0503 15:13:04.750001    9130 start.go:360] acquireMachinesLock for multinode-952000: {Name:mk75a6a65f97ac1ce21c567594b284ec6b0a9ff6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0503 15:13:04.750040    9130 start.go:364] duration metric: took 31.667µs to acquireMachinesLock for "multinode-952000"
	I0503 15:13:04.750050    9130 start.go:96] Skipping create...Using existing machine configuration
	I0503 15:13:04.750057    9130 fix.go:54] fixHost starting: 
	I0503 15:13:04.750184    9130 fix.go:112] recreateIfNeeded on multinode-952000: state=Stopped err=<nil>
	W0503 15:13:04.750196    9130 fix.go:138] unexpected machine state, will restart: <nil>
	I0503 15:13:04.757467    9130 out.go:177] * Restarting existing qemu2 VM for "multinode-952000" ...
	I0503 15:13:04.761491    9130 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/multinode-952000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18793-7269/.minikube/machines/multinode-952000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/multinode-952000/qemu.pid -device virtio-net-pci,netdev=net0,mac=da:10:c3:10:46:5d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/multinode-952000/disk.qcow2
	I0503 15:13:04.763676    9130 main.go:141] libmachine: STDOUT: 
	I0503 15:13:04.763697    9130 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0503 15:13:04.763725    9130 fix.go:56] duration metric: took 13.667709ms for fixHost
	I0503 15:13:04.763730    9130 start.go:83] releasing machines lock for "multinode-952000", held for 13.686ms
	W0503 15:13:04.763737    9130 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0503 15:13:04.763772    9130 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0503 15:13:04.763777    9130 start.go:728] Will try again in 5 seconds ...
	I0503 15:13:09.765941    9130 start.go:360] acquireMachinesLock for multinode-952000: {Name:mk75a6a65f97ac1ce21c567594b284ec6b0a9ff6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0503 15:13:09.766466    9130 start.go:364] duration metric: took 398.833µs to acquireMachinesLock for "multinode-952000"
	I0503 15:13:09.766631    9130 start.go:96] Skipping create...Using existing machine configuration
	I0503 15:13:09.766654    9130 fix.go:54] fixHost starting: 
	I0503 15:13:09.767537    9130 fix.go:112] recreateIfNeeded on multinode-952000: state=Stopped err=<nil>
	W0503 15:13:09.767563    9130 fix.go:138] unexpected machine state, will restart: <nil>
	I0503 15:13:09.772099    9130 out.go:177] * Restarting existing qemu2 VM for "multinode-952000" ...
	I0503 15:13:09.779234    9130 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/multinode-952000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18793-7269/.minikube/machines/multinode-952000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/multinode-952000/qemu.pid -device virtio-net-pci,netdev=net0,mac=da:10:c3:10:46:5d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/multinode-952000/disk.qcow2
	I0503 15:13:09.789081    9130 main.go:141] libmachine: STDOUT: 
	I0503 15:13:09.789156    9130 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0503 15:13:09.789248    9130 fix.go:56] duration metric: took 22.596917ms for fixHost
	I0503 15:13:09.789271    9130 start.go:83] releasing machines lock for "multinode-952000", held for 22.778875ms
	W0503 15:13:09.789468    9130 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p multinode-952000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-952000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0503 15:13:09.797038    9130 out.go:177] 
	W0503 15:13:09.801248    9130 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0503 15:13:09.801285    9130 out.go:239] * 
	* 
	W0503 15:13:09.803953    9130 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0503 15:13:09.811104    9130 out.go:177] 

** /stderr **
multinode_test.go:328: failed to run minikube start. args "out/minikube-darwin-arm64 node list -p multinode-952000" : exit status 80
multinode_test.go:331: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-952000
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-952000 -n multinode-952000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-952000 -n multinode-952000: exit status 7 (34.362042ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-952000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (8.32s)
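
Note: every failure in this block has the same root cause: the qemu2 driver shells out to socket_vmnet_client, which must first connect to the socket_vmnet daemon's unix socket at /var/run/socket_vmnet, and that connection is refused because nothing is listening. A minimal standalone probe of just that step might look like the sketch below (not part of the test suite; only the socket path is taken from the logs, the rest is illustrative):

	package main

	import (
		"fmt"
		"net"
		"os"
		"time"
	)

	func main() {
		// Dial the same unix socket socket_vmnet_client needs before it can launch qemu.
		conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
		if err != nil {
			// "Connection refused" here is the same failure seen at every driver start above.
			fmt.Fprintf(os.Stderr, "socket_vmnet unreachable: %v\n", err)
			os.Exit(1)
		}
		conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}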

TestMultiNode/serial/DeleteNode (0.11s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-952000 node delete m03
multinode_test.go:416: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-952000 node delete m03: exit status 83 (44.971042ms)

-- stdout --
	* The control-plane node multinode-952000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p multinode-952000"

-- /stdout --
multinode_test.go:418: node delete returned an error. args "out/minikube-darwin-arm64 -p multinode-952000 node delete m03": exit status 83
multinode_test.go:422: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-952000 status --alsologtostderr
multinode_test.go:422: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-952000 status --alsologtostderr: exit status 7 (32.337416ms)

-- stdout --
	multinode-952000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0503 15:13:10.006970    9146 out.go:291] Setting OutFile to fd 1 ...
	I0503 15:13:10.007120    9146 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0503 15:13:10.007123    9146 out.go:304] Setting ErrFile to fd 2...
	I0503 15:13:10.007125    9146 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0503 15:13:10.007249    9146 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18793-7269/.minikube/bin
	I0503 15:13:10.007351    9146 out.go:298] Setting JSON to false
	I0503 15:13:10.007362    9146 mustload.go:65] Loading cluster: multinode-952000
	I0503 15:13:10.007429    9146 notify.go:220] Checking for updates...
	I0503 15:13:10.007573    9146 config.go:182] Loaded profile config "multinode-952000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0503 15:13:10.007579    9146 status.go:255] checking status of multinode-952000 ...
	I0503 15:13:10.007778    9146 status.go:330] multinode-952000 host status = "Stopped" (err=<nil>)
	I0503 15:13:10.007782    9146 status.go:343] host is not running, skipping remaining checks
	I0503 15:13:10.007784    9146 status.go:257] multinode-952000 status: &{Name:multinode-952000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:424: failed to run minikube status. args "out/minikube-darwin-arm64 -p multinode-952000 status --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-952000 -n multinode-952000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-952000 -n multinode-952000: exit status 7 (32.405709ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-952000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/DeleteNode (0.11s)
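
Note: the post-mortem helpers query host state with "status --format={{.Host}}"; the --format argument is a Go text/template executed against the status value printed at status.go:257 above. A sketch of that rendering, assuming a standalone struct with the same field names as the logged value (the struct definition here is illustrative, not minikube's own type):

	package main

	import (
		"os"
		"text/template"
	)

	// Status carries a subset of the fields visible in the logged value at status.go:257.
	type Status struct {
		Name, Host, Kubelet, APIServer, Kubeconfig string
	}

	func main() {
		st := Status{Name: "multinode-952000", Host: "Stopped",
			Kubelet: "Stopped", APIServer: "Stopped", Kubeconfig: "Stopped"}
		// The same template string passed on the command line as --format={{.Host}}.
		tmpl := template.Must(template.New("status").Parse("{{.Host}}"))
		tmpl.Execute(os.Stdout, st) // prints "Stopped", matching the post-mortem output
	}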

TestMultiNode/serial/StopMultiNode (3.57s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-952000 stop
multinode_test.go:345: (dbg) Done: out/minikube-darwin-arm64 -p multinode-952000 stop: (3.438861916s)
multinode_test.go:351: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-952000 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-952000 status: exit status 7 (66.106792ms)

-- stdout --
	multinode-952000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-952000 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-952000 status --alsologtostderr: exit status 7 (33.919417ms)

-- stdout --
	multinode-952000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0503 15:13:13.578784    9173 out.go:291] Setting OutFile to fd 1 ...
	I0503 15:13:13.578933    9173 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0503 15:13:13.578936    9173 out.go:304] Setting ErrFile to fd 2...
	I0503 15:13:13.578938    9173 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0503 15:13:13.579076    9173 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18793-7269/.minikube/bin
	I0503 15:13:13.579200    9173 out.go:298] Setting JSON to false
	I0503 15:13:13.579212    9173 mustload.go:65] Loading cluster: multinode-952000
	I0503 15:13:13.579269    9173 notify.go:220] Checking for updates...
	I0503 15:13:13.579407    9173 config.go:182] Loaded profile config "multinode-952000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0503 15:13:13.579413    9173 status.go:255] checking status of multinode-952000 ...
	I0503 15:13:13.579633    9173 status.go:330] multinode-952000 host status = "Stopped" (err=<nil>)
	I0503 15:13:13.579637    9173 status.go:343] host is not running, skipping remaining checks
	I0503 15:13:13.579639    9173 status.go:257] multinode-952000 status: &{Name:multinode-952000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:364: incorrect number of stopped hosts: args "out/minikube-darwin-arm64 -p multinode-952000 status --alsologtostderr": multinode-952000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

multinode_test.go:368: incorrect number of stopped kubelets: args "out/minikube-darwin-arm64 -p multinode-952000 status --alsologtostderr": multinode-952000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-952000 -n multinode-952000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-952000 -n multinode-952000: exit status 7 (32.531ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-952000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StopMultiNode (3.57s)
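
Note: both assertions above fail on count, not content: after "minikube stop" a two-node cluster should produce two "host: Stopped"/"kubelet: Stopped" blocks, but only the control-plane block exists because the worker was never created. The shape of the check is essentially substring counting, along these lines (a sketch of the assertion's logic, not the exact code in multinode_test.go):

	package main

	import (
		"fmt"
		"strings"
	)

	func main() {
		// The single status block captured in stdout above.
		out := "multinode-952000\ntype: Control Plane\nhost: Stopped\nkubelet: Stopped\napiserver: Stopped\nkubeconfig: Stopped\n"
		want := 2 // control plane plus one worker in the multinode scenario
		if got := strings.Count(out, "host: Stopped"); got != want {
			fmt.Printf("incorrect number of stopped hosts: got %d, want %d\n", got, want)
		}
		if got := strings.Count(out, "kubelet: Stopped"); got != want {
			fmt.Printf("incorrect number of stopped kubelets: got %d, want %d\n", got, want)
		}
	}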

TestMultiNode/serial/RestartMultiNode (5.26s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-952000 --wait=true -v=8 --alsologtostderr --driver=qemu2 
multinode_test.go:376: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-952000 --wait=true -v=8 --alsologtostderr --driver=qemu2 : exit status 80 (5.184795458s)

-- stdout --
	* [multinode-952000] minikube v1.33.0 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18793
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18793-7269/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18793-7269/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "multinode-952000" primary control-plane node in "multinode-952000" cluster
	* Restarting existing qemu2 VM for "multinode-952000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "multinode-952000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0503 15:13:13.643230    9177 out.go:291] Setting OutFile to fd 1 ...
	I0503 15:13:13.643373    9177 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0503 15:13:13.643376    9177 out.go:304] Setting ErrFile to fd 2...
	I0503 15:13:13.643378    9177 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0503 15:13:13.643500    9177 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18793-7269/.minikube/bin
	I0503 15:13:13.644463    9177 out.go:298] Setting JSON to false
	I0503 15:13:13.660481    9177 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4364,"bootTime":1714770029,"procs":493,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0503 15:13:13.660542    9177 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0503 15:13:13.664481    9177 out.go:177] * [multinode-952000] minikube v1.33.0 on Darwin 14.4.1 (arm64)
	I0503 15:13:13.672557    9177 out.go:177]   - MINIKUBE_LOCATION=18793
	I0503 15:13:13.676539    9177 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18793-7269/kubeconfig
	I0503 15:13:13.672618    9177 notify.go:220] Checking for updates...
	I0503 15:13:13.679504    9177 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0503 15:13:13.682640    9177 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0503 15:13:13.685382    9177 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18793-7269/.minikube
	I0503 15:13:13.688464    9177 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0503 15:13:13.691792    9177 config.go:182] Loaded profile config "multinode-952000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0503 15:13:13.692056    9177 driver.go:392] Setting default libvirt URI to qemu:///system
	I0503 15:13:13.696478    9177 out.go:177] * Using the qemu2 driver based on existing profile
	I0503 15:13:13.703403    9177 start.go:297] selected driver: qemu2
	I0503 15:13:13.703409    9177 start.go:901] validating driver "qemu2" against &{Name:multinode-952000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:multinode-952000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0503 15:13:13.703460    9177 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0503 15:13:13.705764    9177 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0503 15:13:13.705804    9177 cni.go:84] Creating CNI manager for ""
	I0503 15:13:13.705808    9177 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0503 15:13:13.705871    9177 start.go:340] cluster config:
	{Name:multinode-952000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:multinode-952000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0503 15:13:13.710421    9177 iso.go:125] acquiring lock: {Name:mk0ad6b95f1bb51c6112b561369aaae11c537b54 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0503 15:13:13.717491    9177 out.go:177] * Starting "multinode-952000" primary control-plane node in "multinode-952000" cluster
	I0503 15:13:13.721473    9177 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0503 15:13:13.721488    9177 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18793-7269/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-arm64.tar.lz4
	I0503 15:13:13.721506    9177 cache.go:56] Caching tarball of preloaded images
	I0503 15:13:13.721565    9177 preload.go:173] Found /Users/jenkins/minikube-integration/18793-7269/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0503 15:13:13.721571    9177 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
	I0503 15:13:13.721630    9177 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18793-7269/.minikube/profiles/multinode-952000/config.json ...
	I0503 15:13:13.722086    9177 start.go:360] acquireMachinesLock for multinode-952000: {Name:mk75a6a65f97ac1ce21c567594b284ec6b0a9ff6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0503 15:13:13.722119    9177 start.go:364] duration metric: took 25.625µs to acquireMachinesLock for "multinode-952000"
	I0503 15:13:13.722130    9177 start.go:96] Skipping create...Using existing machine configuration
	I0503 15:13:13.722135    9177 fix.go:54] fixHost starting: 
	I0503 15:13:13.722259    9177 fix.go:112] recreateIfNeeded on multinode-952000: state=Stopped err=<nil>
	W0503 15:13:13.722266    9177 fix.go:138] unexpected machine state, will restart: <nil>
	I0503 15:13:13.730494    9177 out.go:177] * Restarting existing qemu2 VM for "multinode-952000" ...
	I0503 15:13:13.734498    9177 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/multinode-952000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18793-7269/.minikube/machines/multinode-952000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/multinode-952000/qemu.pid -device virtio-net-pci,netdev=net0,mac=da:10:c3:10:46:5d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/multinode-952000/disk.qcow2
	I0503 15:13:13.736563    9177 main.go:141] libmachine: STDOUT: 
	I0503 15:13:13.736593    9177 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0503 15:13:13.736620    9177 fix.go:56] duration metric: took 14.485958ms for fixHost
	I0503 15:13:13.736623    9177 start.go:83] releasing machines lock for "multinode-952000", held for 14.500208ms
	W0503 15:13:13.736632    9177 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0503 15:13:13.736670    9177 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0503 15:13:13.736675    9177 start.go:728] Will try again in 5 seconds ...
	I0503 15:13:18.738735    9177 start.go:360] acquireMachinesLock for multinode-952000: {Name:mk75a6a65f97ac1ce21c567594b284ec6b0a9ff6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0503 15:13:18.739107    9177 start.go:364] duration metric: took 288.417µs to acquireMachinesLock for "multinode-952000"
	I0503 15:13:18.739604    9177 start.go:96] Skipping create...Using existing machine configuration
	I0503 15:13:18.739627    9177 fix.go:54] fixHost starting: 
	I0503 15:13:18.740350    9177 fix.go:112] recreateIfNeeded on multinode-952000: state=Stopped err=<nil>
	W0503 15:13:18.740379    9177 fix.go:138] unexpected machine state, will restart: <nil>
	I0503 15:13:18.748717    9177 out.go:177] * Restarting existing qemu2 VM for "multinode-952000" ...
	I0503 15:13:18.753052    9177 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/multinode-952000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18793-7269/.minikube/machines/multinode-952000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/multinode-952000/qemu.pid -device virtio-net-pci,netdev=net0,mac=da:10:c3:10:46:5d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/multinode-952000/disk.qcow2
	I0503 15:13:18.762450    9177 main.go:141] libmachine: STDOUT: 
	I0503 15:13:18.762513    9177 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0503 15:13:18.762631    9177 fix.go:56] duration metric: took 23.002708ms for fixHost
	I0503 15:13:18.762648    9177 start.go:83] releasing machines lock for "multinode-952000", held for 23.519292ms
	W0503 15:13:18.762784    9177 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p multinode-952000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-952000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0503 15:13:18.768786    9177 out.go:177] 
	W0503 15:13:18.772850    9177 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0503 15:13:18.772880    9177 out.go:239] * 
	* 
	W0503 15:13:18.775381    9177 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0503 15:13:18.783739    9177 out.go:177] 

** /stderr **
multinode_test.go:378: failed to start cluster. args "out/minikube-darwin-arm64 start -p multinode-952000 --wait=true -v=8 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-952000 -n multinode-952000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-952000 -n multinode-952000: exit status 7 (71.668333ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-952000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/RestartMultiNode (5.26s)
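
Note: the libmachine command line repeated above ends qemu's network configuration with "-netdev socket,id=net0,fd=3". socket_vmnet_client is expected to dial /var/run/socket_vmnet and hand the connected descriptor to qemu as fd 3, the slot Go assigns to the first entry of exec.Cmd.ExtraFiles (fds 0-2 are stdio), which is why the whole start fails as soon as that dial is refused. A sketch of the handoff mechanism (illustrative only, not minikube's or socket_vmnet's actual source):

	package main

	import (
		"fmt"
		"net"
		"os"
		"os/exec"
	)

	func main() {
		conn, err := net.Dial("unix", "/var/run/socket_vmnet")
		if err != nil {
			fmt.Fprintln(os.Stderr, "dial failed, as in the logs above:", err)
			os.Exit(1)
		}
		f, err := conn.(*net.UnixConn).File() // duplicate the socket as an *os.File
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		// ExtraFiles[0] becomes fd 3 in the child, matching "-netdev socket,id=net0,fd=3".
		cmd := exec.Command("qemu-system-aarch64", "-netdev", "socket,id=net0,fd=3")
		cmd.ExtraFiles = []*os.File{f}
		if err := cmd.Start(); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}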

TestMultiNode/serial/ValidateNameConflict (20.13s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-952000
multinode_test.go:464: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-952000-m01 --driver=qemu2 
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-952000-m01 --driver=qemu2 : exit status 80 (9.9002715s)

-- stdout --
	* [multinode-952000-m01] minikube v1.33.0 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18793
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18793-7269/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18793-7269/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "multinode-952000-m01" primary control-plane node in "multinode-952000-m01" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-952000-m01" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-952000-m01" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-952000-m02 --driver=qemu2 
multinode_test.go:472: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-952000-m02 --driver=qemu2 : exit status 80 (9.9695295s)

-- stdout --
	* [multinode-952000-m02] minikube v1.33.0 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18793
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18793-7269/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18793-7269/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "multinode-952000-m02" primary control-plane node in "multinode-952000-m02" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-952000-m02" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-952000-m02" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:474: failed to start profile. args "out/minikube-darwin-arm64 start -p multinode-952000-m02 --driver=qemu2 " : exit status 80
multinode_test.go:479: (dbg) Run:  out/minikube-darwin-arm64 node add -p multinode-952000
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p multinode-952000: exit status 83 (83.587958ms)

-- stdout --
	* The control-plane node multinode-952000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p multinode-952000"

-- /stdout --
multinode_test.go:484: (dbg) Run:  out/minikube-darwin-arm64 delete -p multinode-952000-m02
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-952000 -n multinode-952000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-952000 -n multinode-952000: exit status 7 (32.358708ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-952000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/ValidateNameConflict (20.13s)
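
Note: this test provokes a naming collision on purpose: worker nodes in the logs follow the pattern <profile>-mNN (node "m03" was deleted above, and the conflicting profiles are multinode-952000-m01 and -m02), so a standalone profile named like another profile's node must be rejected. A sketch of that pattern and the conflict check (the regular expression and helper below are assumptions for illustration, not minikube's code):

	package main

	import (
		"fmt"
		"regexp"
	)

	func main() {
		profile := "multinode-952000"
		// Node names seen in the logs: <profile>-m01, -m02, -m03, ...
		nodeName := func(n int) string { return fmt.Sprintf("%s-m%02d", profile, n) }
		fmt.Println(nodeName(2)) // multinode-952000-m02

		// A new profile named like an existing profile's node is a conflict,
		// which is what starting "multinode-952000-m01" above exercises.
		conflict := regexp.MustCompile("^" + regexp.QuoteMeta(profile) + `-m\d{2}$`)
		fmt.Println(conflict.MatchString("multinode-952000-m01")) // true
	}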

TestPreload (10.01s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-darwin-arm64 start -p test-preload-957000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4
preload_test.go:44: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p test-preload-957000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4: exit status 80 (9.836467291s)

-- stdout --
	* [test-preload-957000] minikube v1.33.0 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18793
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18793-7269/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18793-7269/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "test-preload-957000" primary control-plane node in "test-preload-957000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "test-preload-957000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0503 15:13:39.166079    9244 out.go:291] Setting OutFile to fd 1 ...
	I0503 15:13:39.166217    9244 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0503 15:13:39.166220    9244 out.go:304] Setting ErrFile to fd 2...
	I0503 15:13:39.166223    9244 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0503 15:13:39.166345    9244 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18793-7269/.minikube/bin
	I0503 15:13:39.167400    9244 out.go:298] Setting JSON to false
	I0503 15:13:39.183444    9244 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4390,"bootTime":1714770029,"procs":496,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0503 15:13:39.183512    9244 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0503 15:13:39.189772    9244 out.go:177] * [test-preload-957000] minikube v1.33.0 on Darwin 14.4.1 (arm64)
	I0503 15:13:39.197747    9244 out.go:177]   - MINIKUBE_LOCATION=18793
	I0503 15:13:39.201794    9244 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18793-7269/kubeconfig
	I0503 15:13:39.197797    9244 notify.go:220] Checking for updates...
	I0503 15:13:39.208705    9244 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0503 15:13:39.212717    9244 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0503 15:13:39.215765    9244 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18793-7269/.minikube
	I0503 15:13:39.222704    9244 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0503 15:13:39.227032    9244 config.go:182] Loaded profile config "multinode-952000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0503 15:13:39.227080    9244 driver.go:392] Setting default libvirt URI to qemu:///system
	I0503 15:13:39.231712    9244 out.go:177] * Using the qemu2 driver based on user configuration
	I0503 15:13:39.238600    9244 start.go:297] selected driver: qemu2
	I0503 15:13:39.238607    9244 start.go:901] validating driver "qemu2" against <nil>
	I0503 15:13:39.238615    9244 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0503 15:13:39.241018    9244 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0503 15:13:39.243713    9244 out.go:177] * Automatically selected the socket_vmnet network
	I0503 15:13:39.246841    9244 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0503 15:13:39.246886    9244 cni.go:84] Creating CNI manager for ""
	I0503 15:13:39.246896    9244 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0503 15:13:39.246902    9244 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0503 15:13:39.246948    9244 start.go:340] cluster config:
	{Name:test-preload-957000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-957000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0503 15:13:39.251496    9244 iso.go:125] acquiring lock: {Name:mk0ad6b95f1bb51c6112b561369aaae11c537b54 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0503 15:13:39.258712    9244 out.go:177] * Starting "test-preload-957000" primary control-plane node in "test-preload-957000" cluster
	I0503 15:13:39.262791    9244 preload.go:132] Checking if preload exists for k8s version v1.24.4 and runtime docker
	I0503 15:13:39.262879    9244 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18793-7269/.minikube/profiles/test-preload-957000/config.json ...
	I0503 15:13:39.262902    9244 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18793-7269/.minikube/profiles/test-preload-957000/config.json: {Name:mk1c059262e4408b169fba91409b421a5edd591a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0503 15:13:39.262932    9244 cache.go:107] acquiring lock: {Name:mke48e50e1b163c1693d62c6d4b46294eaaa0554 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0503 15:13:39.262952    9244 cache.go:107] acquiring lock: {Name:mk715d01b27ed2db2b95a3f299bb55abdfe80d26 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0503 15:13:39.262953    9244 cache.go:107] acquiring lock: {Name:mka8651d350db303fa2030d72737dc753f5a7c75 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0503 15:13:39.263001    9244 cache.go:107] acquiring lock: {Name:mk3e7d3d465fe804c39eef6f0647eb8931d26032 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0503 15:13:39.263015    9244 cache.go:107] acquiring lock: {Name:mk6046bf86f5bdc07039bf438394d2b54194fdfc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0503 15:13:39.263158    9244 cache.go:107] acquiring lock: {Name:mk95bbcd2e8e8c015a48536871a2e26bbb70aa86 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0503 15:13:39.263181    9244 cache.go:107] acquiring lock: {Name:mk60cc5d7b7cec958b2a3bd216a36582c510fb96 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0503 15:13:39.263273    9244 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.24.4
	I0503 15:13:39.263275    9244 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0503 15:13:39.263288    9244 image.go:134] retrieving image: registry.k8s.io/pause:3.7
	I0503 15:13:39.263366    9244 start.go:360] acquireMachinesLock for test-preload-957000: {Name:mk75a6a65f97ac1ce21c567594b284ec6b0a9ff6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0503 15:13:39.263358    9244 cache.go:107] acquiring lock: {Name:mk131f018a9d4a5d0edc469bbee859fcd1c75500 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0503 15:13:39.263426    9244 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0503 15:13:39.263446    9244 start.go:364] duration metric: took 50.167µs to acquireMachinesLock for "test-preload-957000"
	I0503 15:13:39.263503    9244 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.24.4
	I0503 15:13:39.263504    9244 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0503 15:13:39.263504    9244 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0503 15:13:39.263462    9244 start.go:93] Provisioning new machine with config: &{Name:test-preload-957000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-957000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0503 15:13:39.263568    9244 start.go:125] createHost starting for "" (driver="qemu2")
	I0503 15:13:39.267712    9244 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0503 15:13:39.263661    9244 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.24.4
	I0503 15:13:39.274678    9244 image.go:177] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0503 15:13:39.274829    9244 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.4
	I0503 15:13:39.275730    9244 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0503 15:13:39.275799    9244 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0503 15:13:39.279565    9244 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.4
	I0503 15:13:39.279658    9244 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0503 15:13:39.279721    9244 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.4
	I0503 15:13:39.279831    9244 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0503 15:13:39.286127    9244 start.go:159] libmachine.API.Create for "test-preload-957000" (driver="qemu2")
	I0503 15:13:39.286146    9244 client.go:168] LocalClient.Create starting
	I0503 15:13:39.286227    9244 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18793-7269/.minikube/certs/ca.pem
	I0503 15:13:39.286271    9244 main.go:141] libmachine: Decoding PEM data...
	I0503 15:13:39.286282    9244 main.go:141] libmachine: Parsing certificate...
	I0503 15:13:39.286328    9244 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18793-7269/.minikube/certs/cert.pem
	I0503 15:13:39.286351    9244 main.go:141] libmachine: Decoding PEM data...
	I0503 15:13:39.286359    9244 main.go:141] libmachine: Parsing certificate...
	I0503 15:13:39.286643    9244 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18793-7269/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18793-7269/.minikube/cache/iso/arm64/minikube-v1.33.0-1714498396-18779-arm64.iso...
	I0503 15:13:39.434098    9244 main.go:141] libmachine: Creating SSH key...
	I0503 15:13:39.496668    9244 main.go:141] libmachine: Creating Disk image...
	I0503 15:13:39.496698    9244 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0503 15:13:39.496882    9244 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/test-preload-957000/disk.qcow2.raw /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/test-preload-957000/disk.qcow2
	I0503 15:13:39.510121    9244 main.go:141] libmachine: STDOUT: 
	I0503 15:13:39.510143    9244 main.go:141] libmachine: STDERR: 
	I0503 15:13:39.510195    9244 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/test-preload-957000/disk.qcow2 +20000M
	I0503 15:13:39.523257    9244 main.go:141] libmachine: STDOUT: Image resized.
	
	I0503 15:13:39.523275    9244 main.go:141] libmachine: STDERR: 
	I0503 15:13:39.523288    9244 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/test-preload-957000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/test-preload-957000/disk.qcow2
	I0503 15:13:39.523291    9244 main.go:141] libmachine: Starting QEMU VM...
	I0503 15:13:39.523325    9244 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/test-preload-957000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18793-7269/.minikube/machines/test-preload-957000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/test-preload-957000/qemu.pid -device virtio-net-pci,netdev=net0,mac=2a:d5:38:23:01:e0 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/test-preload-957000/disk.qcow2
	I0503 15:13:39.525403    9244 main.go:141] libmachine: STDOUT: 
	I0503 15:13:39.525421    9244 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0503 15:13:39.525439    9244 client.go:171] duration metric: took 239.293625ms to LocalClient.Create
	I0503 15:13:40.158012    9244 cache.go:162] opening:  /Users/jenkins/minikube-integration/18793-7269/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4
	I0503 15:13:40.195476    9244 cache.go:162] opening:  /Users/jenkins/minikube-integration/18793-7269/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0503 15:13:40.238115    9244 cache.go:162] opening:  /Users/jenkins/minikube-integration/18793-7269/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4
	I0503 15:13:40.266408    9244 cache.go:162] opening:  /Users/jenkins/minikube-integration/18793-7269/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4
	W0503 15:13:40.323707    9244 image.go:265] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0503 15:13:40.323778    9244 cache.go:162] opening:  /Users/jenkins/minikube-integration/18793-7269/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	W0503 15:13:40.350401    9244 image.go:265] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0503 15:13:40.350500    9244 cache.go:162] opening:  /Users/jenkins/minikube-integration/18793-7269/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0503 15:13:40.375539    9244 cache.go:157] /Users/jenkins/minikube-integration/18793-7269/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 exists
	I0503 15:13:40.375575    9244 cache.go:96] cache image "registry.k8s.io/pause:3.7" -> "/Users/jenkins/minikube-integration/18793-7269/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7" took 1.11259975s
	I0503 15:13:40.375606    9244 cache.go:80] save to tar file registry.k8s.io/pause:3.7 -> /Users/jenkins/minikube-integration/18793-7269/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 succeeded
	I0503 15:13:40.387490    9244 cache.go:162] opening:  /Users/jenkins/minikube-integration/18793-7269/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4
	I0503 15:13:40.390024    9244 cache.go:162] opening:  /Users/jenkins/minikube-integration/18793-7269/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I0503 15:13:41.045426    9244 cache.go:157] /Users/jenkins/minikube-integration/18793-7269/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0503 15:13:41.045477    9244 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/18793-7269/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 1.782584458s
	I0503 15:13:41.045505    9244 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/18793-7269/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0503 15:13:41.525685    9244 start.go:128] duration metric: took 2.26214075s to createHost
	I0503 15:13:41.525741    9244 start.go:83] releasing machines lock for "test-preload-957000", held for 2.262338083s
	W0503 15:13:41.525819    9244 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0503 15:13:41.536183    9244 out.go:177] * Deleting "test-preload-957000" in qemu2 ...
	W0503 15:13:41.567167    9244 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0503 15:13:41.567202    9244 start.go:728] Will try again in 5 seconds ...
	I0503 15:13:42.915671    9244 cache.go:157] /Users/jenkins/minikube-integration/18793-7269/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 exists
	I0503 15:13:42.915723    9244 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.8.6" -> "/Users/jenkins/minikube-integration/18793-7269/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6" took 3.652643625s
	I0503 15:13:42.915753    9244 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.8.6 -> /Users/jenkins/minikube-integration/18793-7269/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 succeeded
	I0503 15:13:43.050139    9244 cache.go:157] /Users/jenkins/minikube-integration/18793-7269/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4 exists
	I0503 15:13:43.050225    9244 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.24.4" -> "/Users/jenkins/minikube-integration/18793-7269/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4" took 3.787167166s
	I0503 15:13:43.050256    9244 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.24.4 -> /Users/jenkins/minikube-integration/18793-7269/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4 succeeded
	I0503 15:13:43.755562    9244 cache.go:157] /Users/jenkins/minikube-integration/18793-7269/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4 exists
	I0503 15:13:43.755666    9244 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.24.4" -> "/Users/jenkins/minikube-integration/18793-7269/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4" took 4.492827833s
	I0503 15:13:43.755696    9244 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.24.4 -> /Users/jenkins/minikube-integration/18793-7269/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4 succeeded
	I0503 15:13:44.814262    9244 cache.go:157] /Users/jenkins/minikube-integration/18793-7269/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4 exists
	I0503 15:13:44.814311    9244 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.24.4" -> "/Users/jenkins/minikube-integration/18793-7269/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4" took 5.551493958s
	I0503 15:13:44.814335    9244 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.24.4 -> /Users/jenkins/minikube-integration/18793-7269/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4 succeeded
	I0503 15:13:45.927000    9244 cache.go:157] /Users/jenkins/minikube-integration/18793-7269/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4 exists
	I0503 15:13:45.927049    9244 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.24.4" -> "/Users/jenkins/minikube-integration/18793-7269/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4" took 6.663853375s
	I0503 15:13:45.927083    9244 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.24.4 -> /Users/jenkins/minikube-integration/18793-7269/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4 succeeded
	I0503 15:13:46.567221    9244 start.go:360] acquireMachinesLock for test-preload-957000: {Name:mk75a6a65f97ac1ce21c567594b284ec6b0a9ff6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0503 15:13:46.567656    9244 start.go:364] duration metric: took 347.917µs to acquireMachinesLock for "test-preload-957000"
	I0503 15:13:46.567758    9244 start.go:93] Provisioning new machine with config: &{Name:test-preload-957000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-957000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0503 15:13:46.568060    9244 start.go:125] createHost starting for "" (driver="qemu2")
	I0503 15:13:46.579670    9244 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0503 15:13:46.630295    9244 start.go:159] libmachine.API.Create for "test-preload-957000" (driver="qemu2")
	I0503 15:13:46.630345    9244 client.go:168] LocalClient.Create starting
	I0503 15:13:46.630456    9244 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18793-7269/.minikube/certs/ca.pem
	I0503 15:13:46.630518    9244 main.go:141] libmachine: Decoding PEM data...
	I0503 15:13:46.630532    9244 main.go:141] libmachine: Parsing certificate...
	I0503 15:13:46.630621    9244 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18793-7269/.minikube/certs/cert.pem
	I0503 15:13:46.630664    9244 main.go:141] libmachine: Decoding PEM data...
	I0503 15:13:46.630677    9244 main.go:141] libmachine: Parsing certificate...
	I0503 15:13:46.631178    9244 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18793-7269/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18793-7269/.minikube/cache/iso/arm64/minikube-v1.33.0-1714498396-18779-arm64.iso...
	I0503 15:13:46.786758    9244 main.go:141] libmachine: Creating SSH key...
	I0503 15:13:46.898707    9244 main.go:141] libmachine: Creating Disk image...
	I0503 15:13:46.898713    9244 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0503 15:13:46.898903    9244 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/test-preload-957000/disk.qcow2.raw /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/test-preload-957000/disk.qcow2
	I0503 15:13:46.911817    9244 main.go:141] libmachine: STDOUT: 
	I0503 15:13:46.911839    9244 main.go:141] libmachine: STDERR: 
	I0503 15:13:46.911889    9244 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/test-preload-957000/disk.qcow2 +20000M
	I0503 15:13:46.923077    9244 main.go:141] libmachine: STDOUT: Image resized.
	
	I0503 15:13:46.923096    9244 main.go:141] libmachine: STDERR: 
	I0503 15:13:46.923106    9244 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/test-preload-957000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/test-preload-957000/disk.qcow2
	I0503 15:13:46.923111    9244 main.go:141] libmachine: Starting QEMU VM...
	I0503 15:13:46.923147    9244 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/test-preload-957000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18793-7269/.minikube/machines/test-preload-957000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/test-preload-957000/qemu.pid -device virtio-net-pci,netdev=net0,mac=16:89:21:48:2e:65 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/test-preload-957000/disk.qcow2
	I0503 15:13:46.924938    9244 main.go:141] libmachine: STDOUT: 
	I0503 15:13:46.924956    9244 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0503 15:13:46.924974    9244 client.go:171] duration metric: took 294.63175ms to LocalClient.Create
	I0503 15:13:48.925624    9244 start.go:128] duration metric: took 2.357548666s to createHost
	I0503 15:13:48.925686    9244 start.go:83] releasing machines lock for "test-preload-957000", held for 2.358061208s
	W0503 15:13:48.925908    9244 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p test-preload-957000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p test-preload-957000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0503 15:13:48.936490    9244 out.go:177] 
	W0503 15:13:48.943605    9244 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0503 15:13:48.943641    9244 out.go:239] * 
	* 
	W0503 15:13:48.946338    9244 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0503 15:13:48.955504    9244 out.go:177] 

** /stderr **
preload_test.go:46: out/minikube-darwin-arm64 start -p test-preload-957000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4 failed: exit status 80
panic.go:626: *** TestPreload FAILED at 2024-05-03 15:13:48.973996 -0700 PDT m=+663.214969293
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p test-preload-957000 -n test-preload-957000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p test-preload-957000 -n test-preload-957000: exit status 7 (66.894542ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "test-preload-957000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "test-preload-957000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p test-preload-957000
--- FAIL: TestPreload (10.01s)
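Every qemu2 VM creation in this run dies at the same step: socket_vmnet_client cannot dial the daemon's unix socket at /var/run/socket_vmnet, so `minikube start --driver=qemu2` fails before the VM ever boots. A minimal health check for the daemon on the build host, assuming the /opt/socket_vmnet install prefix seen in the log (the launchd label below is socket_vmnet's documented default and the gateway value is only an example; neither is confirmed by this run):

	# Does the unix socket exist, and is the daemon alive?
	ls -l /var/run/socket_vmnet
	pgrep -fl socket_vmnet
	# If installed as a launchd service, restart it (label assumed):
	sudo launchctl kickstart -k system/io.github.lima-vm.socket_vmnet
	# Or run the daemon in the foreground for debugging (example gateway):
	sudo /opt/socket_vmnet/bin/socket_vmnet --vmnet-gateway=192.168.105.1 /var/run/socket_vmnet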

TestScheduledStopUnix (10.16s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-darwin-arm64 start -p scheduled-stop-118000 --memory=2048 --driver=qemu2 
scheduled_stop_test.go:128: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p scheduled-stop-118000 --memory=2048 --driver=qemu2 : exit status 80 (9.976418834s)

-- stdout --
	* [scheduled-stop-118000] minikube v1.33.0 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18793
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18793-7269/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18793-7269/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "scheduled-stop-118000" primary control-plane node in "scheduled-stop-118000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "scheduled-stop-118000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p scheduled-stop-118000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
scheduled_stop_test.go:130: starting minikube: exit status 80

-- stdout --
	* [scheduled-stop-118000] minikube v1.33.0 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18793
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18793-7269/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18793-7269/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "scheduled-stop-118000" primary control-plane node in "scheduled-stop-118000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "scheduled-stop-118000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p scheduled-stop-118000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
panic.go:626: *** TestScheduledStopUnix FAILED at 2024-05-03 15:13:59.124025 -0700 PDT m=+673.365230459
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p scheduled-stop-118000 -n scheduled-stop-118000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p scheduled-stop-118000 -n scheduled-stop-118000: exit status 7 (73.8145ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "scheduled-stop-118000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "scheduled-stop-118000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p scheduled-stop-118000
--- FAIL: TestScheduledStopUnix (10.16s)
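Once the networking issue above is resolved, a single failing test can be iterated on without rerunning the whole suite. A sketch assuming a minikube source checkout on this agent, following the TEST_ARGS form in the project's integration-test documentation (the driver flag and test name are placeholders to adjust):

	# Rebuild the binary under test, then run one test by name:
	make out/minikube-darwin-arm64
	make integration -e TEST_ARGS="-minikube-start-args=--driver=qemu2 -test.run TestScheduledStopUnix"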

TestSkaffold (12.57s)

=== RUN   TestSkaffold
skaffold_test.go:59: (dbg) Run:  /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/skaffold.exe2754889707 version
skaffold_test.go:63: skaffold version: v2.11.0
skaffold_test.go:66: (dbg) Run:  out/minikube-darwin-arm64 start -p skaffold-143000 --memory=2600 --driver=qemu2 
skaffold_test.go:66: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p skaffold-143000 --memory=2600 --driver=qemu2 : exit status 80 (9.977941625s)

-- stdout --
	* [skaffold-143000] minikube v1.33.0 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18793
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18793-7269/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18793-7269/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "skaffold-143000" primary control-plane node in "skaffold-143000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "skaffold-143000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p skaffold-143000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
skaffold_test.go:68: starting minikube: exit status 80

-- stdout --
	* [skaffold-143000] minikube v1.33.0 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18793
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18793-7269/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18793-7269/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "skaffold-143000" primary control-plane node in "skaffold-143000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "skaffold-143000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p skaffold-143000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
panic.go:626: *** TestSkaffold FAILED at 2024-05-03 15:14:11.700606 -0700 PDT m=+685.942100751
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p skaffold-143000 -n skaffold-143000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p skaffold-143000 -n skaffold-143000: exit status 7 (65.304041ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "skaffold-143000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "skaffold-143000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p skaffold-143000
--- FAIL: TestSkaffold (12.57s)

TestRunningBinaryUpgrade (592.79s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.26.0.1350472335 start -p running-upgrade-916000 --memory=2200 --vm-driver=qemu2 
version_upgrade_test.go:120: (dbg) Done: /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.26.0.1350472335 start -p running-upgrade-916000 --memory=2200 --vm-driver=qemu2 : (52.81713975s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-darwin-arm64 start -p running-upgrade-916000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:130: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p running-upgrade-916000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (8m22.654474667s)

-- stdout --
	* [running-upgrade-916000] minikube v1.33.0 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18793
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18793-7269/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18793-7269/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.30.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.0
	* Using the qemu2 driver based on existing profile
	* Starting "running-upgrade-916000" primary control-plane node in "running-upgrade-916000" cluster
	* Updating the running qemu2 "running-upgrade-916000" VM ...
	* Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: storage-provisioner
	
	

-- /stdout --
** stderr ** 
	I0503 15:15:49.398068    9665 out.go:291] Setting OutFile to fd 1 ...
	I0503 15:15:49.398209    9665 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0503 15:15:49.398215    9665 out.go:304] Setting ErrFile to fd 2...
	I0503 15:15:49.398218    9665 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0503 15:15:49.398346    9665 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18793-7269/.minikube/bin
	I0503 15:15:49.399455    9665 out.go:298] Setting JSON to false
	I0503 15:15:49.416930    9665 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4520,"bootTime":1714770029,"procs":497,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0503 15:15:49.416996    9665 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0503 15:15:49.422307    9665 out.go:177] * [running-upgrade-916000] minikube v1.33.0 on Darwin 14.4.1 (arm64)
	I0503 15:15:49.430323    9665 out.go:177]   - MINIKUBE_LOCATION=18793
	I0503 15:15:49.430370    9665 notify.go:220] Checking for updates...
	I0503 15:15:49.435207    9665 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18793-7269/kubeconfig
	I0503 15:15:49.439371    9665 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0503 15:15:49.442273    9665 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0503 15:15:49.445301    9665 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18793-7269/.minikube
	I0503 15:15:49.448319    9665 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0503 15:15:49.450065    9665 config.go:182] Loaded profile config "running-upgrade-916000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0503 15:15:49.453245    9665 out.go:177] * Kubernetes 1.30.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.0
	I0503 15:15:49.456276    9665 driver.go:392] Setting default libvirt URI to qemu:///system
	I0503 15:15:49.460157    9665 out.go:177] * Using the qemu2 driver based on existing profile
	I0503 15:15:49.467318    9665 start.go:297] selected driver: qemu2
	I0503 15:15:49.467325    9665 start.go:901] validating driver "qemu2" against &{Name:running-upgrade-916000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:51188 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-916000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0503 15:15:49.467386    9665 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0503 15:15:49.469876    9665 cni.go:84] Creating CNI manager for ""
	I0503 15:15:49.469896    9665 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0503 15:15:49.469927    9665 start.go:340] cluster config:
	{Name:running-upgrade-916000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:51188 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-916000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0503 15:15:49.469976    9665 iso.go:125] acquiring lock: {Name:mk0ad6b95f1bb51c6112b561369aaae11c537b54 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0503 15:15:49.477328    9665 out.go:177] * Starting "running-upgrade-916000" primary control-plane node in "running-upgrade-916000" cluster
	I0503 15:15:49.482277    9665 preload.go:132] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0503 15:15:49.482293    9665 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18793-7269/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4
	I0503 15:15:49.482303    9665 cache.go:56] Caching tarball of preloaded images
	I0503 15:15:49.482365    9665 preload.go:173] Found /Users/jenkins/minikube-integration/18793-7269/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0503 15:15:49.482370    9665 cache.go:59] Finished verifying existence of preloaded tar for v1.24.1 on docker
	I0503 15:15:49.482423    9665 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18793-7269/.minikube/profiles/running-upgrade-916000/config.json ...
	I0503 15:15:49.482748    9665 start.go:360] acquireMachinesLock for running-upgrade-916000: {Name:mk75a6a65f97ac1ce21c567594b284ec6b0a9ff6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0503 15:15:49.482781    9665 start.go:364] duration metric: took 27.292µs to acquireMachinesLock for "running-upgrade-916000"
	I0503 15:15:49.482789    9665 start.go:96] Skipping create...Using existing machine configuration
	I0503 15:15:49.482794    9665 fix.go:54] fixHost starting: 
	I0503 15:15:49.483515    9665 fix.go:112] recreateIfNeeded on running-upgrade-916000: state=Running err=<nil>
	W0503 15:15:49.483523    9665 fix.go:138] unexpected machine state, will restart: <nil>
	I0503 15:15:49.492233    9665 out.go:177] * Updating the running qemu2 "running-upgrade-916000" VM ...
	I0503 15:15:49.496280    9665 machine.go:94] provisionDockerMachine start ...
	I0503 15:15:49.496324    9665 main.go:141] libmachine: Using SSH client type: native
	I0503 15:15:49.496452    9665 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100b5dc80] 0x100b604e0 <nil>  [] 0s} localhost 51156 <nil> <nil>}
	I0503 15:15:49.496456    9665 main.go:141] libmachine: About to run SSH command:
	hostname
	I0503 15:15:49.545993    9665 main.go:141] libmachine: SSH cmd err, output: <nil>: running-upgrade-916000
	
	I0503 15:15:49.546005    9665 buildroot.go:166] provisioning hostname "running-upgrade-916000"
	I0503 15:15:49.546038    9665 main.go:141] libmachine: Using SSH client type: native
	I0503 15:15:49.546143    9665 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100b5dc80] 0x100b604e0 <nil>  [] 0s} localhost 51156 <nil> <nil>}
	I0503 15:15:49.546148    9665 main.go:141] libmachine: About to run SSH command:
	sudo hostname running-upgrade-916000 && echo "running-upgrade-916000" | sudo tee /etc/hostname
	I0503 15:15:49.599472    9665 main.go:141] libmachine: SSH cmd err, output: <nil>: running-upgrade-916000
	
	I0503 15:15:49.599524    9665 main.go:141] libmachine: Using SSH client type: native
	I0503 15:15:49.599642    9665 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100b5dc80] 0x100b604e0 <nil>  [] 0s} localhost 51156 <nil> <nil>}
	I0503 15:15:49.599650    9665 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\srunning-upgrade-916000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 running-upgrade-916000/g' /etc/hosts;
				else 
					echo '127.0.1.1 running-upgrade-916000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0503 15:15:49.649119    9665 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0503 15:15:49.649128    9665 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/18793-7269/.minikube CaCertPath:/Users/jenkins/minikube-integration/18793-7269/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/18793-7269/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/18793-7269/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/18793-7269/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/18793-7269/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/18793-7269/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/18793-7269/.minikube}
	I0503 15:15:49.649134    9665 buildroot.go:174] setting up certificates
	I0503 15:15:49.649147    9665 provision.go:84] configureAuth start
	I0503 15:15:49.649153    9665 provision.go:143] copyHostCerts
	I0503 15:15:49.649221    9665 exec_runner.go:144] found /Users/jenkins/minikube-integration/18793-7269/.minikube/ca.pem, removing ...
	I0503 15:15:49.649227    9665 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18793-7269/.minikube/ca.pem
	I0503 15:15:49.649362    9665 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18793-7269/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/18793-7269/.minikube/ca.pem (1078 bytes)
	I0503 15:15:49.649545    9665 exec_runner.go:144] found /Users/jenkins/minikube-integration/18793-7269/.minikube/cert.pem, removing ...
	I0503 15:15:49.649549    9665 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18793-7269/.minikube/cert.pem
	I0503 15:15:49.649599    9665 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18793-7269/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/18793-7269/.minikube/cert.pem (1123 bytes)
	I0503 15:15:49.649690    9665 exec_runner.go:144] found /Users/jenkins/minikube-integration/18793-7269/.minikube/key.pem, removing ...
	I0503 15:15:49.649693    9665 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18793-7269/.minikube/key.pem
	I0503 15:15:49.649737    9665 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18793-7269/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/18793-7269/.minikube/key.pem (1675 bytes)
	I0503 15:15:49.649826    9665 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/18793-7269/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/18793-7269/.minikube/certs/ca-key.pem org=jenkins.running-upgrade-916000 san=[127.0.0.1 localhost minikube running-upgrade-916000]
	I0503 15:15:49.786942    9665 provision.go:177] copyRemoteCerts
	I0503 15:15:49.786980    9665 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0503 15:15:49.786988    9665 sshutil.go:53] new ssh client: &{IP:localhost Port:51156 SSHKeyPath:/Users/jenkins/minikube-integration/18793-7269/.minikube/machines/running-upgrade-916000/id_rsa Username:docker}
	I0503 15:15:49.814733    9665 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18793-7269/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0503 15:15:49.821320    9665 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0503 15:15:49.827635    9665 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0503 15:15:49.834743    9665 provision.go:87] duration metric: took 185.593625ms to configureAuth
	I0503 15:15:49.834753    9665 buildroot.go:189] setting minikube options for container-runtime
	I0503 15:15:49.834859    9665 config.go:182] Loaded profile config "running-upgrade-916000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0503 15:15:49.834888    9665 main.go:141] libmachine: Using SSH client type: native
	I0503 15:15:49.834981    9665 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100b5dc80] 0x100b604e0 <nil>  [] 0s} localhost 51156 <nil> <nil>}
	I0503 15:15:49.834985    9665 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0503 15:15:49.883792    9665 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0503 15:15:49.883801    9665 buildroot.go:70] root file system type: tmpfs
	I0503 15:15:49.883849    9665 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0503 15:15:49.883893    9665 main.go:141] libmachine: Using SSH client type: native
	I0503 15:15:49.883995    9665 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100b5dc80] 0x100b604e0 <nil>  [] 0s} localhost 51156 <nil> <nil>}
	I0503 15:15:49.884030    9665 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0503 15:15:49.936035    9665 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0503 15:15:49.936081    9665 main.go:141] libmachine: Using SSH client type: native
	I0503 15:15:49.936184    9665 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100b5dc80] 0x100b604e0 <nil>  [] 0s} localhost 51156 <nil> <nil>}
	I0503 15:15:49.936192    9665 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0503 15:15:49.986643    9665 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0503 15:15:49.986656    9665 machine.go:97] duration metric: took 490.382291ms to provisionDockerMachine
	I0503 15:15:49.986662    9665 start.go:293] postStartSetup for "running-upgrade-916000" (driver="qemu2")
	I0503 15:15:49.986668    9665 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0503 15:15:49.986723    9665 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0503 15:15:49.986731    9665 sshutil.go:53] new ssh client: &{IP:localhost Port:51156 SSHKeyPath:/Users/jenkins/minikube-integration/18793-7269/.minikube/machines/running-upgrade-916000/id_rsa Username:docker}
	I0503 15:15:50.015298    9665 ssh_runner.go:195] Run: cat /etc/os-release
	I0503 15:15:50.016627    9665 info.go:137] Remote host: Buildroot 2021.02.12
	I0503 15:15:50.016633    9665 filesync.go:126] Scanning /Users/jenkins/minikube-integration/18793-7269/.minikube/addons for local assets ...
	I0503 15:15:50.016711    9665 filesync.go:126] Scanning /Users/jenkins/minikube-integration/18793-7269/.minikube/files for local assets ...
	I0503 15:15:50.016823    9665 filesync.go:149] local asset: /Users/jenkins/minikube-integration/18793-7269/.minikube/files/etc/ssl/certs/77682.pem -> 77682.pem in /etc/ssl/certs
	I0503 15:15:50.016941    9665 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0503 15:15:50.019948    9665 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18793-7269/.minikube/files/etc/ssl/certs/77682.pem --> /etc/ssl/certs/77682.pem (1708 bytes)
	I0503 15:15:50.027046    9665 start.go:296] duration metric: took 40.378625ms for postStartSetup
	I0503 15:15:50.027061    9665 fix.go:56] duration metric: took 544.279542ms for fixHost
	I0503 15:15:50.027103    9665 main.go:141] libmachine: Using SSH client type: native
	I0503 15:15:50.027228    9665 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100b5dc80] 0x100b604e0 <nil>  [] 0s} localhost 51156 <nil> <nil>}
	I0503 15:15:50.027232    9665 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0503 15:15:50.075379    9665 main.go:141] libmachine: SSH cmd err, output: <nil>: 1714774549.979660263
	
	I0503 15:15:50.075389    9665 fix.go:216] guest clock: 1714774549.979660263
	I0503 15:15:50.075393    9665 fix.go:229] Guest: 2024-05-03 15:15:49.979660263 -0700 PDT Remote: 2024-05-03 15:15:50.027063 -0700 PDT m=+0.651739626 (delta=-47.402737ms)
	I0503 15:15:50.075404    9665 fix.go:200] guest clock delta is within tolerance: -47.402737ms
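The guest clock check above runs date +%s.%N over SSH, parses the fractional epoch timestamp, and compares it to the host clock; only a delta outside tolerance would force a resync. A rough Go sketch (the 2-second tolerance is an assumption, and float parsing is only approximate at nanosecond scale, which is fine for a tolerance check):

package main

import (
	"fmt"
	"math"
	"strconv"
	"time"
)

func main() {
	// stdout of `date +%s.%N` on the guest, taken from the log above
	guestOut := "1714774549.979660263"
	secs, err := strconv.ParseFloat(guestOut, 64)
	if err != nil {
		panic(err)
	}
	guest := time.Unix(0, int64(secs*float64(time.Second)))
	delta := guest.Sub(time.Now())
	if math.Abs(delta.Seconds()) < 2.0 { // assumed tolerance
		fmt.Printf("guest clock delta %v is within tolerance\n", delta)
	} else {
		fmt.Printf("guest clock delta %v exceeds tolerance, would resync\n", delta)
	}
}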
	I0503 15:15:50.075423    9665 start.go:83] releasing machines lock for "running-upgrade-916000", held for 592.649584ms
	I0503 15:15:50.075494    9665 ssh_runner.go:195] Run: cat /version.json
	I0503 15:15:50.075505    9665 sshutil.go:53] new ssh client: &{IP:localhost Port:51156 SSHKeyPath:/Users/jenkins/minikube-integration/18793-7269/.minikube/machines/running-upgrade-916000/id_rsa Username:docker}
	I0503 15:15:50.075494    9665 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0503 15:15:50.075527    9665 sshutil.go:53] new ssh client: &{IP:localhost Port:51156 SSHKeyPath:/Users/jenkins/minikube-integration/18793-7269/.minikube/machines/running-upgrade-916000/id_rsa Username:docker}
	W0503 15:15:50.076133    9665 sshutil.go:64] dial failure (will retry): dial tcp [::1]:51156: connect: connection refused
	I0503 15:15:50.076158    9665 retry.go:31] will retry after 141.174191ms: dial tcp [::1]:51156: connect: connection refused
	W0503 15:15:50.099792    9665 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0503 15:15:50.099837    9665 ssh_runner.go:195] Run: systemctl --version
	I0503 15:15:50.101640    9665 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0503 15:15:50.103319    9665 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0503 15:15:50.103348    9665 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0503 15:15:50.106295    9665 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0503 15:15:50.110601    9665 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0503 15:15:50.110608    9665 start.go:494] detecting cgroup driver to use...
	I0503 15:15:50.110710    9665 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0503 15:15:50.115521    9665 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.7"|' /etc/containerd/config.toml"
	I0503 15:15:50.119129    9665 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0503 15:15:50.122151    9665 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0503 15:15:50.122174    9665 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0503 15:15:50.125042    9665 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0503 15:15:50.128204    9665 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0503 15:15:50.131801    9665 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0503 15:15:50.135359    9665 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0503 15:15:50.138425    9665 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0503 15:15:50.141163    9665 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0503 15:15:50.144234    9665 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0503 15:15:50.147469    9665 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0503 15:15:50.150448    9665 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0503 15:15:50.152980    9665 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0503 15:15:50.248000    9665 ssh_runner.go:195] Run: sudo systemctl restart containerd
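The run of sed commands above edits /etc/containerd/config.toml in place: pinning the sandbox (pause) image, switching SystemdCgroup to false so containerd matches the cgroupfs driver, normalizing the runc runtime name, and setting the CNI conf_dir, followed by a daemon-reload and containerd restart. As one illustrative example, a Go equivalent of the SystemdCgroup edit (a sketch, not minikube's code):

package main

import (
	"os"
	"regexp"
)

func main() {
	path := "/etc/containerd/config.toml"
	data, err := os.ReadFile(path)
	if err != nil {
		panic(err)
	}
	// same effect as: sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g'
	re := regexp.MustCompile(`(?m)^([ \t]*)SystemdCgroup = .*$`)
	data = re.ReplaceAll(data, []byte("${1}SystemdCgroup = false"))
	if err := os.WriteFile(path, data, 0o644); err != nil {
		panic(err)
	}
}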
	I0503 15:15:50.258023    9665 start.go:494] detecting cgroup driver to use...
	I0503 15:15:50.258101    9665 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0503 15:15:50.264391    9665 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0503 15:15:50.424862    9665 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0503 15:15:50.448827    9665 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0503 15:15:50.454150    9665 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0503 15:15:50.458771    9665 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0503 15:15:50.464114    9665 ssh_runner.go:195] Run: which cri-dockerd
	I0503 15:15:50.465326    9665 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0503 15:15:50.468203    9665 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0503 15:15:50.473101    9665 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0503 15:15:50.566067    9665 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0503 15:15:50.660119    9665 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0503 15:15:50.660189    9665 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0503 15:15:50.665115    9665 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0503 15:15:50.753448    9665 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0503 15:15:53.487678    9665 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.734274875s)
	I0503 15:15:53.487746    9665 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0503 15:15:53.492634    9665 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0503 15:15:53.499415    9665 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0503 15:15:53.504898    9665 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0503 15:15:53.588471    9665 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0503 15:15:53.666503    9665 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0503 15:15:53.744208    9665 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0503 15:15:53.750406    9665 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0503 15:15:53.754497    9665 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0503 15:15:53.844865    9665 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0503 15:15:53.884720    9665 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0503 15:15:53.884795    9665 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0503 15:15:53.887040    9665 start.go:562] Will wait 60s for crictl version
	I0503 15:15:53.887094    9665 ssh_runner.go:195] Run: which crictl
	I0503 15:15:53.888442    9665 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0503 15:15:53.900490    9665 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.16
	RuntimeApiVersion:  1.41.0
	I0503 15:15:53.900566    9665 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0503 15:15:53.913173    9665 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0503 15:15:53.934835    9665 out.go:204] * Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	I0503 15:15:53.934901    9665 ssh_runner.go:195] Run: grep 10.0.2.2	host.minikube.internal$ /etc/hosts
	I0503 15:15:53.936394    9665 kubeadm.go:877] updating cluster {Name:running-upgrade-916000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:51188 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-916000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s} ...
	I0503 15:15:53.936434    9665 preload.go:132] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0503 15:15:53.936472    9665 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0503 15:15:53.947412    9665 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0503 15:15:53.947421    9665 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0503 15:15:53.947465    9665 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0503 15:15:53.950843    9665 ssh_runner.go:195] Run: which lz4
	I0503 15:15:53.952118    9665 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0503 15:15:53.953377    9665 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0503 15:15:53.953386    9665 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18793-7269/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (359514331 bytes)
	I0503 15:15:54.635345    9665 docker.go:649] duration metric: took 683.273166ms to copy over tarball
	I0503 15:15:54.635399    9665 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0503 15:15:55.795398    9665 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.160013667s)
	I0503 15:15:55.795422    9665 ssh_runner.go:146] rm: /preloaded.tar.lz4
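This is the preload fast path: an existence check on the guest decides whether the cached image tarball must be copied over, after which it is unpacked into /var (restoring the Docker image store) with lz4 and deleted. A condensed Go sketch of the flow, with an assumed guest address standing in for the real SSH runner:

package main

import (
	"os"
	"os/exec"
)

// run executes a command, forwarding its output; a stand-in for
// minikube's SSH runner.
func run(name string, args ...string) error {
	cmd := exec.Command(name, args...)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	return cmd.Run()
}

func main() {
	// existence check: status 1 here means the tarball must be copied over
	if err := run("stat", "-c", "%s %y", "/preloaded.tar.lz4"); err != nil {
		// assumed guest address; the real flow scp's the cached tarball here
		_ = run("scp", "preloaded-images-k8s.tar.lz4", "docker@guest:/preloaded.tar.lz4")
	}
	// unpack into /var, preserving xattrs, then remove the tarball
	_ = run("sudo", "tar", "--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", "/var", "-xf", "/preloaded.tar.lz4")
	_ = run("rm", "-f", "/preloaded.tar.lz4")
}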
	I0503 15:15:55.811607    9665 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0503 15:15:55.815158    9665 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2523 bytes)
	I0503 15:15:55.820322    9665 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0503 15:15:55.905328    9665 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0503 15:15:57.128191    9665 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.2228745s)
	I0503 15:15:57.128289    9665 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0503 15:15:57.146296    9665 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	<none>:<none>
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0503 15:15:57.146306    9665 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0503 15:15:57.146312    9665 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.1 registry.k8s.io/kube-controller-manager:v1.24.1 registry.k8s.io/kube-scheduler:v1.24.1 registry.k8s.io/kube-proxy:v1.24.1 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0503 15:15:57.153155    9665 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0503 15:15:57.153224    9665 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0503 15:15:57.153262    9665 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0503 15:15:57.153298    9665 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.24.1
	I0503 15:15:57.153430    9665 image.go:134] retrieving image: registry.k8s.io/pause:3.7
	I0503 15:15:57.153483    9665 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.24.1
	I0503 15:15:57.153541    9665 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.24.1
	I0503 15:15:57.153785    9665 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0503 15:15:57.163866    9665 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0503 15:15:57.163963    9665 image.go:177] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0503 15:15:57.164027    9665 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0503 15:15:57.164085    9665 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0503 15:15:57.164801    9665 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.1
	I0503 15:15:57.164841    9665 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.1
	I0503 15:15:57.164864    9665 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.1
	I0503 15:15:57.164953    9665 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	W0503 15:15:57.999271    9665 image.go:265] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0503 15:15:57.999918    9665 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0503 15:15:58.045388    9665 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I0503 15:15:58.045460    9665 docker.go:337] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0503 15:15:58.045553    9665 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0503 15:15:58.147707    9665 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.1
	I0503 15:15:58.153232    9665 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.1
	I0503 15:15:58.165633    9665 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.1
	I0503 15:15:58.218770    9665 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.1
	I0503 15:15:58.255954    9665 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	W0503 15:15:58.258716    9665 image.go:265] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0503 15:15:58.258821    9665 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0503 15:15:58.275528    9665 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I0503 15:15:59.085394    9665 ssh_runner.go:235] Completed: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.039796625s)
	I0503 15:15:59.085446    9665 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18793-7269/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0503 15:15:59.085513    9665 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.1" does not exist at hash "f61bbe9259d7caa580deb6c8e4bfd1780c7b5887efe3aa3adc7cc74f68a27c1b" in container runtime
	I0503 15:15:59.085568    9665 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0503 15:15:59.085708    9665 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.24.1
	I0503 15:15:59.085849    9665 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0503 15:15:59.086003    9665 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.1" does not exist at hash "7c5896a75862a8bc252122185a929cec1393db2c525f6440137d4fbf46bbf6f9" in container runtime
	I0503 15:15:59.086031    9665 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.24.1
	I0503 15:15:59.086089    9665 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.24.1
	I0503 15:15:59.086331    9665 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.1" needs transfer: "registry.k8s.io/kube-proxy:v1.24.1" does not exist at hash "fcbd620bbac080b658602c597709b377cb2b3fec134a097a27f94cba9b2ed2fa" in container runtime
	I0503 15:15:59.086374    9665 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.24.1
	I0503 15:15:59.086420    9665 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.1" does not exist at hash "000c19baf6bba51ff7ae5449f4c7a16d9190cef0263f58070dbf62cea9c4982f" in container runtime
	I0503 15:15:59.086448    9665 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.24.1
	I0503 15:15:59.086458    9665 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.24.1
	I0503 15:15:59.086505    9665 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.24.1
	I0503 15:15:59.086593    9665 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "a9a710bb96df080e6b9c720eb85dc5b832ff84abf77263548d74fedec6466a5a" in container runtime
	I0503 15:15:59.086616    9665 docker.go:337] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0503 15:15:59.086636    9665 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "6af7f860a8197bfa3fdb7dec2061aa33870253e87a1e91c492d55b8a4fd38d14" in container runtime
	I0503 15:15:59.086668    9665 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.5.3-0
	I0503 15:15:59.086669    9665 docker.go:337] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0503 15:15:59.086749    9665 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550" in container runtime
	I0503 15:15:59.086772    9665 docker.go:337] Removing image: registry.k8s.io/pause:3.7
	I0503 15:15:59.086826    9665 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.7
	I0503 15:15:59.086751    9665 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0503 15:15:59.159169    9665 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18793-7269/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1
	I0503 15:15:59.159181    9665 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18793-7269/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1
	I0503 15:15:59.159250    9665 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I0503 15:15:59.159268    9665 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18793-7269/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I0503 15:15:59.162049    9665 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18793-7269/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1
	I0503 15:15:59.169138    9665 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18793-7269/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0503 15:15:59.169157    9665 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18793-7269/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I0503 15:15:59.169140    9665 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18793-7269/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1
	I0503 15:15:59.169187    9665 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18793-7269/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0503 15:15:59.169248    9665 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I0503 15:15:59.169280    9665 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I0503 15:15:59.173516    9665 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.7: stat -c "%s %y" /var/lib/minikube/images/pause_3.7: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.7': No such file or directory
	I0503 15:15:59.173538    9665 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18793-7269/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 --> /var/lib/minikube/images/pause_3.7 (268288 bytes)
	I0503 15:15:59.178585    9665 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.8.6: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.8.6': No such file or directory
	I0503 15:15:59.178613    9665 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18793-7269/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 --> /var/lib/minikube/images/coredns_v1.8.6 (12318720 bytes)
	I0503 15:15:59.189749    9665 docker.go:304] Loading image: /var/lib/minikube/images/pause_3.7
	I0503 15:15:59.189764    9665 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.7 | docker load"
	I0503 15:15:59.254753    9665 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/18793-7269/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 from cache
	I0503 15:15:59.254774    9665 docker.go:304] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0503 15:15:59.254789    9665 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load"
	I0503 15:15:59.497579    9665 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/18793-7269/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0503 15:15:59.497600    9665 docker.go:304] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0503 15:15:59.497607    9665 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.8.6 | docker load"
	I0503 15:15:59.535100    9665 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/18793-7269/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I0503 15:15:59.535135    9665 cache_images.go:92] duration metric: took 2.388871875s to LoadCachedImages
	W0503 15:15:59.535180    9665 out.go:239] X Unable to load cached images: loading cached images: stat /Users/jenkins/minikube-integration/18793-7269/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1: no such file or directory
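The pattern in this stretch: for each required image, docker image inspect --format {{.Id}} is compared against the ID recorded in the cache; a mismatch or missing tag marks the image as needing transfer, the stale tag is removed, and the cached tarball is piped into docker load. A minimal Go sketch with an illustrative, truncated image ID:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// needsTransfer reports whether the runtime lacks the image or holds a
// different ID than the cache expects.
func needsTransfer(image, wantID string) bool {
	out, err := exec.Command("docker", "image", "inspect",
		"--format", "{{.Id}}", image).Output()
	return err != nil || strings.TrimSpace(string(out)) != wantID
}

func main() {
	image := "registry.k8s.io/pause:3.7"
	wantID := "sha256:e5a475a03805..." // truncated, illustrative only
	if needsTransfer(image, wantID) {
		_ = exec.Command("docker", "rmi", image).Run() // drop the stale tag
		// equivalent of: sudo cat /var/lib/minikube/images/pause_3.7 | docker load
		err := exec.Command("/bin/bash", "-c",
			"sudo cat /var/lib/minikube/images/pause_3.7 | docker load").Run()
		fmt.Println("load result:", err)
	}
}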
	I0503 15:15:59.535186    9665 kubeadm.go:928] updating node { 10.0.2.15 8443 v1.24.1 docker true true} ...
	I0503 15:15:59.535247    9665 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=running-upgrade-916000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.0.2.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-916000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0503 15:15:59.535315    9665 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0503 15:15:59.548669    9665 cni.go:84] Creating CNI manager for ""
	I0503 15:15:59.548680    9665 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0503 15:15:59.548684    9665 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0503 15:15:59.548695    9665 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.0.2.15 APIServerPort:8443 KubernetesVersion:v1.24.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:running-upgrade-916000 NodeName:running-upgrade-916000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.0.2.15"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.0.2.15 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0503 15:15:59.548751    9665 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.0.2.15
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "running-upgrade-916000"
	  kubeletExtraArgs:
	    node-ip: 10.0.2.15
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.0.2.15"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0503 15:15:59.548794    9665 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.1
	I0503 15:15:59.551822    9665 binaries.go:44] Found k8s binaries, skipping transfer
	I0503 15:15:59.551856    9665 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0503 15:15:59.555153    9665 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I0503 15:15:59.560059    9665 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0503 15:15:59.565248    9665 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2096 bytes)
	I0503 15:15:59.570433    9665 ssh_runner.go:195] Run: grep 10.0.2.15	control-plane.minikube.internal$ /etc/hosts
	I0503 15:15:59.571711    9665 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0503 15:15:59.648332    9665 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0503 15:15:59.653569    9665 certs.go:68] Setting up /Users/jenkins/minikube-integration/18793-7269/.minikube/profiles/running-upgrade-916000 for IP: 10.0.2.15
	I0503 15:15:59.653575    9665 certs.go:194] generating shared ca certs ...
	I0503 15:15:59.653583    9665 certs.go:226] acquiring lock for ca certs: {Name:mkd5f7db20634f49dfd68d117c1845d0b32f87c3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0503 15:15:59.653828    9665 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/18793-7269/.minikube/ca.key
	I0503 15:15:59.653874    9665 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/18793-7269/.minikube/proxy-client-ca.key
	I0503 15:15:59.653879    9665 certs.go:256] generating profile certs ...
	I0503 15:15:59.653970    9665 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/18793-7269/.minikube/profiles/running-upgrade-916000/client.key
	I0503 15:15:59.653984    9665 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/18793-7269/.minikube/profiles/running-upgrade-916000/apiserver.key.1b0d04df
	I0503 15:15:59.653996    9665 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/18793-7269/.minikube/profiles/running-upgrade-916000/apiserver.crt.1b0d04df with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.0.2.15]
	I0503 15:15:59.727491    9665 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/18793-7269/.minikube/profiles/running-upgrade-916000/apiserver.crt.1b0d04df ...
	I0503 15:15:59.727497    9665 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18793-7269/.minikube/profiles/running-upgrade-916000/apiserver.crt.1b0d04df: {Name:mkc25e360ad6e0febe4d38df0bf0472b54162b85 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0503 15:15:59.727759    9665 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/18793-7269/.minikube/profiles/running-upgrade-916000/apiserver.key.1b0d04df ...
	I0503 15:15:59.727764    9665 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18793-7269/.minikube/profiles/running-upgrade-916000/apiserver.key.1b0d04df: {Name:mkdce7947736272439337895fc266a2aa61eaf3d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0503 15:15:59.727904    9665 certs.go:381] copying /Users/jenkins/minikube-integration/18793-7269/.minikube/profiles/running-upgrade-916000/apiserver.crt.1b0d04df -> /Users/jenkins/minikube-integration/18793-7269/.minikube/profiles/running-upgrade-916000/apiserver.crt
	I0503 15:15:59.728042    9665 certs.go:385] copying /Users/jenkins/minikube-integration/18793-7269/.minikube/profiles/running-upgrade-916000/apiserver.key.1b0d04df -> /Users/jenkins/minikube-integration/18793-7269/.minikube/profiles/running-upgrade-916000/apiserver.key
	I0503 15:15:59.728187    9665 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/18793-7269/.minikube/profiles/running-upgrade-916000/proxy-client.key
	I0503 15:15:59.728307    9665 certs.go:484] found cert: /Users/jenkins/minikube-integration/18793-7269/.minikube/certs/7768.pem (1338 bytes)
	W0503 15:15:59.728336    9665 certs.go:480] ignoring /Users/jenkins/minikube-integration/18793-7269/.minikube/certs/7768_empty.pem, impossibly tiny 0 bytes
	I0503 15:15:59.728341    9665 certs.go:484] found cert: /Users/jenkins/minikube-integration/18793-7269/.minikube/certs/ca-key.pem (1675 bytes)
	I0503 15:15:59.728367    9665 certs.go:484] found cert: /Users/jenkins/minikube-integration/18793-7269/.minikube/certs/ca.pem (1078 bytes)
	I0503 15:15:59.728394    9665 certs.go:484] found cert: /Users/jenkins/minikube-integration/18793-7269/.minikube/certs/cert.pem (1123 bytes)
	I0503 15:15:59.728418    9665 certs.go:484] found cert: /Users/jenkins/minikube-integration/18793-7269/.minikube/certs/key.pem (1675 bytes)
	I0503 15:15:59.728468    9665 certs.go:484] found cert: /Users/jenkins/minikube-integration/18793-7269/.minikube/files/etc/ssl/certs/77682.pem (1708 bytes)
	I0503 15:15:59.728807    9665 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18793-7269/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0503 15:15:59.736545    9665 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18793-7269/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0503 15:15:59.743902    9665 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18793-7269/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0503 15:15:59.751181    9665 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18793-7269/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0503 15:15:59.758177    9665 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18793-7269/.minikube/profiles/running-upgrade-916000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0503 15:15:59.765248    9665 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18793-7269/.minikube/profiles/running-upgrade-916000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0503 15:15:59.772583    9665 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18793-7269/.minikube/profiles/running-upgrade-916000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0503 15:15:59.779862    9665 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18793-7269/.minikube/profiles/running-upgrade-916000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0503 15:15:59.787334    9665 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18793-7269/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0503 15:15:59.794334    9665 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18793-7269/.minikube/certs/7768.pem --> /usr/share/ca-certificates/7768.pem (1338 bytes)
	I0503 15:15:59.801176    9665 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18793-7269/.minikube/files/etc/ssl/certs/77682.pem --> /usr/share/ca-certificates/77682.pem (1708 bytes)
	I0503 15:15:59.807955    9665 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0503 15:15:59.813297    9665 ssh_runner.go:195] Run: openssl version
	I0503 15:15:59.815138    9665 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7768.pem && ln -fs /usr/share/ca-certificates/7768.pem /etc/ssl/certs/7768.pem"
	I0503 15:15:59.818289    9665 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7768.pem
	I0503 15:15:59.819747    9665 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 May  3 22:03 /usr/share/ca-certificates/7768.pem
	I0503 15:15:59.819762    9665 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7768.pem
	I0503 15:15:59.821744    9665 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/7768.pem /etc/ssl/certs/51391683.0"
	I0503 15:15:59.824323    9665 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/77682.pem && ln -fs /usr/share/ca-certificates/77682.pem /etc/ssl/certs/77682.pem"
	I0503 15:15:59.827871    9665 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/77682.pem
	I0503 15:15:59.829656    9665 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 May  3 22:03 /usr/share/ca-certificates/77682.pem
	I0503 15:15:59.829686    9665 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/77682.pem
	I0503 15:15:59.831646    9665 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/77682.pem /etc/ssl/certs/3ec20f2e.0"
	I0503 15:15:59.834902    9665 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0503 15:15:59.837798    9665 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0503 15:15:59.839334    9665 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 May  3 22:15 /usr/share/ca-certificates/minikubeCA.pem
	I0503 15:15:59.839351    9665 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0503 15:15:59.841175    9665 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
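Each certificate installed above gets a companion symlink named after its OpenSSL subject hash (for example b5213941.0 for minikubeCA.pem), which is how OpenSSL-based clients locate CAs in /etc/ssl/certs. A small Go sketch of the hash-and-link step, mirroring the commands above:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	pem := "/usr/share/ca-certificates/minikubeCA.pem"
	// openssl x509 -hash -noout -in <pem> prints the subject hash
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
	if err != nil {
		panic(err)
	}
	hash := strings.TrimSpace(string(out)) // e.g. "b5213941"
	link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
	_ = os.Remove(link) // ln -fs semantics: replace any existing link
	if err := os.Symlink(pem, link); err != nil {
		panic(err)
	}
}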
	I0503 15:15:59.844170    9665 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0503 15:15:59.845678    9665 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0503 15:15:59.847627    9665 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0503 15:15:59.849513    9665 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0503 15:15:59.851233    9665 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0503 15:15:59.853187    9665 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0503 15:15:59.854952    9665 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0503 15:15:59.856811    9665 kubeadm.go:391] StartCluster: {Name:running-upgrade-916000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:51188 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-916000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0503 15:15:59.856884    9665 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0503 15:15:59.868269    9665 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0503 15:15:59.871972    9665 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0503 15:15:59.871980    9665 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0503 15:15:59.871983    9665 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0503 15:15:59.872003    9665 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0503 15:15:59.875073    9665 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0503 15:15:59.875108    9665 kubeconfig.go:47] verify endpoint returned: get endpoint: "running-upgrade-916000" does not appear in /Users/jenkins/minikube-integration/18793-7269/kubeconfig
	I0503 15:15:59.875122    9665 kubeconfig.go:62] /Users/jenkins/minikube-integration/18793-7269/kubeconfig needs updating (will repair): [kubeconfig missing "running-upgrade-916000" cluster setting kubeconfig missing "running-upgrade-916000" context setting]
	I0503 15:15:59.875329    9665 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18793-7269/kubeconfig: {Name:mke212dafcd3f736eb33656fd60033aeff2dfcbc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0503 15:15:59.876237    9665 kapi.go:59] client config for running-upgrade-916000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/18793-7269/.minikube/profiles/running-upgrade-916000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/18793-7269/.minikube/profiles/running-upgrade-916000/client.key", CAFile:"/Users/jenkins/minikube-integration/18793-7269/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x101eefcb0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0503 15:15:59.877062    9665 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0503 15:15:59.880124    9665 kubeadm.go:634] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -11,7 +11,7 @@
	       - signing
	       - authentication
	 nodeRegistration:
	-  criSocket: /var/run/cri-dockerd.sock
	+  criSocket: unix:///var/run/cri-dockerd.sock
	   name: "running-upgrade-916000"
	   kubeletExtraArgs:
	     node-ip: 10.0.2.15
	@@ -49,7 +49,9 @@
	 authentication:
	   x509:
	     clientCAFile: /var/lib/minikube/certs/ca.crt
	-cgroupDriver: systemd
	+cgroupDriver: cgroupfs
	+hairpinMode: hairpin-veth
	+runtimeRequestTimeout: 15m
	 clusterDomain: "cluster.local"
	 # disable disk resource management by default
	 imageGCHighThresholdPercent: 100
	
	-- /stdout --
	I0503 15:15:59.880130    9665 kubeadm.go:1154] stopping kube-system containers ...
	I0503 15:15:59.880165    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0503 15:15:59.892596    9665 docker.go:483] Stopping containers: [7cef87537b66 a9733a3e0a7b b4722a61c7cd dfd238b8080d c33b5f027877 f90094320501 e60f4d155911 70922078f849 52384ec84857 a56a770c48ce c58ec9465be1 85a3005ff36d d81f788f043e a0e498b417fc]
	I0503 15:15:59.892675    9665 ssh_runner.go:195] Run: docker stop 7cef87537b66 a9733a3e0a7b b4722a61c7cd dfd238b8080d c33b5f027877 f90094320501 e60f4d155911 70922078f849 52384ec84857 a56a770c48ce c58ec9465be1 85a3005ff36d d81f788f043e a0e498b417fc
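Stopping the kube-system containers is a two-step docker operation: list the IDs whose names match k8s_*_(kube-system)_, then stop them all in one invocation, as the two commands above show. A compact Go sketch of the same sequence:

package main

import (
	"os/exec"
	"strings"
)

func main() {
	// list kube-system pod containers by name pattern
	out, err := exec.Command("docker", "ps", "-a",
		"--filter", "name=k8s_.*_(kube-system)_", "--format", "{{.ID}}").Output()
	if err != nil {
		panic(err)
	}
	ids := strings.Fields(string(out))
	if len(ids) == 0 {
		return // nothing to stop
	}
	// stop them all in a single docker invocation
	if err := exec.Command("docker", append([]string{"stop"}, ids...)...).Run(); err != nil {
		panic(err)
	}
}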
	I0503 15:15:59.904071    9665 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0503 15:16:00.005378    9665 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0503 15:16:00.009878    9665 kubeadm.go:156] found existing configuration files:
	-rw------- 1 root root 5643 May  3 22:15 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5653 May  3 22:15 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2027 May  3 22:15 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5597 May  3 22:15 /etc/kubernetes/scheduler.conf
	
	I0503 15:16:00.009922    9665 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51188 /etc/kubernetes/admin.conf
	I0503 15:16:00.013469    9665 kubeadm.go:162] "https://control-plane.minikube.internal:51188" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:51188 /etc/kubernetes/admin.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0503 15:16:00.013501    9665 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0503 15:16:00.016839    9665 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51188 /etc/kubernetes/kubelet.conf
	I0503 15:16:00.020097    9665 kubeadm.go:162] "https://control-plane.minikube.internal:51188" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:51188 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0503 15:16:00.020122    9665 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0503 15:16:00.023758    9665 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51188 /etc/kubernetes/controller-manager.conf
	I0503 15:16:00.027200    9665 kubeadm.go:162] "https://control-plane.minikube.internal:51188" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:51188 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0503 15:16:00.027220    9665 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0503 15:16:00.030154    9665 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51188 /etc/kubernetes/scheduler.conf
	I0503 15:16:00.032798    9665 kubeadm.go:162] "https://control-plane.minikube.internal:51188" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:51188 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0503 15:16:00.032819    9665 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0503 15:16:00.035848    9665 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0503 15:16:00.038690    9665 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0503 15:16:00.079083    9665 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0503 15:16:00.607805    9665 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0503 15:16:00.827884    9665 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0503 15:16:00.876080    9665 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
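Because existing configuration files were found, the restart path replays kubeadm init phase by phase (certs, kubeconfig, kubelet-start, control-plane, etcd) against the refreshed kubeadm.yaml instead of re-running a full kubeadm init. A sketch of that sequence in Go (error handling simplified):

package main

import (
	"os"
	"os/exec"
)

func main() {
	phases := [][]string{
		{"init", "phase", "certs", "all"},
		{"init", "phase", "kubeconfig", "all"},
		{"init", "phase", "kubelet-start"},
		{"init", "phase", "control-plane", "all"},
		{"init", "phase", "etcd", "local"},
	}
	for _, phase := range phases {
		args := append(phase, "--config", "/var/tmp/minikube/kubeadm.yaml")
		cmd := exec.Command("kubeadm", args...)
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		if err := cmd.Run(); err != nil {
			panic(err) // a failed phase aborts the restart
		}
	}
}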
	I0503 15:16:00.899070    9665 api_server.go:52] waiting for apiserver process to appear ...
	I0503 15:16:00.899160    9665 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0503 15:16:01.401358    9665 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0503 15:16:01.901252    9665 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0503 15:16:01.906631    9665 api_server.go:72] duration metric: took 1.007585125s to wait for apiserver process to appear ...
	I0503 15:16:01.906643    9665 api_server.go:88] waiting for apiserver healthz status ...
	I0503 15:16:01.906653    9665 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0503 15:16:06.908664    9665 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0503 15:16:06.908712    9665 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0503 15:16:11.909092    9665 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0503 15:16:11.909218    9665 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0503 15:16:16.909894    9665 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0503 15:16:16.909931    9665 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0503 15:16:21.910646    9665 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0503 15:16:21.910740    9665 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0503 15:16:26.912161    9665 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0503 15:16:26.912279    9665 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0503 15:16:31.913978    9665 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0503 15:16:31.914072    9665 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0503 15:16:36.916201    9665 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0503 15:16:36.916288    9665 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0503 15:16:41.918826    9665 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0503 15:16:41.918938    9665 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0503 15:16:46.921438    9665 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0503 15:16:46.921521    9665 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0503 15:16:51.924070    9665 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0503 15:16:51.924156    9665 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0503 15:16:56.926735    9665 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0503 15:16:56.926820    9665 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0503 15:17:01.929239    9665 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
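
Every healthz check in the loop above fails after five seconds with "Client.Timeout exceeded", which is consistent with an HTTP client configured with a 5s timeout. A self-contained sketch of one such probe (hypothetical; the InsecureSkipVerify setting is an assumption for a local test client and is not confirmed by the log):

    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{
            Timeout: 5 * time.Second, // matches the 5s gap between checks in the log
            Transport: &http.Transport{
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
            },
        }
        resp, err := client.Get("https://10.0.2.15:8443/healthz")
        if err != nil {
            fmt.Println("stopped:", err) // the log reports Client.Timeout exceeded here
            return
        }
        defer resp.Body.Close()
        fmt.Println("healthz status:", resp.Status)
    }
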
	I0503 15:17:01.929713    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0503 15:17:01.966495    9665 logs.go:276] 2 containers: [4630927f679e c58ec9465be1]
	I0503 15:17:01.966653    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0503 15:17:01.987690    9665 logs.go:276] 2 containers: [b1ec22c1bc96 f90094320501]
	I0503 15:17:01.987803    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0503 15:17:02.002895    9665 logs.go:276] 1 containers: [a802e211e244]
	I0503 15:17:02.002970    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0503 15:17:02.015319    9665 logs.go:276] 2 containers: [58d277afe448 c33b5f027877]
	I0503 15:17:02.015386    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0503 15:17:02.026130    9665 logs.go:276] 1 containers: [d906735280f1]
	I0503 15:17:02.026199    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0503 15:17:02.036229    9665 logs.go:276] 2 containers: [794f5cf7e82f 52384ec84857]
	I0503 15:17:02.036294    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0503 15:17:02.046286    9665 logs.go:276] 0 containers: []
	W0503 15:17:02.046297    9665 logs.go:278] No container was found matching "kindnet"
	I0503 15:17:02.046354    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0503 15:17:02.056696    9665 logs.go:276] 0 containers: []
	W0503 15:17:02.056705    9665 logs.go:278] No container was found matching "storage-provisioner"
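
With the apiserver still unreachable, the run falls back to diagnostics: one docker ps -a query per control-plane component, filtered by the k8s_ container-name prefix, with a warning for each component that has no container (kindnet and storage-provisioner above). A sketch of that enumeration (hypothetical; assumes a local docker CLI rather than minikube's ssh_runner):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        for _, name := range []string{"kube-apiserver", "etcd", "coredns",
            "kube-scheduler", "kube-proxy", "kube-controller-manager",
            "kindnet", "storage-provisioner"} {
            out, err := exec.Command("docker", "ps", "-a",
                "--filter=name=k8s_"+name, "--format={{.ID}}").Output()
            if err != nil {
                fmt.Println(name, "error:", err)
                continue
            }
            ids := strings.Fields(string(out))
            if len(ids) == 0 {
                fmt.Printf("No container was found matching %q\n", name)
                continue
            }
            fmt.Printf("%s: %d containers: %v\n", name, len(ids), ids)
        }
    }
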
	I0503 15:17:02.056714    9665 logs.go:123] Gathering logs for kube-apiserver [4630927f679e] ...
	I0503 15:17:02.056719    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4630927f679e"
	I0503 15:17:02.070512    9665 logs.go:123] Gathering logs for etcd [f90094320501] ...
	I0503 15:17:02.070525    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f90094320501"
	I0503 15:17:02.089341    9665 logs.go:123] Gathering logs for kube-scheduler [c33b5f027877] ...
	I0503 15:17:02.089354    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c33b5f027877"
	I0503 15:17:02.103716    9665 logs.go:123] Gathering logs for container status ...
	I0503 15:17:02.103727    9665 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0503 15:17:02.115285    9665 logs.go:123] Gathering logs for kubelet ...
	I0503 15:17:02.115295    9665 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0503 15:17:02.154398    9665 logs.go:123] Gathering logs for kube-apiserver [c58ec9465be1] ...
	I0503 15:17:02.154407    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c58ec9465be1"
	I0503 15:17:02.178430    9665 logs.go:123] Gathering logs for kube-scheduler [58d277afe448] ...
	I0503 15:17:02.178440    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 58d277afe448"
	I0503 15:17:02.190470    9665 logs.go:123] Gathering logs for kube-controller-manager [794f5cf7e82f] ...
	I0503 15:17:02.190481    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 794f5cf7e82f"
	I0503 15:17:02.210361    9665 logs.go:123] Gathering logs for kube-controller-manager [52384ec84857] ...
	I0503 15:17:02.210371    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 52384ec84857"
	I0503 15:17:02.226770    9665 logs.go:123] Gathering logs for dmesg ...
	I0503 15:17:02.226783    9665 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0503 15:17:02.231102    9665 logs.go:123] Gathering logs for describe nodes ...
	I0503 15:17:02.231112    9665 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0503 15:17:02.299727    9665 logs.go:123] Gathering logs for etcd [b1ec22c1bc96] ...
	I0503 15:17:02.299741    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1ec22c1bc96"
	I0503 15:17:02.313441    9665 logs.go:123] Gathering logs for coredns [a802e211e244] ...
	I0503 15:17:02.313451    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a802e211e244"
	I0503 15:17:02.324764    9665 logs.go:123] Gathering logs for kube-proxy [d906735280f1] ...
	I0503 15:17:02.324774    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d906735280f1"
	I0503 15:17:02.336171    9665 logs.go:123] Gathering logs for Docker ...
	I0503 15:17:02.336180    9665 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
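
Each diagnostic cycle then tails the last 400 lines from every source it found: docker logs per container, journalctl for kubelet and docker/cri-docker, filtered dmesg, and kubectl describe nodes, before the next healthz attempt. A condensed sketch of that fan-out over a few of the sources above (hypothetical gather helper; errors are ignored for brevity):

    package main

    import (
        "fmt"
        "os/exec"
    )

    func gather(name, cmd string) {
        fmt.Println("Gathering logs for", name, "...")
        // Errors are deliberately ignored in this sketch; the real run logs them.
        out, _ := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
        fmt.Printf("%s", out)
    }

    func main() {
        gather("kubelet", "sudo journalctl -u kubelet -n 400")
        gather("dmesg",
            "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400")
        gather("kube-apiserver [4630927f679e]", "docker logs --tail 400 4630927f679e")
        gather("Docker", "sudo journalctl -u docker -u cri-docker -n 400")
    }

The cycles that follow repeat this same enumeration and gathering in varying order while the healthz endpoint keeps timing out.
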
	I0503 15:17:04.862203    9665 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0503 15:17:09.864109    9665 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0503 15:17:09.864560    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0503 15:17:09.903412    9665 logs.go:276] 2 containers: [4630927f679e c58ec9465be1]
	I0503 15:17:09.903541    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0503 15:17:09.924477    9665 logs.go:276] 2 containers: [b1ec22c1bc96 f90094320501]
	I0503 15:17:09.924559    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0503 15:17:09.940118    9665 logs.go:276] 1 containers: [a802e211e244]
	I0503 15:17:09.940192    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0503 15:17:09.952815    9665 logs.go:276] 2 containers: [58d277afe448 c33b5f027877]
	I0503 15:17:09.952886    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0503 15:17:09.963982    9665 logs.go:276] 1 containers: [d906735280f1]
	I0503 15:17:09.964052    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0503 15:17:09.975021    9665 logs.go:276] 2 containers: [794f5cf7e82f 52384ec84857]
	I0503 15:17:09.975091    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0503 15:17:09.985420    9665 logs.go:276] 0 containers: []
	W0503 15:17:09.985431    9665 logs.go:278] No container was found matching "kindnet"
	I0503 15:17:09.985488    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0503 15:17:09.995766    9665 logs.go:276] 0 containers: []
	W0503 15:17:09.995777    9665 logs.go:278] No container was found matching "storage-provisioner"
	I0503 15:17:09.995785    9665 logs.go:123] Gathering logs for kube-controller-manager [794f5cf7e82f] ...
	I0503 15:17:09.995790    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 794f5cf7e82f"
	I0503 15:17:10.012850    9665 logs.go:123] Gathering logs for Docker ...
	I0503 15:17:10.012863    9665 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0503 15:17:10.038475    9665 logs.go:123] Gathering logs for kube-apiserver [4630927f679e] ...
	I0503 15:17:10.038482    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4630927f679e"
	I0503 15:17:10.052611    9665 logs.go:123] Gathering logs for kube-proxy [d906735280f1] ...
	I0503 15:17:10.052623    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d906735280f1"
	I0503 15:17:10.064353    9665 logs.go:123] Gathering logs for etcd [b1ec22c1bc96] ...
	I0503 15:17:10.064363    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1ec22c1bc96"
	I0503 15:17:10.077613    9665 logs.go:123] Gathering logs for kube-apiserver [c58ec9465be1] ...
	I0503 15:17:10.077624    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c58ec9465be1"
	I0503 15:17:10.096591    9665 logs.go:123] Gathering logs for coredns [a802e211e244] ...
	I0503 15:17:10.096603    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a802e211e244"
	I0503 15:17:10.107886    9665 logs.go:123] Gathering logs for kube-scheduler [58d277afe448] ...
	I0503 15:17:10.107899    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 58d277afe448"
	I0503 15:17:10.119118    9665 logs.go:123] Gathering logs for kube-scheduler [c33b5f027877] ...
	I0503 15:17:10.119131    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c33b5f027877"
	I0503 15:17:10.132971    9665 logs.go:123] Gathering logs for kube-controller-manager [52384ec84857] ...
	I0503 15:17:10.132981    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 52384ec84857"
	I0503 15:17:10.146483    9665 logs.go:123] Gathering logs for container status ...
	I0503 15:17:10.146496    9665 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0503 15:17:10.158188    9665 logs.go:123] Gathering logs for dmesg ...
	I0503 15:17:10.158203    9665 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0503 15:17:10.162883    9665 logs.go:123] Gathering logs for describe nodes ...
	I0503 15:17:10.162890    9665 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0503 15:17:10.198660    9665 logs.go:123] Gathering logs for etcd [f90094320501] ...
	I0503 15:17:10.198677    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f90094320501"
	I0503 15:17:10.216499    9665 logs.go:123] Gathering logs for kubelet ...
	I0503 15:17:10.216509    9665 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0503 15:17:12.756150    9665 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0503 15:17:17.758842    9665 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0503 15:17:17.759257    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0503 15:17:17.798507    9665 logs.go:276] 2 containers: [4630927f679e c58ec9465be1]
	I0503 15:17:17.798636    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0503 15:17:17.819085    9665 logs.go:276] 2 containers: [b1ec22c1bc96 f90094320501]
	I0503 15:17:17.819195    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0503 15:17:17.834245    9665 logs.go:276] 1 containers: [a802e211e244]
	I0503 15:17:17.834316    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0503 15:17:17.847626    9665 logs.go:276] 2 containers: [58d277afe448 c33b5f027877]
	I0503 15:17:17.847695    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0503 15:17:17.857646    9665 logs.go:276] 1 containers: [d906735280f1]
	I0503 15:17:17.857714    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0503 15:17:17.868501    9665 logs.go:276] 2 containers: [794f5cf7e82f 52384ec84857]
	I0503 15:17:17.868569    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0503 15:17:17.884820    9665 logs.go:276] 0 containers: []
	W0503 15:17:17.884831    9665 logs.go:278] No container was found matching "kindnet"
	I0503 15:17:17.884891    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0503 15:17:17.895675    9665 logs.go:276] 0 containers: []
	W0503 15:17:17.895687    9665 logs.go:278] No container was found matching "storage-provisioner"
	I0503 15:17:17.895704    9665 logs.go:123] Gathering logs for describe nodes ...
	I0503 15:17:17.895713    9665 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0503 15:17:17.930534    9665 logs.go:123] Gathering logs for kube-scheduler [58d277afe448] ...
	I0503 15:17:17.930549    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 58d277afe448"
	I0503 15:17:17.942661    9665 logs.go:123] Gathering logs for container status ...
	I0503 15:17:17.942687    9665 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0503 15:17:17.955221    9665 logs.go:123] Gathering logs for kube-proxy [d906735280f1] ...
	I0503 15:17:17.955235    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d906735280f1"
	I0503 15:17:17.968836    9665 logs.go:123] Gathering logs for kube-controller-manager [52384ec84857] ...
	I0503 15:17:17.968849    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 52384ec84857"
	I0503 15:17:17.982260    9665 logs.go:123] Gathering logs for kubelet ...
	I0503 15:17:17.982273    9665 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0503 15:17:18.022594    9665 logs.go:123] Gathering logs for kube-apiserver [4630927f679e] ...
	I0503 15:17:18.022603    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4630927f679e"
	I0503 15:17:18.036734    9665 logs.go:123] Gathering logs for kube-apiserver [c58ec9465be1] ...
	I0503 15:17:18.036744    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c58ec9465be1"
	I0503 15:17:18.059146    9665 logs.go:123] Gathering logs for coredns [a802e211e244] ...
	I0503 15:17:18.059157    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a802e211e244"
	I0503 15:17:18.074159    9665 logs.go:123] Gathering logs for kube-scheduler [c33b5f027877] ...
	I0503 15:17:18.074170    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c33b5f027877"
	I0503 15:17:18.088077    9665 logs.go:123] Gathering logs for Docker ...
	I0503 15:17:18.088089    9665 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0503 15:17:18.111924    9665 logs.go:123] Gathering logs for dmesg ...
	I0503 15:17:18.111931    9665 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0503 15:17:18.115919    9665 logs.go:123] Gathering logs for etcd [b1ec22c1bc96] ...
	I0503 15:17:18.115926    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1ec22c1bc96"
	I0503 15:17:18.130709    9665 logs.go:123] Gathering logs for etcd [f90094320501] ...
	I0503 15:17:18.130720    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f90094320501"
	I0503 15:17:18.148145    9665 logs.go:123] Gathering logs for kube-controller-manager [794f5cf7e82f] ...
	I0503 15:17:18.148157    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 794f5cf7e82f"
	I0503 15:17:20.666287    9665 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0503 15:17:25.669109    9665 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0503 15:17:25.669497    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0503 15:17:25.700077    9665 logs.go:276] 2 containers: [4630927f679e c58ec9465be1]
	I0503 15:17:25.700212    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0503 15:17:25.719963    9665 logs.go:276] 2 containers: [b1ec22c1bc96 f90094320501]
	I0503 15:17:25.720062    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0503 15:17:25.734033    9665 logs.go:276] 1 containers: [a802e211e244]
	I0503 15:17:25.734106    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0503 15:17:25.745591    9665 logs.go:276] 2 containers: [58d277afe448 c33b5f027877]
	I0503 15:17:25.745661    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0503 15:17:25.756077    9665 logs.go:276] 1 containers: [d906735280f1]
	I0503 15:17:25.756147    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0503 15:17:25.766411    9665 logs.go:276] 2 containers: [794f5cf7e82f 52384ec84857]
	I0503 15:17:25.766483    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0503 15:17:25.776295    9665 logs.go:276] 0 containers: []
	W0503 15:17:25.776305    9665 logs.go:278] No container was found matching "kindnet"
	I0503 15:17:25.776363    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0503 15:17:25.799147    9665 logs.go:276] 0 containers: []
	W0503 15:17:25.799159    9665 logs.go:278] No container was found matching "storage-provisioner"
	I0503 15:17:25.799167    9665 logs.go:123] Gathering logs for describe nodes ...
	I0503 15:17:25.799172    9665 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0503 15:17:25.833501    9665 logs.go:123] Gathering logs for kube-apiserver [c58ec9465be1] ...
	I0503 15:17:25.833524    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c58ec9465be1"
	I0503 15:17:25.853310    9665 logs.go:123] Gathering logs for Docker ...
	I0503 15:17:25.853320    9665 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0503 15:17:25.877653    9665 logs.go:123] Gathering logs for dmesg ...
	I0503 15:17:25.877661    9665 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0503 15:17:25.882389    9665 logs.go:123] Gathering logs for kube-controller-manager [52384ec84857] ...
	I0503 15:17:25.882396    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 52384ec84857"
	I0503 15:17:25.895520    9665 logs.go:123] Gathering logs for kubelet ...
	I0503 15:17:25.895532    9665 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0503 15:17:25.932938    9665 logs.go:123] Gathering logs for kube-apiserver [4630927f679e] ...
	I0503 15:17:25.932947    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4630927f679e"
	I0503 15:17:25.950656    9665 logs.go:123] Gathering logs for kube-scheduler [58d277afe448] ...
	I0503 15:17:25.950668    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 58d277afe448"
	I0503 15:17:25.961999    9665 logs.go:123] Gathering logs for kube-scheduler [c33b5f027877] ...
	I0503 15:17:25.962008    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c33b5f027877"
	I0503 15:17:25.976380    9665 logs.go:123] Gathering logs for kube-proxy [d906735280f1] ...
	I0503 15:17:25.976392    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d906735280f1"
	I0503 15:17:25.993445    9665 logs.go:123] Gathering logs for kube-controller-manager [794f5cf7e82f] ...
	I0503 15:17:25.993455    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 794f5cf7e82f"
	I0503 15:17:26.012362    9665 logs.go:123] Gathering logs for etcd [b1ec22c1bc96] ...
	I0503 15:17:26.012375    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1ec22c1bc96"
	I0503 15:17:26.032541    9665 logs.go:123] Gathering logs for etcd [f90094320501] ...
	I0503 15:17:26.032553    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f90094320501"
	I0503 15:17:26.049532    9665 logs.go:123] Gathering logs for coredns [a802e211e244] ...
	I0503 15:17:26.049543    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a802e211e244"
	I0503 15:17:26.061380    9665 logs.go:123] Gathering logs for container status ...
	I0503 15:17:26.061392    9665 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0503 15:17:28.577003    9665 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0503 15:17:33.579753    9665 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0503 15:17:33.580172    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0503 15:17:33.621960    9665 logs.go:276] 2 containers: [4630927f679e c58ec9465be1]
	I0503 15:17:33.622102    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0503 15:17:33.644395    9665 logs.go:276] 2 containers: [b1ec22c1bc96 f90094320501]
	I0503 15:17:33.644515    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0503 15:17:33.659679    9665 logs.go:276] 1 containers: [a802e211e244]
	I0503 15:17:33.659757    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0503 15:17:33.672217    9665 logs.go:276] 2 containers: [58d277afe448 c33b5f027877]
	I0503 15:17:33.672285    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0503 15:17:33.683022    9665 logs.go:276] 1 containers: [d906735280f1]
	I0503 15:17:33.683092    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0503 15:17:33.695365    9665 logs.go:276] 2 containers: [794f5cf7e82f 52384ec84857]
	I0503 15:17:33.695438    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0503 15:17:33.705227    9665 logs.go:276] 0 containers: []
	W0503 15:17:33.705238    9665 logs.go:278] No container was found matching "kindnet"
	I0503 15:17:33.705298    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0503 15:17:33.715349    9665 logs.go:276] 0 containers: []
	W0503 15:17:33.715361    9665 logs.go:278] No container was found matching "storage-provisioner"
	I0503 15:17:33.715369    9665 logs.go:123] Gathering logs for kube-apiserver [c58ec9465be1] ...
	I0503 15:17:33.715381    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c58ec9465be1"
	I0503 15:17:33.734642    9665 logs.go:123] Gathering logs for kube-scheduler [58d277afe448] ...
	I0503 15:17:33.734653    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 58d277afe448"
	I0503 15:17:33.746035    9665 logs.go:123] Gathering logs for kube-controller-manager [52384ec84857] ...
	I0503 15:17:33.746048    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 52384ec84857"
	I0503 15:17:33.759348    9665 logs.go:123] Gathering logs for Docker ...
	I0503 15:17:33.759360    9665 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0503 15:17:33.784130    9665 logs.go:123] Gathering logs for dmesg ...
	I0503 15:17:33.784138    9665 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0503 15:17:33.788804    9665 logs.go:123] Gathering logs for kube-apiserver [4630927f679e] ...
	I0503 15:17:33.788813    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4630927f679e"
	I0503 15:17:33.802762    9665 logs.go:123] Gathering logs for etcd [b1ec22c1bc96] ...
	I0503 15:17:33.802774    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1ec22c1bc96"
	I0503 15:17:33.816404    9665 logs.go:123] Gathering logs for etcd [f90094320501] ...
	I0503 15:17:33.816417    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f90094320501"
	I0503 15:17:33.834523    9665 logs.go:123] Gathering logs for coredns [a802e211e244] ...
	I0503 15:17:33.834535    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a802e211e244"
	I0503 15:17:33.845676    9665 logs.go:123] Gathering logs for kube-controller-manager [794f5cf7e82f] ...
	I0503 15:17:33.845688    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 794f5cf7e82f"
	I0503 15:17:33.866143    9665 logs.go:123] Gathering logs for kubelet ...
	I0503 15:17:33.866158    9665 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0503 15:17:33.906271    9665 logs.go:123] Gathering logs for describe nodes ...
	I0503 15:17:33.906280    9665 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0503 15:17:33.943085    9665 logs.go:123] Gathering logs for kube-scheduler [c33b5f027877] ...
	I0503 15:17:33.943100    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c33b5f027877"
	I0503 15:17:33.957777    9665 logs.go:123] Gathering logs for kube-proxy [d906735280f1] ...
	I0503 15:17:33.957790    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d906735280f1"
	I0503 15:17:33.969706    9665 logs.go:123] Gathering logs for container status ...
	I0503 15:17:33.969720    9665 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0503 15:17:36.484124    9665 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0503 15:17:41.486854    9665 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0503 15:17:41.487229    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0503 15:17:41.519234    9665 logs.go:276] 2 containers: [4630927f679e c58ec9465be1]
	I0503 15:17:41.519407    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0503 15:17:41.538635    9665 logs.go:276] 2 containers: [b1ec22c1bc96 f90094320501]
	I0503 15:17:41.538730    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0503 15:17:41.554543    9665 logs.go:276] 1 containers: [a802e211e244]
	I0503 15:17:41.554621    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0503 15:17:41.566695    9665 logs.go:276] 2 containers: [58d277afe448 c33b5f027877]
	I0503 15:17:41.566778    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0503 15:17:41.577433    9665 logs.go:276] 1 containers: [d906735280f1]
	I0503 15:17:41.577502    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0503 15:17:41.587886    9665 logs.go:276] 2 containers: [794f5cf7e82f 52384ec84857]
	I0503 15:17:41.587951    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0503 15:17:41.597805    9665 logs.go:276] 0 containers: []
	W0503 15:17:41.597816    9665 logs.go:278] No container was found matching "kindnet"
	I0503 15:17:41.597875    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0503 15:17:41.608100    9665 logs.go:276] 0 containers: []
	W0503 15:17:41.608111    9665 logs.go:278] No container was found matching "storage-provisioner"
	I0503 15:17:41.608119    9665 logs.go:123] Gathering logs for kubelet ...
	I0503 15:17:41.608127    9665 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0503 15:17:41.645496    9665 logs.go:123] Gathering logs for describe nodes ...
	I0503 15:17:41.645504    9665 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0503 15:17:41.680222    9665 logs.go:123] Gathering logs for etcd [b1ec22c1bc96] ...
	I0503 15:17:41.680233    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1ec22c1bc96"
	I0503 15:17:41.693852    9665 logs.go:123] Gathering logs for etcd [f90094320501] ...
	I0503 15:17:41.693864    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f90094320501"
	I0503 15:17:41.719342    9665 logs.go:123] Gathering logs for container status ...
	I0503 15:17:41.719352    9665 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0503 15:17:41.730758    9665 logs.go:123] Gathering logs for kube-apiserver [4630927f679e] ...
	I0503 15:17:41.730772    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4630927f679e"
	I0503 15:17:41.744518    9665 logs.go:123] Gathering logs for kube-apiserver [c58ec9465be1] ...
	I0503 15:17:41.744530    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c58ec9465be1"
	I0503 15:17:41.766248    9665 logs.go:123] Gathering logs for kube-controller-manager [52384ec84857] ...
	I0503 15:17:41.766257    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 52384ec84857"
	I0503 15:17:41.779087    9665 logs.go:123] Gathering logs for Docker ...
	I0503 15:17:41.779101    9665 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0503 15:17:41.804309    9665 logs.go:123] Gathering logs for coredns [a802e211e244] ...
	I0503 15:17:41.804319    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a802e211e244"
	I0503 15:17:41.815745    9665 logs.go:123] Gathering logs for kube-scheduler [c33b5f027877] ...
	I0503 15:17:41.815757    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c33b5f027877"
	I0503 15:17:41.829989    9665 logs.go:123] Gathering logs for kube-controller-manager [794f5cf7e82f] ...
	I0503 15:17:41.830004    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 794f5cf7e82f"
	I0503 15:17:41.847221    9665 logs.go:123] Gathering logs for dmesg ...
	I0503 15:17:41.847229    9665 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0503 15:17:41.851419    9665 logs.go:123] Gathering logs for kube-scheduler [58d277afe448] ...
	I0503 15:17:41.851424    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 58d277afe448"
	I0503 15:17:41.862821    9665 logs.go:123] Gathering logs for kube-proxy [d906735280f1] ...
	I0503 15:17:41.862835    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d906735280f1"
	I0503 15:17:44.376219    9665 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0503 15:17:49.378927    9665 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0503 15:17:49.379418    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0503 15:17:49.411685    9665 logs.go:276] 2 containers: [4630927f679e c58ec9465be1]
	I0503 15:17:49.411815    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0503 15:17:49.433665    9665 logs.go:276] 2 containers: [b1ec22c1bc96 f90094320501]
	I0503 15:17:49.433757    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0503 15:17:49.449101    9665 logs.go:276] 1 containers: [a802e211e244]
	I0503 15:17:49.449183    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0503 15:17:49.461805    9665 logs.go:276] 2 containers: [58d277afe448 c33b5f027877]
	I0503 15:17:49.461878    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0503 15:17:49.472457    9665 logs.go:276] 1 containers: [d906735280f1]
	I0503 15:17:49.472520    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0503 15:17:49.482972    9665 logs.go:276] 2 containers: [794f5cf7e82f 52384ec84857]
	I0503 15:17:49.483033    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0503 15:17:49.493242    9665 logs.go:276] 0 containers: []
	W0503 15:17:49.493253    9665 logs.go:278] No container was found matching "kindnet"
	I0503 15:17:49.493302    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0503 15:17:49.503176    9665 logs.go:276] 0 containers: []
	W0503 15:17:49.503186    9665 logs.go:278] No container was found matching "storage-provisioner"
	I0503 15:17:49.503195    9665 logs.go:123] Gathering logs for dmesg ...
	I0503 15:17:49.503201    9665 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0503 15:17:49.507402    9665 logs.go:123] Gathering logs for kube-scheduler [c33b5f027877] ...
	I0503 15:17:49.507408    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c33b5f027877"
	I0503 15:17:49.522270    9665 logs.go:123] Gathering logs for kube-proxy [d906735280f1] ...
	I0503 15:17:49.522280    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d906735280f1"
	I0503 15:17:49.534010    9665 logs.go:123] Gathering logs for kube-controller-manager [52384ec84857] ...
	I0503 15:17:49.534020    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 52384ec84857"
	I0503 15:17:49.547517    9665 logs.go:123] Gathering logs for Docker ...
	I0503 15:17:49.547526    9665 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0503 15:17:49.571054    9665 logs.go:123] Gathering logs for describe nodes ...
	I0503 15:17:49.571061    9665 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0503 15:17:49.604480    9665 logs.go:123] Gathering logs for kube-apiserver [c58ec9465be1] ...
	I0503 15:17:49.604496    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c58ec9465be1"
	I0503 15:17:49.623846    9665 logs.go:123] Gathering logs for coredns [a802e211e244] ...
	I0503 15:17:49.623854    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a802e211e244"
	I0503 15:17:49.635048    9665 logs.go:123] Gathering logs for kube-scheduler [58d277afe448] ...
	I0503 15:17:49.635059    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 58d277afe448"
	I0503 15:17:49.647171    9665 logs.go:123] Gathering logs for container status ...
	I0503 15:17:49.647180    9665 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0503 15:17:49.658464    9665 logs.go:123] Gathering logs for kubelet ...
	I0503 15:17:49.658475    9665 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0503 15:17:49.697651    9665 logs.go:123] Gathering logs for kube-apiserver [4630927f679e] ...
	I0503 15:17:49.697659    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4630927f679e"
	I0503 15:17:49.721420    9665 logs.go:123] Gathering logs for etcd [f90094320501] ...
	I0503 15:17:49.721430    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f90094320501"
	I0503 15:17:49.744936    9665 logs.go:123] Gathering logs for kube-controller-manager [794f5cf7e82f] ...
	I0503 15:17:49.744946    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 794f5cf7e82f"
	I0503 15:17:49.765720    9665 logs.go:123] Gathering logs for etcd [b1ec22c1bc96] ...
	I0503 15:17:49.765732    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1ec22c1bc96"
	I0503 15:17:52.282398    9665 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0503 15:17:57.283933    9665 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0503 15:17:57.284126    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0503 15:17:57.302029    9665 logs.go:276] 2 containers: [4630927f679e c58ec9465be1]
	I0503 15:17:57.302116    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0503 15:17:57.315742    9665 logs.go:276] 2 containers: [b1ec22c1bc96 f90094320501]
	I0503 15:17:57.315817    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0503 15:17:57.328229    9665 logs.go:276] 1 containers: [a802e211e244]
	I0503 15:17:57.328296    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0503 15:17:57.344589    9665 logs.go:276] 2 containers: [58d277afe448 c33b5f027877]
	I0503 15:17:57.344651    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0503 15:17:57.354933    9665 logs.go:276] 1 containers: [d906735280f1]
	I0503 15:17:57.354994    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0503 15:17:57.365467    9665 logs.go:276] 2 containers: [794f5cf7e82f 52384ec84857]
	I0503 15:17:57.365531    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0503 15:17:57.375176    9665 logs.go:276] 0 containers: []
	W0503 15:17:57.375186    9665 logs.go:278] No container was found matching "kindnet"
	I0503 15:17:57.375236    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0503 15:17:57.385367    9665 logs.go:276] 0 containers: []
	W0503 15:17:57.385381    9665 logs.go:278] No container was found matching "storage-provisioner"
	I0503 15:17:57.385388    9665 logs.go:123] Gathering logs for kube-apiserver [c58ec9465be1] ...
	I0503 15:17:57.385393    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c58ec9465be1"
	I0503 15:17:57.403948    9665 logs.go:123] Gathering logs for etcd [b1ec22c1bc96] ...
	I0503 15:17:57.403959    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1ec22c1bc96"
	I0503 15:17:57.417645    9665 logs.go:123] Gathering logs for kube-controller-manager [794f5cf7e82f] ...
	I0503 15:17:57.417657    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 794f5cf7e82f"
	I0503 15:17:57.435224    9665 logs.go:123] Gathering logs for container status ...
	I0503 15:17:57.435237    9665 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0503 15:17:57.447101    9665 logs.go:123] Gathering logs for describe nodes ...
	I0503 15:17:57.447113    9665 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0503 15:17:57.485814    9665 logs.go:123] Gathering logs for kube-controller-manager [52384ec84857] ...
	I0503 15:17:57.485825    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 52384ec84857"
	I0503 15:17:57.499466    9665 logs.go:123] Gathering logs for etcd [f90094320501] ...
	I0503 15:17:57.499479    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f90094320501"
	I0503 15:17:57.516680    9665 logs.go:123] Gathering logs for dmesg ...
	I0503 15:17:57.516693    9665 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0503 15:17:57.521373    9665 logs.go:123] Gathering logs for kube-apiserver [4630927f679e] ...
	I0503 15:17:57.521381    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4630927f679e"
	I0503 15:17:57.535140    9665 logs.go:123] Gathering logs for kube-scheduler [c33b5f027877] ...
	I0503 15:17:57.535150    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c33b5f027877"
	I0503 15:17:57.549629    9665 logs.go:123] Gathering logs for kubelet ...
	I0503 15:17:57.549641    9665 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0503 15:17:57.588732    9665 logs.go:123] Gathering logs for kube-scheduler [58d277afe448] ...
	I0503 15:17:57.588739    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 58d277afe448"
	I0503 15:17:57.600930    9665 logs.go:123] Gathering logs for kube-proxy [d906735280f1] ...
	I0503 15:17:57.600944    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d906735280f1"
	I0503 15:17:57.612515    9665 logs.go:123] Gathering logs for Docker ...
	I0503 15:17:57.612528    9665 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0503 15:17:57.637049    9665 logs.go:123] Gathering logs for coredns [a802e211e244] ...
	I0503 15:17:57.637056    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a802e211e244"
	I0503 15:18:00.147930    9665 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0503 15:18:05.149438    9665 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0503 15:18:05.149658    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0503 15:18:05.181312    9665 logs.go:276] 2 containers: [4630927f679e c58ec9465be1]
	I0503 15:18:05.181436    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0503 15:18:05.202488    9665 logs.go:276] 2 containers: [b1ec22c1bc96 f90094320501]
	I0503 15:18:05.202596    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0503 15:18:05.218321    9665 logs.go:276] 1 containers: [a802e211e244]
	I0503 15:18:05.218397    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0503 15:18:05.230628    9665 logs.go:276] 2 containers: [58d277afe448 c33b5f027877]
	I0503 15:18:05.230694    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0503 15:18:05.241338    9665 logs.go:276] 1 containers: [d906735280f1]
	I0503 15:18:05.241404    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0503 15:18:05.252088    9665 logs.go:276] 2 containers: [794f5cf7e82f 52384ec84857]
	I0503 15:18:05.252155    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0503 15:18:05.262037    9665 logs.go:276] 0 containers: []
	W0503 15:18:05.262051    9665 logs.go:278] No container was found matching "kindnet"
	I0503 15:18:05.262103    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0503 15:18:05.272642    9665 logs.go:276] 0 containers: []
	W0503 15:18:05.272653    9665 logs.go:278] No container was found matching "storage-provisioner"
	I0503 15:18:05.272660    9665 logs.go:123] Gathering logs for kube-apiserver [c58ec9465be1] ...
	I0503 15:18:05.272667    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c58ec9465be1"
	I0503 15:18:05.293312    9665 logs.go:123] Gathering logs for kube-controller-manager [52384ec84857] ...
	I0503 15:18:05.293324    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 52384ec84857"
	I0503 15:18:05.306805    9665 logs.go:123] Gathering logs for kubelet ...
	I0503 15:18:05.306817    9665 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0503 15:18:05.345843    9665 logs.go:123] Gathering logs for dmesg ...
	I0503 15:18:05.345851    9665 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0503 15:18:05.350679    9665 logs.go:123] Gathering logs for describe nodes ...
	I0503 15:18:05.350688    9665 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0503 15:18:05.390303    9665 logs.go:123] Gathering logs for coredns [a802e211e244] ...
	I0503 15:18:05.390331    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a802e211e244"
	I0503 15:18:05.409019    9665 logs.go:123] Gathering logs for kube-scheduler [c33b5f027877] ...
	I0503 15:18:05.409029    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c33b5f027877"
	I0503 15:18:05.428931    9665 logs.go:123] Gathering logs for kube-proxy [d906735280f1] ...
	I0503 15:18:05.428944    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d906735280f1"
	I0503 15:18:05.448901    9665 logs.go:123] Gathering logs for Docker ...
	I0503 15:18:05.448913    9665 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0503 15:18:05.473179    9665 logs.go:123] Gathering logs for container status ...
	I0503 15:18:05.473188    9665 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0503 15:18:05.485707    9665 logs.go:123] Gathering logs for etcd [b1ec22c1bc96] ...
	I0503 15:18:05.485720    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1ec22c1bc96"
	I0503 15:18:05.500301    9665 logs.go:123] Gathering logs for etcd [f90094320501] ...
	I0503 15:18:05.500314    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f90094320501"
	I0503 15:18:05.517462    9665 logs.go:123] Gathering logs for kube-scheduler [58d277afe448] ...
	I0503 15:18:05.517474    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 58d277afe448"
	I0503 15:18:05.528782    9665 logs.go:123] Gathering logs for kube-apiserver [4630927f679e] ...
	I0503 15:18:05.528795    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4630927f679e"
	I0503 15:18:05.542288    9665 logs.go:123] Gathering logs for kube-controller-manager [794f5cf7e82f] ...
	I0503 15:18:05.542300    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 794f5cf7e82f"
	I0503 15:18:08.061695    9665 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0503 15:18:13.064349    9665 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0503 15:18:13.064501    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0503 15:18:13.086458    9665 logs.go:276] 2 containers: [4630927f679e c58ec9465be1]
	I0503 15:18:13.086551    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0503 15:18:13.104787    9665 logs.go:276] 2 containers: [b1ec22c1bc96 f90094320501]
	I0503 15:18:13.104860    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0503 15:18:13.116766    9665 logs.go:276] 1 containers: [a802e211e244]
	I0503 15:18:13.116832    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0503 15:18:13.128201    9665 logs.go:276] 2 containers: [58d277afe448 c33b5f027877]
	I0503 15:18:13.128266    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0503 15:18:13.139532    9665 logs.go:276] 1 containers: [d906735280f1]
	I0503 15:18:13.139609    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0503 15:18:13.150746    9665 logs.go:276] 2 containers: [794f5cf7e82f 52384ec84857]
	I0503 15:18:13.150816    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0503 15:18:13.161122    9665 logs.go:276] 0 containers: []
	W0503 15:18:13.161133    9665 logs.go:278] No container was found matching "kindnet"
	I0503 15:18:13.161194    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0503 15:18:13.172066    9665 logs.go:276] 0 containers: []
	W0503 15:18:13.172077    9665 logs.go:278] No container was found matching "storage-provisioner"
	I0503 15:18:13.172085    9665 logs.go:123] Gathering logs for container status ...
	I0503 15:18:13.172095    9665 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0503 15:18:13.184875    9665 logs.go:123] Gathering logs for kube-apiserver [c58ec9465be1] ...
	I0503 15:18:13.184887    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c58ec9465be1"
	I0503 15:18:13.206940    9665 logs.go:123] Gathering logs for kube-scheduler [58d277afe448] ...
	I0503 15:18:13.206954    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 58d277afe448"
	I0503 15:18:13.221498    9665 logs.go:123] Gathering logs for kube-scheduler [c33b5f027877] ...
	I0503 15:18:13.221509    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c33b5f027877"
	I0503 15:18:13.241350    9665 logs.go:123] Gathering logs for kubelet ...
	I0503 15:18:13.241368    9665 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0503 15:18:13.282628    9665 logs.go:123] Gathering logs for dmesg ...
	I0503 15:18:13.282659    9665 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0503 15:18:13.288229    9665 logs.go:123] Gathering logs for kube-controller-manager [794f5cf7e82f] ...
	I0503 15:18:13.288247    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 794f5cf7e82f"
	I0503 15:18:13.315510    9665 logs.go:123] Gathering logs for Docker ...
	I0503 15:18:13.315526    9665 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0503 15:18:13.342864    9665 logs.go:123] Gathering logs for describe nodes ...
	I0503 15:18:13.342885    9665 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0503 15:18:13.382158    9665 logs.go:123] Gathering logs for etcd [b1ec22c1bc96] ...
	I0503 15:18:13.382170    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1ec22c1bc96"
	I0503 15:18:13.397300    9665 logs.go:123] Gathering logs for kube-proxy [d906735280f1] ...
	I0503 15:18:13.397314    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d906735280f1"
	I0503 15:18:13.416661    9665 logs.go:123] Gathering logs for kube-controller-manager [52384ec84857] ...
	I0503 15:18:13.416672    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 52384ec84857"
	I0503 15:18:13.430131    9665 logs.go:123] Gathering logs for kube-apiserver [4630927f679e] ...
	I0503 15:18:13.430145    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4630927f679e"
	I0503 15:18:13.444275    9665 logs.go:123] Gathering logs for etcd [f90094320501] ...
	I0503 15:18:13.444289    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f90094320501"
	I0503 15:18:13.469265    9665 logs.go:123] Gathering logs for coredns [a802e211e244] ...
	I0503 15:18:13.469275    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a802e211e244"
	I0503 15:18:15.991165    9665 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0503 15:18:20.993829    9665 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0503 15:18:20.993966    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0503 15:18:21.005340    9665 logs.go:276] 2 containers: [4630927f679e c58ec9465be1]
	I0503 15:18:21.005412    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0503 15:18:21.016653    9665 logs.go:276] 2 containers: [b1ec22c1bc96 f90094320501]
	I0503 15:18:21.016728    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0503 15:18:21.027336    9665 logs.go:276] 1 containers: [a802e211e244]
	I0503 15:18:21.027400    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0503 15:18:21.037961    9665 logs.go:276] 2 containers: [58d277afe448 c33b5f027877]
	I0503 15:18:21.038027    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0503 15:18:21.048167    9665 logs.go:276] 1 containers: [d906735280f1]
	I0503 15:18:21.048232    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0503 15:18:21.060228    9665 logs.go:276] 2 containers: [794f5cf7e82f 52384ec84857]
	I0503 15:18:21.060294    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0503 15:18:21.070144    9665 logs.go:276] 0 containers: []
	W0503 15:18:21.070156    9665 logs.go:278] No container was found matching "kindnet"
	I0503 15:18:21.070207    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0503 15:18:21.080114    9665 logs.go:276] 0 containers: []
	W0503 15:18:21.080124    9665 logs.go:278] No container was found matching "storage-provisioner"
	I0503 15:18:21.080134    9665 logs.go:123] Gathering logs for kube-apiserver [c58ec9465be1] ...
	I0503 15:18:21.080144    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c58ec9465be1"
	I0503 15:18:21.100363    9665 logs.go:123] Gathering logs for etcd [b1ec22c1bc96] ...
	I0503 15:18:21.100377    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1ec22c1bc96"
	I0503 15:18:21.114470    9665 logs.go:123] Gathering logs for kube-scheduler [58d277afe448] ...
	I0503 15:18:21.114484    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 58d277afe448"
	I0503 15:18:21.127820    9665 logs.go:123] Gathering logs for kubelet ...
	I0503 15:18:21.127831    9665 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0503 15:18:21.166896    9665 logs.go:123] Gathering logs for coredns [a802e211e244] ...
	I0503 15:18:21.166916    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a802e211e244"
	I0503 15:18:21.179523    9665 logs.go:123] Gathering logs for kube-scheduler [c33b5f027877] ...
	I0503 15:18:21.179535    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c33b5f027877"
	I0503 15:18:21.197821    9665 logs.go:123] Gathering logs for Docker ...
	I0503 15:18:21.197833    9665 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0503 15:18:21.223510    9665 logs.go:123] Gathering logs for container status ...
	I0503 15:18:21.223519    9665 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0503 15:18:21.236633    9665 logs.go:123] Gathering logs for dmesg ...
	I0503 15:18:21.236644    9665 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0503 15:18:21.241981    9665 logs.go:123] Gathering logs for describe nodes ...
	I0503 15:18:21.241988    9665 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0503 15:18:21.278094    9665 logs.go:123] Gathering logs for etcd [f90094320501] ...
	I0503 15:18:21.278107    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f90094320501"
	I0503 15:18:21.296261    9665 logs.go:123] Gathering logs for kube-apiserver [4630927f679e] ...
	I0503 15:18:21.296273    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4630927f679e"
	I0503 15:18:21.310751    9665 logs.go:123] Gathering logs for kube-proxy [d906735280f1] ...
	I0503 15:18:21.310764    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d906735280f1"
	I0503 15:18:21.323521    9665 logs.go:123] Gathering logs for kube-controller-manager [794f5cf7e82f] ...
	I0503 15:18:21.323533    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 794f5cf7e82f"
	I0503 15:18:21.341169    9665 logs.go:123] Gathering logs for kube-controller-manager [52384ec84857] ...
	I0503 15:18:21.341180    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 52384ec84857"
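
Between probes the runner drains every source it just discovered: the last 400 lines from each container, the kubelet and docker/cri-docker journals, filtered dmesg, a crictl/docker process listing, and `kubectl describe nodes` via the guest's pinned v1.24.1 kubectl. Note that the ordering of the sources is shuffled from cycle to cycle (compare this pass with the next one), which suggests iteration over an unordered map. A compact sketch of the same pass; the shell command strings are verbatim from the log, but the data structure and running them through a local /bin/bash rather than the guest's are this sketch's assumptions.

    package main

    import (
        "fmt"
        "os/exec"
    )

    // gatherLogs replays the collection pass from the log. containerIDs maps a
    // component name to its discovered IDs (hypothetical structure; only the
    // shell commands are copied verbatim from the log above).
    func gatherLogs(containerIDs map[string][]string) {
        for component, ids := range containerIDs { // Go map order is randomized, like the log's shuffled passes
            for _, id := range ids {
                fmt.Printf("Gathering logs for %s [%s] ...\n", component, id)
                exec.Command("/bin/bash", "-c", "docker logs --tail 400 "+id).Run()
            }
        }
        for _, cmd := range []string{
            "sudo journalctl -u kubelet -n 400",
            "sudo journalctl -u docker -u cri-docker -n 400",
            "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400",
            "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a",
            "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig",
        } {
            exec.Command("/bin/bash", "-c", cmd).Run()
        }
    }

    func main() {
        gatherLogs(map[string][]string{
            "kube-apiserver": {"4630927f679e", "c58ec9465be1"},
            "etcd":           {"b1ec22c1bc96", "f90094320501"},
        })
    }
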
	I0503 15:18:23.856963    9665 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0503 15:18:28.859252    9665 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0503 15:18:28.859683    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0503 15:18:28.895666    9665 logs.go:276] 2 containers: [4630927f679e c58ec9465be1]
	I0503 15:18:28.895804    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0503 15:18:28.916236    9665 logs.go:276] 2 containers: [b1ec22c1bc96 f90094320501]
	I0503 15:18:28.916339    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0503 15:18:28.930842    9665 logs.go:276] 1 containers: [a802e211e244]
	I0503 15:18:28.930919    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0503 15:18:28.942853    9665 logs.go:276] 2 containers: [58d277afe448 c33b5f027877]
	I0503 15:18:28.942925    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0503 15:18:28.953349    9665 logs.go:276] 1 containers: [d906735280f1]
	I0503 15:18:28.953421    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0503 15:18:28.963767    9665 logs.go:276] 2 containers: [794f5cf7e82f 52384ec84857]
	I0503 15:18:28.963842    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0503 15:18:28.975770    9665 logs.go:276] 0 containers: []
	W0503 15:18:28.975783    9665 logs.go:278] No container was found matching "kindnet"
	I0503 15:18:28.975843    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0503 15:18:28.986015    9665 logs.go:276] 0 containers: []
	W0503 15:18:28.986027    9665 logs.go:278] No container was found matching "storage-provisioner"
	I0503 15:18:28.986035    9665 logs.go:123] Gathering logs for etcd [f90094320501] ...
	I0503 15:18:28.986041    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f90094320501"
	I0503 15:18:29.004118    9665 logs.go:123] Gathering logs for kube-controller-manager [52384ec84857] ...
	I0503 15:18:29.004129    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 52384ec84857"
	I0503 15:18:29.017688    9665 logs.go:123] Gathering logs for kube-apiserver [c58ec9465be1] ...
	I0503 15:18:29.017702    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c58ec9465be1"
	I0503 15:18:29.041003    9665 logs.go:123] Gathering logs for coredns [a802e211e244] ...
	I0503 15:18:29.041014    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a802e211e244"
	I0503 15:18:29.056270    9665 logs.go:123] Gathering logs for dmesg ...
	I0503 15:18:29.056282    9665 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0503 15:18:29.060923    9665 logs.go:123] Gathering logs for kube-apiserver [4630927f679e] ...
	I0503 15:18:29.060933    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4630927f679e"
	I0503 15:18:29.078642    9665 logs.go:123] Gathering logs for kube-scheduler [58d277afe448] ...
	I0503 15:18:29.078655    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 58d277afe448"
	I0503 15:18:29.089860    9665 logs.go:123] Gathering logs for kube-controller-manager [794f5cf7e82f] ...
	I0503 15:18:29.089871    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 794f5cf7e82f"
	I0503 15:18:29.108612    9665 logs.go:123] Gathering logs for Docker ...
	I0503 15:18:29.108622    9665 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0503 15:18:29.131925    9665 logs.go:123] Gathering logs for container status ...
	I0503 15:18:29.131933    9665 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0503 15:18:29.143490    9665 logs.go:123] Gathering logs for describe nodes ...
	I0503 15:18:29.143504    9665 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0503 15:18:29.177326    9665 logs.go:123] Gathering logs for etcd [b1ec22c1bc96] ...
	I0503 15:18:29.177339    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1ec22c1bc96"
	I0503 15:18:29.191706    9665 logs.go:123] Gathering logs for kube-scheduler [c33b5f027877] ...
	I0503 15:18:29.191719    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c33b5f027877"
	I0503 15:18:29.205972    9665 logs.go:123] Gathering logs for kube-proxy [d906735280f1] ...
	I0503 15:18:29.205982    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d906735280f1"
	I0503 15:18:29.219487    9665 logs.go:123] Gathering logs for kubelet ...
	I0503 15:18:29.219496    9665 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0503 15:18:31.762257    9665 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0503 15:18:36.764794    9665 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0503 15:18:36.764940    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0503 15:18:36.779019    9665 logs.go:276] 2 containers: [4630927f679e c58ec9465be1]
	I0503 15:18:36.779112    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0503 15:18:36.791469    9665 logs.go:276] 2 containers: [b1ec22c1bc96 f90094320501]
	I0503 15:18:36.791545    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0503 15:18:36.801920    9665 logs.go:276] 1 containers: [a802e211e244]
	I0503 15:18:36.801988    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0503 15:18:36.812348    9665 logs.go:276] 2 containers: [58d277afe448 c33b5f027877]
	I0503 15:18:36.812416    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0503 15:18:36.822730    9665 logs.go:276] 1 containers: [d906735280f1]
	I0503 15:18:36.822799    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0503 15:18:36.833208    9665 logs.go:276] 2 containers: [794f5cf7e82f 52384ec84857]
	I0503 15:18:36.833267    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0503 15:18:36.847218    9665 logs.go:276] 0 containers: []
	W0503 15:18:36.847230    9665 logs.go:278] No container was found matching "kindnet"
	I0503 15:18:36.847289    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0503 15:18:36.857834    9665 logs.go:276] 0 containers: []
	W0503 15:18:36.857859    9665 logs.go:278] No container was found matching "storage-provisioner"
	I0503 15:18:36.857868    9665 logs.go:123] Gathering logs for dmesg ...
	I0503 15:18:36.857875    9665 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0503 15:18:36.862509    9665 logs.go:123] Gathering logs for kube-apiserver [4630927f679e] ...
	I0503 15:18:36.862520    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4630927f679e"
	I0503 15:18:36.876276    9665 logs.go:123] Gathering logs for etcd [b1ec22c1bc96] ...
	I0503 15:18:36.876288    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1ec22c1bc96"
	I0503 15:18:36.890318    9665 logs.go:123] Gathering logs for kube-controller-manager [794f5cf7e82f] ...
	I0503 15:18:36.890331    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 794f5cf7e82f"
	I0503 15:18:36.907687    9665 logs.go:123] Gathering logs for kube-apiserver [c58ec9465be1] ...
	I0503 15:18:36.907697    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c58ec9465be1"
	I0503 15:18:36.926687    9665 logs.go:123] Gathering logs for etcd [f90094320501] ...
	I0503 15:18:36.926701    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f90094320501"
	I0503 15:18:36.944545    9665 logs.go:123] Gathering logs for kube-scheduler [c33b5f027877] ...
	I0503 15:18:36.944556    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c33b5f027877"
	I0503 15:18:36.958235    9665 logs.go:123] Gathering logs for kube-proxy [d906735280f1] ...
	I0503 15:18:36.958246    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d906735280f1"
	I0503 15:18:36.970235    9665 logs.go:123] Gathering logs for kube-controller-manager [52384ec84857] ...
	I0503 15:18:36.970246    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 52384ec84857"
	I0503 15:18:36.983408    9665 logs.go:123] Gathering logs for Docker ...
	I0503 15:18:36.983419    9665 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0503 15:18:37.006623    9665 logs.go:123] Gathering logs for kubelet ...
	I0503 15:18:37.006629    9665 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0503 15:18:37.044090    9665 logs.go:123] Gathering logs for describe nodes ...
	I0503 15:18:37.044098    9665 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0503 15:18:37.080005    9665 logs.go:123] Gathering logs for coredns [a802e211e244] ...
	I0503 15:18:37.080019    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a802e211e244"
	I0503 15:18:37.091150    9665 logs.go:123] Gathering logs for kube-scheduler [58d277afe448] ...
	I0503 15:18:37.091160    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 58d277afe448"
	I0503 15:18:37.102842    9665 logs.go:123] Gathering logs for container status ...
	I0503 15:18:37.102853    9665 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0503 15:18:39.616653    9665 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0503 15:18:44.618947    9665 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0503 15:18:44.619331    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0503 15:18:44.654778    9665 logs.go:276] 2 containers: [4630927f679e c58ec9465be1]
	I0503 15:18:44.654913    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0503 15:18:44.675246    9665 logs.go:276] 2 containers: [b1ec22c1bc96 f90094320501]
	I0503 15:18:44.675330    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0503 15:18:44.689718    9665 logs.go:276] 1 containers: [a802e211e244]
	I0503 15:18:44.689795    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0503 15:18:44.702235    9665 logs.go:276] 2 containers: [58d277afe448 c33b5f027877]
	I0503 15:18:44.702312    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0503 15:18:44.712471    9665 logs.go:276] 1 containers: [d906735280f1]
	I0503 15:18:44.712544    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0503 15:18:44.722554    9665 logs.go:276] 2 containers: [794f5cf7e82f 52384ec84857]
	I0503 15:18:44.722617    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0503 15:18:44.732597    9665 logs.go:276] 0 containers: []
	W0503 15:18:44.732609    9665 logs.go:278] No container was found matching "kindnet"
	I0503 15:18:44.732667    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0503 15:18:44.742590    9665 logs.go:276] 0 containers: []
	W0503 15:18:44.742600    9665 logs.go:278] No container was found matching "storage-provisioner"
	I0503 15:18:44.742607    9665 logs.go:123] Gathering logs for kubelet ...
	I0503 15:18:44.742612    9665 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0503 15:18:44.779999    9665 logs.go:123] Gathering logs for dmesg ...
	I0503 15:18:44.780005    9665 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0503 15:18:44.784282    9665 logs.go:123] Gathering logs for kube-apiserver [c58ec9465be1] ...
	I0503 15:18:44.784288    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c58ec9465be1"
	I0503 15:18:44.803718    9665 logs.go:123] Gathering logs for kube-scheduler [58d277afe448] ...
	I0503 15:18:44.803729    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 58d277afe448"
	I0503 15:18:44.815104    9665 logs.go:123] Gathering logs for kube-apiserver [4630927f679e] ...
	I0503 15:18:44.815114    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4630927f679e"
	I0503 15:18:44.829070    9665 logs.go:123] Gathering logs for etcd [b1ec22c1bc96] ...
	I0503 15:18:44.829082    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1ec22c1bc96"
	I0503 15:18:44.849814    9665 logs.go:123] Gathering logs for etcd [f90094320501] ...
	I0503 15:18:44.849824    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f90094320501"
	I0503 15:18:44.867597    9665 logs.go:123] Gathering logs for coredns [a802e211e244] ...
	I0503 15:18:44.867607    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a802e211e244"
	I0503 15:18:44.878417    9665 logs.go:123] Gathering logs for kube-controller-manager [52384ec84857] ...
	I0503 15:18:44.878428    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 52384ec84857"
	I0503 15:18:44.892009    9665 logs.go:123] Gathering logs for kube-scheduler [c33b5f027877] ...
	I0503 15:18:44.892022    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c33b5f027877"
	I0503 15:18:44.905788    9665 logs.go:123] Gathering logs for kube-controller-manager [794f5cf7e82f] ...
	I0503 15:18:44.905802    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 794f5cf7e82f"
	I0503 15:18:44.923443    9665 logs.go:123] Gathering logs for Docker ...
	I0503 15:18:44.923454    9665 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0503 15:18:44.948320    9665 logs.go:123] Gathering logs for describe nodes ...
	I0503 15:18:44.948329    9665 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0503 15:18:44.983238    9665 logs.go:123] Gathering logs for kube-proxy [d906735280f1] ...
	I0503 15:18:44.983251    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d906735280f1"
	I0503 15:18:44.997136    9665 logs.go:123] Gathering logs for container status ...
	I0503 15:18:44.997147    9665 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0503 15:18:47.511492    9665 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0503 15:18:52.513615    9665 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0503 15:18:52.513730    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0503 15:18:52.525996    9665 logs.go:276] 2 containers: [4630927f679e c58ec9465be1]
	I0503 15:18:52.526062    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0503 15:18:52.540321    9665 logs.go:276] 2 containers: [b1ec22c1bc96 f90094320501]
	I0503 15:18:52.540399    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0503 15:18:52.551982    9665 logs.go:276] 1 containers: [a802e211e244]
	I0503 15:18:52.552050    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0503 15:18:52.568452    9665 logs.go:276] 2 containers: [58d277afe448 c33b5f027877]
	I0503 15:18:52.568530    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0503 15:18:52.580592    9665 logs.go:276] 1 containers: [d906735280f1]
	I0503 15:18:52.580688    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0503 15:18:52.592386    9665 logs.go:276] 2 containers: [794f5cf7e82f 52384ec84857]
	I0503 15:18:52.592463    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0503 15:18:52.603430    9665 logs.go:276] 0 containers: []
	W0503 15:18:52.603442    9665 logs.go:278] No container was found matching "kindnet"
	I0503 15:18:52.603505    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0503 15:18:52.614861    9665 logs.go:276] 0 containers: []
	W0503 15:18:52.614874    9665 logs.go:278] No container was found matching "storage-provisioner"
	I0503 15:18:52.614883    9665 logs.go:123] Gathering logs for coredns [a802e211e244] ...
	I0503 15:18:52.614889    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a802e211e244"
	I0503 15:18:52.628319    9665 logs.go:123] Gathering logs for kube-controller-manager [794f5cf7e82f] ...
	I0503 15:18:52.628333    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 794f5cf7e82f"
	I0503 15:18:52.647819    9665 logs.go:123] Gathering logs for Docker ...
	I0503 15:18:52.647832    9665 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0503 15:18:52.673144    9665 logs.go:123] Gathering logs for container status ...
	I0503 15:18:52.673153    9665 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0503 15:18:52.685997    9665 logs.go:123] Gathering logs for kubelet ...
	I0503 15:18:52.686011    9665 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0503 15:18:52.727320    9665 logs.go:123] Gathering logs for kube-apiserver [4630927f679e] ...
	I0503 15:18:52.727333    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4630927f679e"
	I0503 15:18:52.743390    9665 logs.go:123] Gathering logs for kube-scheduler [c33b5f027877] ...
	I0503 15:18:52.743401    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c33b5f027877"
	I0503 15:18:52.764198    9665 logs.go:123] Gathering logs for kube-scheduler [58d277afe448] ...
	I0503 15:18:52.764213    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 58d277afe448"
	I0503 15:18:52.777360    9665 logs.go:123] Gathering logs for kube-proxy [d906735280f1] ...
	I0503 15:18:52.777372    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d906735280f1"
	I0503 15:18:52.793658    9665 logs.go:123] Gathering logs for dmesg ...
	I0503 15:18:52.793668    9665 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0503 15:18:52.798237    9665 logs.go:123] Gathering logs for kube-apiserver [c58ec9465be1] ...
	I0503 15:18:52.798245    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c58ec9465be1"
	I0503 15:18:52.818825    9665 logs.go:123] Gathering logs for etcd [b1ec22c1bc96] ...
	I0503 15:18:52.818839    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1ec22c1bc96"
	I0503 15:18:52.838968    9665 logs.go:123] Gathering logs for etcd [f90094320501] ...
	I0503 15:18:52.838982    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f90094320501"
	I0503 15:18:52.862207    9665 logs.go:123] Gathering logs for describe nodes ...
	I0503 15:18:52.862221    9665 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0503 15:18:52.901097    9665 logs.go:123] Gathering logs for kube-controller-manager [52384ec84857] ...
	I0503 15:18:52.901112    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 52384ec84857"
	I0503 15:18:55.421935    9665 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0503 15:19:00.424577    9665 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0503 15:19:00.424930    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0503 15:19:00.488977    9665 logs.go:276] 2 containers: [4630927f679e c58ec9465be1]
	I0503 15:19:00.489060    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0503 15:19:00.508602    9665 logs.go:276] 2 containers: [b1ec22c1bc96 f90094320501]
	I0503 15:19:00.508675    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0503 15:19:00.519462    9665 logs.go:276] 1 containers: [a802e211e244]
	I0503 15:19:00.519525    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0503 15:19:00.529571    9665 logs.go:276] 2 containers: [58d277afe448 c33b5f027877]
	I0503 15:19:00.529644    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0503 15:19:00.540307    9665 logs.go:276] 1 containers: [d906735280f1]
	I0503 15:19:00.540371    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0503 15:19:00.551215    9665 logs.go:276] 2 containers: [794f5cf7e82f 52384ec84857]
	I0503 15:19:00.551278    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0503 15:19:00.561930    9665 logs.go:276] 0 containers: []
	W0503 15:19:00.561940    9665 logs.go:278] No container was found matching "kindnet"
	I0503 15:19:00.562001    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0503 15:19:00.572536    9665 logs.go:276] 0 containers: []
	W0503 15:19:00.572546    9665 logs.go:278] No container was found matching "storage-provisioner"
	I0503 15:19:00.572554    9665 logs.go:123] Gathering logs for kube-apiserver [c58ec9465be1] ...
	I0503 15:19:00.572559    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c58ec9465be1"
	I0503 15:19:00.598314    9665 logs.go:123] Gathering logs for Docker ...
	I0503 15:19:00.598329    9665 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0503 15:19:00.623022    9665 logs.go:123] Gathering logs for kube-controller-manager [794f5cf7e82f] ...
	I0503 15:19:00.623030    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 794f5cf7e82f"
	I0503 15:19:00.640629    9665 logs.go:123] Gathering logs for container status ...
	I0503 15:19:00.640638    9665 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0503 15:19:00.652297    9665 logs.go:123] Gathering logs for kubelet ...
	I0503 15:19:00.652307    9665 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0503 15:19:00.691730    9665 logs.go:123] Gathering logs for dmesg ...
	I0503 15:19:00.691738    9665 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0503 15:19:00.695779    9665 logs.go:123] Gathering logs for kube-apiserver [4630927f679e] ...
	I0503 15:19:00.695787    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4630927f679e"
	I0503 15:19:00.709495    9665 logs.go:123] Gathering logs for kube-scheduler [58d277afe448] ...
	I0503 15:19:00.709507    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 58d277afe448"
	I0503 15:19:00.721306    9665 logs.go:123] Gathering logs for kube-scheduler [c33b5f027877] ...
	I0503 15:19:00.721319    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c33b5f027877"
	I0503 15:19:00.734985    9665 logs.go:123] Gathering logs for kube-controller-manager [52384ec84857] ...
	I0503 15:19:00.734997    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 52384ec84857"
	I0503 15:19:00.748390    9665 logs.go:123] Gathering logs for describe nodes ...
	I0503 15:19:00.748404    9665 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0503 15:19:00.786916    9665 logs.go:123] Gathering logs for etcd [b1ec22c1bc96] ...
	I0503 15:19:00.786932    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1ec22c1bc96"
	I0503 15:19:00.801914    9665 logs.go:123] Gathering logs for etcd [f90094320501] ...
	I0503 15:19:00.801928    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f90094320501"
	I0503 15:19:00.824194    9665 logs.go:123] Gathering logs for coredns [a802e211e244] ...
	I0503 15:19:00.824205    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a802e211e244"
	I0503 15:19:00.835694    9665 logs.go:123] Gathering logs for kube-proxy [d906735280f1] ...
	I0503 15:19:00.835707    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d906735280f1"
	I0503 15:19:03.353798    9665 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0503 15:19:08.356284    9665 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0503 15:19:08.356772    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0503 15:19:08.396751    9665 logs.go:276] 2 containers: [4630927f679e c58ec9465be1]
	I0503 15:19:08.396892    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0503 15:19:08.419282    9665 logs.go:276] 2 containers: [b1ec22c1bc96 f90094320501]
	I0503 15:19:08.419393    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0503 15:19:08.434744    9665 logs.go:276] 1 containers: [a802e211e244]
	I0503 15:19:08.434816    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0503 15:19:08.447991    9665 logs.go:276] 2 containers: [58d277afe448 c33b5f027877]
	I0503 15:19:08.448061    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0503 15:19:08.458631    9665 logs.go:276] 1 containers: [d906735280f1]
	I0503 15:19:08.458691    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0503 15:19:08.469026    9665 logs.go:276] 2 containers: [794f5cf7e82f 52384ec84857]
	I0503 15:19:08.469097    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0503 15:19:08.479431    9665 logs.go:276] 0 containers: []
	W0503 15:19:08.479441    9665 logs.go:278] No container was found matching "kindnet"
	I0503 15:19:08.479498    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0503 15:19:08.489524    9665 logs.go:276] 0 containers: []
	W0503 15:19:08.489535    9665 logs.go:278] No container was found matching "storage-provisioner"
	I0503 15:19:08.489543    9665 logs.go:123] Gathering logs for etcd [b1ec22c1bc96] ...
	I0503 15:19:08.489548    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1ec22c1bc96"
	I0503 15:19:08.503269    9665 logs.go:123] Gathering logs for kubelet ...
	I0503 15:19:08.503279    9665 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0503 15:19:08.542319    9665 logs.go:123] Gathering logs for dmesg ...
	I0503 15:19:08.542326    9665 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0503 15:19:08.546440    9665 logs.go:123] Gathering logs for kube-controller-manager [52384ec84857] ...
	I0503 15:19:08.546448    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 52384ec84857"
	I0503 15:19:08.561055    9665 logs.go:123] Gathering logs for coredns [a802e211e244] ...
	I0503 15:19:08.561067    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a802e211e244"
	I0503 15:19:08.572857    9665 logs.go:123] Gathering logs for kube-scheduler [c33b5f027877] ...
	I0503 15:19:08.572869    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c33b5f027877"
	I0503 15:19:08.586977    9665 logs.go:123] Gathering logs for Docker ...
	I0503 15:19:08.586987    9665 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0503 15:19:08.610725    9665 logs.go:123] Gathering logs for container status ...
	I0503 15:19:08.610731    9665 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0503 15:19:08.622453    9665 logs.go:123] Gathering logs for describe nodes ...
	I0503 15:19:08.622464    9665 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0503 15:19:08.658106    9665 logs.go:123] Gathering logs for kube-apiserver [c58ec9465be1] ...
	I0503 15:19:08.658115    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c58ec9465be1"
	I0503 15:19:08.683271    9665 logs.go:123] Gathering logs for kube-scheduler [58d277afe448] ...
	I0503 15:19:08.683283    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 58d277afe448"
	I0503 15:19:08.695229    9665 logs.go:123] Gathering logs for kube-proxy [d906735280f1] ...
	I0503 15:19:08.695244    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d906735280f1"
	I0503 15:19:08.706446    9665 logs.go:123] Gathering logs for kube-controller-manager [794f5cf7e82f] ...
	I0503 15:19:08.706460    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 794f5cf7e82f"
	I0503 15:19:08.723461    9665 logs.go:123] Gathering logs for kube-apiserver [4630927f679e] ...
	I0503 15:19:08.723474    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4630927f679e"
	I0503 15:19:08.737157    9665 logs.go:123] Gathering logs for etcd [f90094320501] ...
	I0503 15:19:08.737183    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f90094320501"
	I0503 15:19:11.256734    9665 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0503 15:19:16.259010    9665 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": dial tcp 10.0.2.15:8443: i/o timeout (Client.Timeout exceeded while awaiting headers)
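
One detail changes at 15:19:16: every earlier attempt failed with "context deadline exceeded" (the request got far enough to be waiting on a response when the 5-second client budget ran out), while this one fails at the dial step with "i/o timeout", which suggests the TCP connect to 10.0.2.15:8443 itself did not complete within the budget. In Go the two shapes can be told apart with standard-library error types alone; a small sketch, with only the probe URL taken from the log and everything else assumed.

    package main

    import (
        "crypto/tls"
        "errors"
        "fmt"
        "net"
        "net/http"
        "net/url"
        "time"
    )

    // classify separates the two failure shapes seen in this log: a dial-level
    // timeout (no TCP connection at all) from a client-deadline timeout while
    // awaiting headers. Dial errors are checked first, since they also report
    // Timeout() == true through the wrapping *url.Error.
    func classify(err error) string {
        var opErr *net.OpError
        if errors.As(err, &opErr) && opErr.Op == "dial" {
            return "dial timeout: nothing accepting connections on the port"
        }
        var uerr *url.Error
        if errors.As(err, &uerr) && uerr.Timeout() {
            return "client deadline exceeded while awaiting headers"
        }
        return err.Error()
    }

    func main() {
        client := &http.Client{
            Timeout:   5 * time.Second,
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        if _, err := client.Get("https://10.0.2.15:8443/healthz"); err != nil {
            fmt.Println(classify(err))
        }
    }
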
	I0503 15:19:16.259117    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0503 15:19:16.270412    9665 logs.go:276] 2 containers: [4630927f679e c58ec9465be1]
	I0503 15:19:16.270484    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0503 15:19:16.281265    9665 logs.go:276] 2 containers: [b1ec22c1bc96 f90094320501]
	I0503 15:19:16.281348    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0503 15:19:16.292256    9665 logs.go:276] 1 containers: [a802e211e244]
	I0503 15:19:16.292322    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0503 15:19:16.303226    9665 logs.go:276] 2 containers: [58d277afe448 c33b5f027877]
	I0503 15:19:16.303287    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0503 15:19:16.313715    9665 logs.go:276] 1 containers: [d906735280f1]
	I0503 15:19:16.313800    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0503 15:19:16.324228    9665 logs.go:276] 2 containers: [794f5cf7e82f 52384ec84857]
	I0503 15:19:16.324293    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0503 15:19:16.334525    9665 logs.go:276] 0 containers: []
	W0503 15:19:16.334536    9665 logs.go:278] No container was found matching "kindnet"
	I0503 15:19:16.334584    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0503 15:19:16.345488    9665 logs.go:276] 0 containers: []
	W0503 15:19:16.345497    9665 logs.go:278] No container was found matching "storage-provisioner"
	I0503 15:19:16.345504    9665 logs.go:123] Gathering logs for etcd [f90094320501] ...
	I0503 15:19:16.345512    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f90094320501"
	I0503 15:19:16.363074    9665 logs.go:123] Gathering logs for kube-scheduler [58d277afe448] ...
	I0503 15:19:16.363085    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 58d277afe448"
	I0503 15:19:16.374732    9665 logs.go:123] Gathering logs for kube-proxy [d906735280f1] ...
	I0503 15:19:16.374742    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d906735280f1"
	I0503 15:19:16.386969    9665 logs.go:123] Gathering logs for kube-controller-manager [794f5cf7e82f] ...
	I0503 15:19:16.386979    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 794f5cf7e82f"
	I0503 15:19:16.404472    9665 logs.go:123] Gathering logs for kube-apiserver [4630927f679e] ...
	I0503 15:19:16.404482    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4630927f679e"
	I0503 15:19:16.419342    9665 logs.go:123] Gathering logs for kube-controller-manager [52384ec84857] ...
	I0503 15:19:16.419353    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 52384ec84857"
	I0503 15:19:16.432530    9665 logs.go:123] Gathering logs for kubelet ...
	I0503 15:19:16.432540    9665 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0503 15:19:16.471986    9665 logs.go:123] Gathering logs for kube-apiserver [c58ec9465be1] ...
	I0503 15:19:16.471994    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c58ec9465be1"
	I0503 15:19:16.498809    9665 logs.go:123] Gathering logs for etcd [b1ec22c1bc96] ...
	I0503 15:19:16.498819    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1ec22c1bc96"
	I0503 15:19:16.517804    9665 logs.go:123] Gathering logs for coredns [a802e211e244] ...
	I0503 15:19:16.517814    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a802e211e244"
	I0503 15:19:16.529760    9665 logs.go:123] Gathering logs for kube-scheduler [c33b5f027877] ...
	I0503 15:19:16.529772    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c33b5f027877"
	I0503 15:19:16.544593    9665 logs.go:123] Gathering logs for dmesg ...
	I0503 15:19:16.544603    9665 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0503 15:19:16.549033    9665 logs.go:123] Gathering logs for describe nodes ...
	I0503 15:19:16.549040    9665 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0503 15:19:16.587149    9665 logs.go:123] Gathering logs for Docker ...
	I0503 15:19:16.587161    9665 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0503 15:19:16.610545    9665 logs.go:123] Gathering logs for container status ...
	I0503 15:19:16.610553    9665 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0503 15:19:19.125908    9665 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0503 15:19:24.128191    9665 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0503 15:19:24.128558    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0503 15:19:24.161825    9665 logs.go:276] 2 containers: [4630927f679e c58ec9465be1]
	I0503 15:19:24.161950    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0503 15:19:24.180772    9665 logs.go:276] 2 containers: [b1ec22c1bc96 f90094320501]
	I0503 15:19:24.180876    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0503 15:19:24.195053    9665 logs.go:276] 1 containers: [a802e211e244]
	I0503 15:19:24.195124    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0503 15:19:24.207132    9665 logs.go:276] 2 containers: [58d277afe448 c33b5f027877]
	I0503 15:19:24.207194    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0503 15:19:24.219358    9665 logs.go:276] 1 containers: [d906735280f1]
	I0503 15:19:24.219422    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0503 15:19:24.234130    9665 logs.go:276] 2 containers: [794f5cf7e82f 52384ec84857]
	I0503 15:19:24.234194    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0503 15:19:24.244244    9665 logs.go:276] 0 containers: []
	W0503 15:19:24.244256    9665 logs.go:278] No container was found matching "kindnet"
	I0503 15:19:24.244303    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0503 15:19:24.254496    9665 logs.go:276] 0 containers: []
	W0503 15:19:24.254508    9665 logs.go:278] No container was found matching "storage-provisioner"
	I0503 15:19:24.254516    9665 logs.go:123] Gathering logs for etcd [f90094320501] ...
	I0503 15:19:24.254521    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f90094320501"
	I0503 15:19:24.274500    9665 logs.go:123] Gathering logs for kube-controller-manager [794f5cf7e82f] ...
	I0503 15:19:24.274512    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 794f5cf7e82f"
	I0503 15:19:24.291606    9665 logs.go:123] Gathering logs for describe nodes ...
	I0503 15:19:24.291618    9665 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0503 15:19:24.326773    9665 logs.go:123] Gathering logs for kube-scheduler [58d277afe448] ...
	I0503 15:19:24.326788    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 58d277afe448"
	I0503 15:19:24.338845    9665 logs.go:123] Gathering logs for Docker ...
	I0503 15:19:24.338857    9665 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0503 15:19:24.362320    9665 logs.go:123] Gathering logs for container status ...
	I0503 15:19:24.362329    9665 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0503 15:19:24.373572    9665 logs.go:123] Gathering logs for kubelet ...
	I0503 15:19:24.373583    9665 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0503 15:19:24.410671    9665 logs.go:123] Gathering logs for kube-proxy [d906735280f1] ...
	I0503 15:19:24.410678    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d906735280f1"
	I0503 15:19:24.422309    9665 logs.go:123] Gathering logs for kube-controller-manager [52384ec84857] ...
	I0503 15:19:24.422319    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 52384ec84857"
	I0503 15:19:24.435814    9665 logs.go:123] Gathering logs for dmesg ...
	I0503 15:19:24.435829    9665 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0503 15:19:24.440721    9665 logs.go:123] Gathering logs for kube-apiserver [4630927f679e] ...
	I0503 15:19:24.440729    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4630927f679e"
	I0503 15:19:24.454655    9665 logs.go:123] Gathering logs for kube-apiserver [c58ec9465be1] ...
	I0503 15:19:24.454667    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c58ec9465be1"
	I0503 15:19:24.473249    9665 logs.go:123] Gathering logs for etcd [b1ec22c1bc96] ...
	I0503 15:19:24.473259    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1ec22c1bc96"
	I0503 15:19:24.486586    9665 logs.go:123] Gathering logs for coredns [a802e211e244] ...
	I0503 15:19:24.486596    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a802e211e244"
	I0503 15:19:24.501596    9665 logs.go:123] Gathering logs for kube-scheduler [c33b5f027877] ...
	I0503 15:19:24.501606    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c33b5f027877"
	I0503 15:19:27.016698    9665 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0503 15:19:32.018898    9665 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0503 15:19:32.019268    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0503 15:19:32.054778    9665 logs.go:276] 2 containers: [4630927f679e c58ec9465be1]
	I0503 15:19:32.054919    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0503 15:19:32.076797    9665 logs.go:276] 2 containers: [b1ec22c1bc96 f90094320501]
	I0503 15:19:32.076921    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0503 15:19:32.091729    9665 logs.go:276] 1 containers: [a802e211e244]
	I0503 15:19:32.091809    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0503 15:19:32.104030    9665 logs.go:276] 2 containers: [58d277afe448 c33b5f027877]
	I0503 15:19:32.104098    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0503 15:19:32.114973    9665 logs.go:276] 1 containers: [d906735280f1]
	I0503 15:19:32.115035    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0503 15:19:32.126127    9665 logs.go:276] 2 containers: [794f5cf7e82f 52384ec84857]
	I0503 15:19:32.126190    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0503 15:19:32.137220    9665 logs.go:276] 0 containers: []
	W0503 15:19:32.137233    9665 logs.go:278] No container was found matching "kindnet"
	I0503 15:19:32.137295    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0503 15:19:32.147590    9665 logs.go:276] 0 containers: []
	W0503 15:19:32.147602    9665 logs.go:278] No container was found matching "storage-provisioner"
	I0503 15:19:32.147612    9665 logs.go:123] Gathering logs for dmesg ...
	I0503 15:19:32.147618    9665 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0503 15:19:32.152059    9665 logs.go:123] Gathering logs for kube-apiserver [c58ec9465be1] ...
	I0503 15:19:32.152067    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c58ec9465be1"
	I0503 15:19:32.171941    9665 logs.go:123] Gathering logs for kube-proxy [d906735280f1] ...
	I0503 15:19:32.171951    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d906735280f1"
	I0503 15:19:32.184620    9665 logs.go:123] Gathering logs for container status ...
	I0503 15:19:32.184630    9665 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0503 15:19:32.196191    9665 logs.go:123] Gathering logs for kube-scheduler [c33b5f027877] ...
	I0503 15:19:32.196204    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c33b5f027877"
	I0503 15:19:32.214845    9665 logs.go:123] Gathering logs for kube-controller-manager [52384ec84857] ...
	I0503 15:19:32.214858    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 52384ec84857"
	I0503 15:19:32.228493    9665 logs.go:123] Gathering logs for Docker ...
	I0503 15:19:32.228504    9665 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0503 15:19:32.251383    9665 logs.go:123] Gathering logs for kubelet ...
	I0503 15:19:32.251390    9665 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0503 15:19:32.289030    9665 logs.go:123] Gathering logs for describe nodes ...
	I0503 15:19:32.289036    9665 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0503 15:19:32.322850    9665 logs.go:123] Gathering logs for kube-apiserver [4630927f679e] ...
	I0503 15:19:32.322860    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4630927f679e"
	I0503 15:19:32.337341    9665 logs.go:123] Gathering logs for kube-controller-manager [794f5cf7e82f] ...
	I0503 15:19:32.337353    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 794f5cf7e82f"
	I0503 15:19:32.354778    9665 logs.go:123] Gathering logs for etcd [b1ec22c1bc96] ...
	I0503 15:19:32.354786    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1ec22c1bc96"
	I0503 15:19:32.368820    9665 logs.go:123] Gathering logs for etcd [f90094320501] ...
	I0503 15:19:32.368830    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f90094320501"
	I0503 15:19:32.386188    9665 logs.go:123] Gathering logs for coredns [a802e211e244] ...
	I0503 15:19:32.386200    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a802e211e244"
	I0503 15:19:32.398047    9665 logs.go:123] Gathering logs for kube-scheduler [58d277afe448] ...
	I0503 15:19:32.398059    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 58d277afe448"
	I0503 15:19:34.911451    9665 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0503 15:19:39.913531    9665 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0503 15:19:39.913638    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0503 15:19:39.927180    9665 logs.go:276] 2 containers: [4630927f679e c58ec9465be1]
	I0503 15:19:39.927258    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0503 15:19:39.938423    9665 logs.go:276] 2 containers: [b1ec22c1bc96 f90094320501]
	I0503 15:19:39.938493    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0503 15:19:39.949923    9665 logs.go:276] 1 containers: [a802e211e244]
	I0503 15:19:39.949995    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0503 15:19:39.960581    9665 logs.go:276] 2 containers: [58d277afe448 c33b5f027877]
	I0503 15:19:39.960659    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0503 15:19:39.971208    9665 logs.go:276] 1 containers: [d906735280f1]
	I0503 15:19:39.971276    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0503 15:19:39.983419    9665 logs.go:276] 2 containers: [794f5cf7e82f 52384ec84857]
	I0503 15:19:39.983484    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0503 15:19:39.993806    9665 logs.go:276] 0 containers: []
	W0503 15:19:39.993818    9665 logs.go:278] No container was found matching "kindnet"
	I0503 15:19:39.993879    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0503 15:19:40.004248    9665 logs.go:276] 0 containers: []
	W0503 15:19:40.004260    9665 logs.go:278] No container was found matching "storage-provisioner"
	I0503 15:19:40.004267    9665 logs.go:123] Gathering logs for coredns [a802e211e244] ...
	I0503 15:19:40.004272    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a802e211e244"
	I0503 15:19:40.017508    9665 logs.go:123] Gathering logs for kube-proxy [d906735280f1] ...
	I0503 15:19:40.017521    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d906735280f1"
	I0503 15:19:40.028954    9665 logs.go:123] Gathering logs for etcd [b1ec22c1bc96] ...
	I0503 15:19:40.028965    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1ec22c1bc96"
	I0503 15:19:40.043637    9665 logs.go:123] Gathering logs for etcd [f90094320501] ...
	I0503 15:19:40.043648    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f90094320501"
	I0503 15:19:40.061011    9665 logs.go:123] Gathering logs for kube-apiserver [4630927f679e] ...
	I0503 15:19:40.061021    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4630927f679e"
	I0503 15:19:40.078962    9665 logs.go:123] Gathering logs for kube-scheduler [c33b5f027877] ...
	I0503 15:19:40.078973    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c33b5f027877"
	I0503 15:19:40.093573    9665 logs.go:123] Gathering logs for dmesg ...
	I0503 15:19:40.093588    9665 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0503 15:19:40.097784    9665 logs.go:123] Gathering logs for kube-controller-manager [52384ec84857] ...
	I0503 15:19:40.097791    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 52384ec84857"
	I0503 15:19:40.111024    9665 logs.go:123] Gathering logs for kube-apiserver [c58ec9465be1] ...
	I0503 15:19:40.111039    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c58ec9465be1"
	I0503 15:19:40.129802    9665 logs.go:123] Gathering logs for kube-scheduler [58d277afe448] ...
	I0503 15:19:40.129815    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 58d277afe448"
	I0503 15:19:40.141959    9665 logs.go:123] Gathering logs for kube-controller-manager [794f5cf7e82f] ...
	I0503 15:19:40.141970    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 794f5cf7e82f"
	I0503 15:19:40.159246    9665 logs.go:123] Gathering logs for Docker ...
	I0503 15:19:40.159256    9665 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0503 15:19:40.182902    9665 logs.go:123] Gathering logs for container status ...
	I0503 15:19:40.182909    9665 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0503 15:19:40.194940    9665 logs.go:123] Gathering logs for kubelet ...
	I0503 15:19:40.194950    9665 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0503 15:19:40.235333    9665 logs.go:123] Gathering logs for describe nodes ...
	I0503 15:19:40.235344    9665 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0503 15:19:42.771775    9665 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0503 15:19:47.774413    9665 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0503 15:19:47.774808    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0503 15:19:47.811347    9665 logs.go:276] 2 containers: [4630927f679e c58ec9465be1]
	I0503 15:19:47.811485    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0503 15:19:47.833051    9665 logs.go:276] 2 containers: [b1ec22c1bc96 f90094320501]
	I0503 15:19:47.833165    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0503 15:19:47.848399    9665 logs.go:276] 1 containers: [a802e211e244]
	I0503 15:19:47.848476    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0503 15:19:47.861056    9665 logs.go:276] 2 containers: [58d277afe448 c33b5f027877]
	I0503 15:19:47.861131    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0503 15:19:47.872366    9665 logs.go:276] 1 containers: [d906735280f1]
	I0503 15:19:47.872434    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0503 15:19:47.883035    9665 logs.go:276] 2 containers: [794f5cf7e82f 52384ec84857]
	I0503 15:19:47.883104    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0503 15:19:47.893551    9665 logs.go:276] 0 containers: []
	W0503 15:19:47.893562    9665 logs.go:278] No container was found matching "kindnet"
	I0503 15:19:47.893622    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0503 15:19:47.905408    9665 logs.go:276] 0 containers: []
	W0503 15:19:47.905420    9665 logs.go:278] No container was found matching "storage-provisioner"
	I0503 15:19:47.905428    9665 logs.go:123] Gathering logs for kube-proxy [d906735280f1] ...
	I0503 15:19:47.905433    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d906735280f1"
	I0503 15:19:47.919060    9665 logs.go:123] Gathering logs for container status ...
	I0503 15:19:47.919074    9665 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0503 15:19:47.931227    9665 logs.go:123] Gathering logs for kube-apiserver [c58ec9465be1] ...
	I0503 15:19:47.931243    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c58ec9465be1"
	I0503 15:19:47.954565    9665 logs.go:123] Gathering logs for etcd [b1ec22c1bc96] ...
	I0503 15:19:47.954576    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1ec22c1bc96"
	I0503 15:19:47.969760    9665 logs.go:123] Gathering logs for etcd [f90094320501] ...
	I0503 15:19:47.969771    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f90094320501"
	I0503 15:19:47.991765    9665 logs.go:123] Gathering logs for Docker ...
	I0503 15:19:47.991777    9665 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0503 15:19:48.015468    9665 logs.go:123] Gathering logs for describe nodes ...
	I0503 15:19:48.015487    9665 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0503 15:19:48.050716    9665 logs.go:123] Gathering logs for dmesg ...
	I0503 15:19:48.050728    9665 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0503 15:19:48.055122    9665 logs.go:123] Gathering logs for kube-apiserver [4630927f679e] ...
	I0503 15:19:48.055129    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4630927f679e"
	I0503 15:19:48.069496    9665 logs.go:123] Gathering logs for coredns [a802e211e244] ...
	I0503 15:19:48.069512    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a802e211e244"
	I0503 15:19:48.081979    9665 logs.go:123] Gathering logs for kube-scheduler [c33b5f027877] ...
	I0503 15:19:48.081991    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c33b5f027877"
	I0503 15:19:48.096241    9665 logs.go:123] Gathering logs for kube-controller-manager [794f5cf7e82f] ...
	I0503 15:19:48.096252    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 794f5cf7e82f"
	I0503 15:19:48.113208    9665 logs.go:123] Gathering logs for kubelet ...
	I0503 15:19:48.113218    9665 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0503 15:19:48.151977    9665 logs.go:123] Gathering logs for kube-controller-manager [52384ec84857] ...
	I0503 15:19:48.151984    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 52384ec84857"
	I0503 15:19:48.165470    9665 logs.go:123] Gathering logs for kube-scheduler [58d277afe448] ...
	I0503 15:19:48.165479    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 58d277afe448"
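
The round of "Gathering logs" entries above condenses to a simple pattern: enumerate each control-plane component's containers by a kubelet name filter, then tail each container's logs. A rough shell equivalent of what logs.go runs here (illustrative only; the component list and the 400-line tail come from the log itself, the loop is not minikube's code):

	for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	         kube-controller-manager kindnet storage-provisioner; do
	  # The kubelet names Docker containers k8s_<component>_<pod>_..., so a
	  # name filter on the k8s_ prefix finds them even when exited.
	  ids=$(docker ps -a --filter=name=k8s_${c} --format='{{.ID}}')
	  for id in $ids; do
	    docker logs --tail 400 "$id"
	  done
	done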
	I0503 15:19:50.678534    9665 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0503 15:19:55.681069    9665 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
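
Each "Checking apiserver healthz" / "stopped" pair is a single probe that gives up after five seconds, which is why every failure lands exactly 5s after its check (15:19:50.678 → 15:19:55.681). An equivalent manual probe from inside the guest would be the following sketch (minikube itself uses a Go HTTP client with the cluster CA rather than curl; -k here simply skips certificate verification):

	# Probe the apiserver health endpoint with a 5s overall deadline.
	curl -k --max-time 5 https://10.0.2.15:8443/healthz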
	I0503 15:19:55.681279    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0503 15:19:55.700427    9665 logs.go:276] 2 containers: [4630927f679e c58ec9465be1]
	I0503 15:19:55.700515    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0503 15:19:55.714983    9665 logs.go:276] 2 containers: [b1ec22c1bc96 f90094320501]
	I0503 15:19:55.715057    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0503 15:19:55.726890    9665 logs.go:276] 1 containers: [a802e211e244]
	I0503 15:19:55.726958    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0503 15:19:55.739144    9665 logs.go:276] 2 containers: [58d277afe448 c33b5f027877]
	I0503 15:19:55.739214    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0503 15:19:55.750005    9665 logs.go:276] 1 containers: [d906735280f1]
	I0503 15:19:55.750071    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0503 15:19:55.760644    9665 logs.go:276] 2 containers: [794f5cf7e82f 52384ec84857]
	I0503 15:19:55.760713    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0503 15:19:55.771583    9665 logs.go:276] 0 containers: []
	W0503 15:19:55.771596    9665 logs.go:278] No container was found matching "kindnet"
	I0503 15:19:55.771657    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0503 15:19:55.783286    9665 logs.go:276] 0 containers: []
	W0503 15:19:55.783297    9665 logs.go:278] No container was found matching "storage-provisioner"
	I0503 15:19:55.783304    9665 logs.go:123] Gathering logs for container status ...
	I0503 15:19:55.783311    9665 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0503 15:19:55.795719    9665 logs.go:123] Gathering logs for dmesg ...
	I0503 15:19:55.795731    9665 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0503 15:19:55.800470    9665 logs.go:123] Gathering logs for kube-apiserver [c58ec9465be1] ...
	I0503 15:19:55.800477    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c58ec9465be1"
	I0503 15:19:55.821652    9665 logs.go:123] Gathering logs for etcd [f90094320501] ...
	I0503 15:19:55.821675    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f90094320501"
	I0503 15:19:55.840468    9665 logs.go:123] Gathering logs for kube-proxy [d906735280f1] ...
	I0503 15:19:55.840482    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d906735280f1"
	I0503 15:19:55.853530    9665 logs.go:123] Gathering logs for Docker ...
	I0503 15:19:55.853542    9665 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0503 15:19:55.878416    9665 logs.go:123] Gathering logs for describe nodes ...
	I0503 15:19:55.878432    9665 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0503 15:19:55.919031    9665 logs.go:123] Gathering logs for kube-apiserver [4630927f679e] ...
	I0503 15:19:55.919044    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4630927f679e"
	I0503 15:19:55.937578    9665 logs.go:123] Gathering logs for kube-scheduler [c33b5f027877] ...
	I0503 15:19:55.937594    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c33b5f027877"
	I0503 15:19:55.952900    9665 logs.go:123] Gathering logs for kube-controller-manager [794f5cf7e82f] ...
	I0503 15:19:55.952914    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 794f5cf7e82f"
	I0503 15:19:55.975858    9665 logs.go:123] Gathering logs for kubelet ...
	I0503 15:19:55.975874    9665 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0503 15:19:56.018725    9665 logs.go:123] Gathering logs for etcd [b1ec22c1bc96] ...
	I0503 15:19:56.018742    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1ec22c1bc96"
	I0503 15:19:56.038084    9665 logs.go:123] Gathering logs for kube-controller-manager [52384ec84857] ...
	I0503 15:19:56.038097    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 52384ec84857"
	I0503 15:19:56.052729    9665 logs.go:123] Gathering logs for coredns [a802e211e244] ...
	I0503 15:19:56.052742    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a802e211e244"
	I0503 15:19:56.065740    9665 logs.go:123] Gathering logs for kube-scheduler [58d277afe448] ...
	I0503 15:19:56.065756    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 58d277afe448"
	I0503 15:19:58.580959    9665 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0503 15:20:03.583540    9665 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0503 15:20:03.583605    9665 kubeadm.go:591] duration metric: took 4m3.717208333s to restartPrimaryControlPlane
	W0503 15:20:03.583663    9665 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0503 15:20:03.583688    9665 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0503 15:20:04.519818    9665 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0503 15:20:04.524874    9665 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0503 15:20:04.527650    9665 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0503 15:20:04.530781    9665 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0503 15:20:04.530787    9665 kubeadm.go:156] found existing configuration files:
	
	I0503 15:20:04.530809    9665 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51188 /etc/kubernetes/admin.conf
	I0503 15:20:04.533405    9665 kubeadm.go:162] "https://control-plane.minikube.internal:51188" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:51188 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0503 15:20:04.533428    9665 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0503 15:20:04.535969    9665 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51188 /etc/kubernetes/kubelet.conf
	I0503 15:20:04.538886    9665 kubeadm.go:162] "https://control-plane.minikube.internal:51188" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:51188 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0503 15:20:04.538907    9665 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0503 15:20:04.541540    9665 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51188 /etc/kubernetes/controller-manager.conf
	I0503 15:20:04.544154    9665 kubeadm.go:162] "https://control-plane.minikube.internal:51188" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:51188 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0503 15:20:04.544178    9665 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0503 15:20:04.547246    9665 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51188 /etc/kubernetes/scheduler.conf
	I0503 15:20:04.550193    9665 kubeadm.go:162] "https://control-plane.minikube.internal:51188" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:51188 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0503 15:20:04.550211    9665 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
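
The stale-config pass above reduces to: for each kubeconfig kubeadm manages, keep it only if it already points at the expected control-plane endpoint, and otherwise delete it so the subsequent "kubeadm init" regenerates it. Condensed (the file list and endpoint URL are taken from the log; the loop itself is illustrative):

	ep=https://control-plane.minikube.internal:51188
	for f in admin kubelet controller-manager scheduler; do
	  sudo grep -q "$ep" /etc/kubernetes/${f}.conf \
	    || sudo rm -f /etc/kubernetes/${f}.conf
	done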
	I0503 15:20:04.552801    9665 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0503 15:20:04.570537    9665 kubeadm.go:309] [init] Using Kubernetes version: v1.24.1
	I0503 15:20:04.570637    9665 kubeadm.go:309] [preflight] Running pre-flight checks
	I0503 15:20:04.619727    9665 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0503 15:20:04.619789    9665 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0503 15:20:04.619859    9665 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0503 15:20:04.673192    9665 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0503 15:20:04.676276    9665 out.go:204]   - Generating certificates and keys ...
	I0503 15:20:04.676336    9665 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0503 15:20:04.676376    9665 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0503 15:20:04.676416    9665 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0503 15:20:04.676447    9665 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0503 15:20:04.676487    9665 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0503 15:20:04.676544    9665 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0503 15:20:04.676623    9665 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0503 15:20:04.676701    9665 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0503 15:20:04.676818    9665 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0503 15:20:04.676861    9665 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0503 15:20:04.676907    9665 kubeadm.go:309] [certs] Using the existing "sa" key
	I0503 15:20:04.676942    9665 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0503 15:20:04.815428    9665 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0503 15:20:04.903023    9665 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0503 15:20:05.001530    9665 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0503 15:20:05.115989    9665 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0503 15:20:05.146688    9665 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0503 15:20:05.146782    9665 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0503 15:20:05.146818    9665 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0503 15:20:05.225836    9665 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0503 15:20:05.228586    9665 out.go:204]   - Booting up control plane ...
	I0503 15:20:05.228634    9665 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0503 15:20:05.228671    9665 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0503 15:20:05.228711    9665 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0503 15:20:05.228758    9665 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0503 15:20:05.228851    9665 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0503 15:20:09.730618    9665 kubeadm.go:309] [apiclient] All control plane components are healthy after 4.503350 seconds
	I0503 15:20:09.730755    9665 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0503 15:20:09.736987    9665 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0503 15:20:10.246507    9665 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0503 15:20:10.246626    9665 kubeadm.go:309] [mark-control-plane] Marking the node running-upgrade-916000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0503 15:20:10.754352    9665 kubeadm.go:309] [bootstrap-token] Using token: dj6sbg.a4mz0vzy2cpqg7m8
	I0503 15:20:10.757654    9665 out.go:204]   - Configuring RBAC rules ...
	I0503 15:20:10.757722    9665 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0503 15:20:10.765676    9665 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0503 15:20:10.768015    9665 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0503 15:20:10.769363    9665 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0503 15:20:10.770519    9665 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0503 15:20:10.771582    9665 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0503 15:20:10.776031    9665 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0503 15:20:10.964160    9665 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0503 15:20:11.167342    9665 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0503 15:20:11.167851    9665 kubeadm.go:309] 
	I0503 15:20:11.167883    9665 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0503 15:20:11.167906    9665 kubeadm.go:309] 
	I0503 15:20:11.167956    9665 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0503 15:20:11.167976    9665 kubeadm.go:309] 
	I0503 15:20:11.167995    9665 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0503 15:20:11.168041    9665 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0503 15:20:11.168071    9665 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0503 15:20:11.168074    9665 kubeadm.go:309] 
	I0503 15:20:11.168111    9665 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0503 15:20:11.168178    9665 kubeadm.go:309] 
	I0503 15:20:11.168235    9665 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0503 15:20:11.168241    9665 kubeadm.go:309] 
	I0503 15:20:11.168281    9665 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0503 15:20:11.168332    9665 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0503 15:20:11.168402    9665 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0503 15:20:11.168409    9665 kubeadm.go:309] 
	I0503 15:20:11.168449    9665 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0503 15:20:11.168581    9665 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0503 15:20:11.168592    9665 kubeadm.go:309] 
	I0503 15:20:11.168632    9665 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token dj6sbg.a4mz0vzy2cpqg7m8 \
	I0503 15:20:11.168684    9665 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:33737b87ad0e0d503b26dd571c4ff24ab2c323775c7952fd1688c095e7432c54 \
	I0503 15:20:11.168702    9665 kubeadm.go:309] 	--control-plane 
	I0503 15:20:11.168704    9665 kubeadm.go:309] 
	I0503 15:20:11.168750    9665 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0503 15:20:11.168753    9665 kubeadm.go:309] 
	I0503 15:20:11.168834    9665 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token dj6sbg.a4mz0vzy2cpqg7m8 \
	I0503 15:20:11.168893    9665 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:33737b87ad0e0d503b26dd571c4ff24ab2c323775c7952fd1688c095e7432c54 
	I0503 15:20:11.168965    9665 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0503 15:20:11.168973    9665 cni.go:84] Creating CNI manager for ""
	I0503 15:20:11.168981    9665 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0503 15:20:11.173406    9665 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0503 15:20:11.180358    9665 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0503 15:20:11.183361    9665 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
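
The 496-byte conflist copied above is not reproduced in the log; a minimal bridge CNI config of the same general shape might look like the sketch below. The plugin fields are standard CNI, but the exact contents, subnet, and cniVersion minikube writes are assumptions here, not taken from the log:

	sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<-'EOF'
	{
	  "cniVersion": "0.3.1",
	  "name": "bridge",
	  "plugins": [
	    {
	      "type": "bridge",
	      "bridge": "bridge",
	      "isGateway": true,
	      "ipMasq": true,
	      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
	    },
	    { "type": "portmap", "capabilities": { "portMappings": true } }
	  ]
	}
	EOF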
	I0503 15:20:11.189595    9665 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0503 15:20:11.189663    9665 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0503 15:20:11.189712    9665 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes running-upgrade-916000 minikube.k8s.io/updated_at=2024_05_03T15_20_11_0700 minikube.k8s.io/version=v1.33.0 minikube.k8s.io/commit=cc00050a34cebd4ea4e95f76540d25d17abab09a minikube.k8s.io/name=running-upgrade-916000 minikube.k8s.io/primary=true
	I0503 15:20:11.234677    9665 ops.go:34] apiserver oom_adj: -16
	I0503 15:20:11.234685    9665 kubeadm.go:1107] duration metric: took 45.060458ms to wait for elevateKubeSystemPrivileges
	W0503 15:20:11.234708    9665 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0503 15:20:11.234712    9665 kubeadm.go:393] duration metric: took 4m11.383673167s to StartCluster
	I0503 15:20:11.234721    9665 settings.go:142] acquiring lock: {Name:mkee9fdcf0e1a69d3ca7e09bf6e6cf0362ae7cc4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0503 15:20:11.234897    9665 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/18793-7269/kubeconfig
	I0503 15:20:11.235279    9665 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18793-7269/kubeconfig: {Name:mke212dafcd3f736eb33656fd60033aeff2dfcbc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0503 15:20:11.235484    9665 start.go:234] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0503 15:20:11.238318    9665 out.go:177] * Verifying Kubernetes components...
	I0503 15:20:11.235572    9665 config.go:182] Loaded profile config "running-upgrade-916000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0503 15:20:11.235562    9665 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0503 15:20:11.246322    9665 addons.go:69] Setting storage-provisioner=true in profile "running-upgrade-916000"
	I0503 15:20:11.246335    9665 addons.go:234] Setting addon storage-provisioner=true in "running-upgrade-916000"
	W0503 15:20:11.246338    9665 addons.go:243] addon storage-provisioner should already be in state true
	I0503 15:20:11.246357    9665 host.go:66] Checking if "running-upgrade-916000" exists ...
	I0503 15:20:11.246378    9665 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0503 15:20:11.246388    9665 addons.go:69] Setting default-storageclass=true in profile "running-upgrade-916000"
	I0503 15:20:11.246401    9665 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "running-upgrade-916000"
	I0503 15:20:11.247378    9665 kapi.go:59] client config for running-upgrade-916000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/18793-7269/.minikube/profiles/running-upgrade-916000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/18793-7269/.minikube/profiles/running-upgrade-916000/client.key", CAFile:"/Users/jenkins/minikube-integration/18793-7269/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x101eefcb0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0503 15:20:11.247496    9665 addons.go:234] Setting addon default-storageclass=true in "running-upgrade-916000"
	W0503 15:20:11.247501    9665 addons.go:243] addon default-storageclass should already be in state true
	I0503 15:20:11.247507    9665 host.go:66] Checking if "running-upgrade-916000" exists ...
	I0503 15:20:11.252301    9665 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0503 15:20:11.256356    9665 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0503 15:20:11.256362    9665 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0503 15:20:11.256368    9665 sshutil.go:53] new ssh client: &{IP:localhost Port:51156 SSHKeyPath:/Users/jenkins/minikube-integration/18793-7269/.minikube/machines/running-upgrade-916000/id_rsa Username:docker}
	I0503 15:20:11.256962    9665 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0503 15:20:11.256967    9665 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0503 15:20:11.256971    9665 sshutil.go:53] new ssh client: &{IP:localhost Port:51156 SSHKeyPath:/Users/jenkins/minikube-integration/18793-7269/.minikube/machines/running-upgrade-916000/id_rsa Username:docker}
	I0503 15:20:11.331726    9665 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0503 15:20:11.336751    9665 api_server.go:52] waiting for apiserver process to appear ...
	I0503 15:20:11.336798    9665 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0503 15:20:11.340960    9665 api_server.go:72] duration metric: took 105.467333ms to wait for apiserver process to appear ...
	I0503 15:20:11.340968    9665 api_server.go:88] waiting for apiserver healthz status ...
	I0503 15:20:11.340976    9665 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0503 15:20:11.364440    9665 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0503 15:20:11.364815    9665 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0503 15:20:16.343077    9665 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0503 15:20:16.343192    9665 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0503 15:20:21.343783    9665 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0503 15:20:21.343803    9665 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0503 15:20:26.344546    9665 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0503 15:20:26.344577    9665 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0503 15:20:31.345137    9665 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0503 15:20:31.345162    9665 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0503 15:20:36.345940    9665 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0503 15:20:36.345960    9665 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0503 15:20:41.346845    9665 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0503 15:20:41.346905    9665 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W0503 15:20:41.716635    9665 out.go:239] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I0503 15:20:41.721940    9665 out.go:177] * Enabled addons: storage-provisioner
	I0503 15:20:41.733810    9665 addons.go:505] duration metric: took 30.498989084s for enable addons: enabled=[storage-provisioner]
	I0503 15:20:46.348222    9665 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0503 15:20:46.348248    9665 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0503 15:20:51.349891    9665 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0503 15:20:51.349939    9665 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0503 15:20:56.352148    9665 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0503 15:20:56.352188    9665 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0503 15:21:01.354304    9665 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0503 15:21:01.354329    9665 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0503 15:21:06.354703    9665 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0503 15:21:06.354733    9665 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0503 15:21:11.356829    9665 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0503 15:21:11.357023    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0503 15:21:11.397459    9665 logs.go:276] 1 containers: [97b2e83dc539]
	I0503 15:21:11.397545    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0503 15:21:11.410662    9665 logs.go:276] 1 containers: [12854b004aa2]
	I0503 15:21:11.410738    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0503 15:21:11.426690    9665 logs.go:276] 2 containers: [c10faa2971c4 d411f2d0da0d]
	I0503 15:21:11.426773    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0503 15:21:11.437169    9665 logs.go:276] 1 containers: [ca00cca503fe]
	I0503 15:21:11.437228    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0503 15:21:11.448152    9665 logs.go:276] 1 containers: [bbac10efff1c]
	I0503 15:21:11.448227    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0503 15:21:11.458886    9665 logs.go:276] 1 containers: [6e6e24f8e828]
	I0503 15:21:11.458956    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0503 15:21:11.469005    9665 logs.go:276] 0 containers: []
	W0503 15:21:11.469016    9665 logs.go:278] No container was found matching "kindnet"
	I0503 15:21:11.469076    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0503 15:21:11.479446    9665 logs.go:276] 1 containers: [45a535211359]
	I0503 15:21:11.479461    9665 logs.go:123] Gathering logs for kube-controller-manager [6e6e24f8e828] ...
	I0503 15:21:11.479469    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e6e24f8e828"
	I0503 15:21:11.504715    9665 logs.go:123] Gathering logs for Docker ...
	I0503 15:21:11.504725    9665 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0503 15:21:11.528989    9665 logs.go:123] Gathering logs for dmesg ...
	I0503 15:21:11.528996    9665 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0503 15:21:11.533232    9665 logs.go:123] Gathering logs for describe nodes ...
	I0503 15:21:11.533241    9665 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0503 15:21:11.568341    9665 logs.go:123] Gathering logs for kube-apiserver [97b2e83dc539] ...
	I0503 15:21:11.568353    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97b2e83dc539"
	I0503 15:21:11.583029    9665 logs.go:123] Gathering logs for coredns [d411f2d0da0d] ...
	I0503 15:21:11.583041    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d411f2d0da0d"
	I0503 15:21:11.595725    9665 logs.go:123] Gathering logs for kube-scheduler [ca00cca503fe] ...
	I0503 15:21:11.595735    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca00cca503fe"
	I0503 15:21:11.610202    9665 logs.go:123] Gathering logs for kube-proxy [bbac10efff1c] ...
	I0503 15:21:11.610215    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bbac10efff1c"
	I0503 15:21:11.622598    9665 logs.go:123] Gathering logs for storage-provisioner [45a535211359] ...
	I0503 15:21:11.622611    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45a535211359"
	I0503 15:21:11.634085    9665 logs.go:123] Gathering logs for container status ...
	I0503 15:21:11.634096    9665 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0503 15:21:11.645262    9665 logs.go:123] Gathering logs for kubelet ...
	I0503 15:21:11.645272    9665 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0503 15:21:11.681649    9665 logs.go:123] Gathering logs for etcd [12854b004aa2] ...
	I0503 15:21:11.681661    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 12854b004aa2"
	I0503 15:21:11.695715    9665 logs.go:123] Gathering logs for coredns [c10faa2971c4] ...
	I0503 15:21:11.695727    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c10faa2971c4"
	I0503 15:21:14.210124    9665 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0503 15:21:19.212350    9665 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0503 15:21:19.212572    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0503 15:21:19.228431    9665 logs.go:276] 1 containers: [97b2e83dc539]
	I0503 15:21:19.228515    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0503 15:21:19.242259    9665 logs.go:276] 1 containers: [12854b004aa2]
	I0503 15:21:19.242333    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0503 15:21:19.253340    9665 logs.go:276] 2 containers: [c10faa2971c4 d411f2d0da0d]
	I0503 15:21:19.253408    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0503 15:21:19.263803    9665 logs.go:276] 1 containers: [ca00cca503fe]
	I0503 15:21:19.263874    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0503 15:21:19.274556    9665 logs.go:276] 1 containers: [bbac10efff1c]
	I0503 15:21:19.274618    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0503 15:21:19.285000    9665 logs.go:276] 1 containers: [6e6e24f8e828]
	I0503 15:21:19.285060    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0503 15:21:19.295039    9665 logs.go:276] 0 containers: []
	W0503 15:21:19.295051    9665 logs.go:278] No container was found matching "kindnet"
	I0503 15:21:19.295099    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0503 15:21:19.305577    9665 logs.go:276] 1 containers: [45a535211359]
	I0503 15:21:19.305591    9665 logs.go:123] Gathering logs for describe nodes ...
	I0503 15:21:19.305597    9665 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0503 15:21:19.344592    9665 logs.go:123] Gathering logs for kube-apiserver [97b2e83dc539] ...
	I0503 15:21:19.344604    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97b2e83dc539"
	I0503 15:21:19.359179    9665 logs.go:123] Gathering logs for coredns [c10faa2971c4] ...
	I0503 15:21:19.359192    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c10faa2971c4"
	I0503 15:21:19.370565    9665 logs.go:123] Gathering logs for kube-scheduler [ca00cca503fe] ...
	I0503 15:21:19.370579    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca00cca503fe"
	I0503 15:21:19.386358    9665 logs.go:123] Gathering logs for storage-provisioner [45a535211359] ...
	I0503 15:21:19.386370    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45a535211359"
	I0503 15:21:19.398013    9665 logs.go:123] Gathering logs for kubelet ...
	I0503 15:21:19.398023    9665 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0503 15:21:19.434216    9665 logs.go:123] Gathering logs for dmesg ...
	I0503 15:21:19.434229    9665 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0503 15:21:19.439068    9665 logs.go:123] Gathering logs for kube-proxy [bbac10efff1c] ...
	I0503 15:21:19.439074    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bbac10efff1c"
	I0503 15:21:19.451008    9665 logs.go:123] Gathering logs for kube-controller-manager [6e6e24f8e828] ...
	I0503 15:21:19.451022    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e6e24f8e828"
	I0503 15:21:19.468131    9665 logs.go:123] Gathering logs for Docker ...
	I0503 15:21:19.468141    9665 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0503 15:21:19.491736    9665 logs.go:123] Gathering logs for container status ...
	I0503 15:21:19.491745    9665 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0503 15:21:19.503119    9665 logs.go:123] Gathering logs for etcd [12854b004aa2] ...
	I0503 15:21:19.503131    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 12854b004aa2"
	I0503 15:21:19.516877    9665 logs.go:123] Gathering logs for coredns [d411f2d0da0d] ...
	I0503 15:21:19.516887    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d411f2d0da0d"
	I0503 15:21:22.030764    9665 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0503 15:21:27.033168    9665 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0503 15:21:27.033543    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0503 15:21:27.071785    9665 logs.go:276] 1 containers: [97b2e83dc539]
	I0503 15:21:27.071961    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0503 15:21:27.090129    9665 logs.go:276] 1 containers: [12854b004aa2]
	I0503 15:21:27.090222    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0503 15:21:27.103498    9665 logs.go:276] 2 containers: [c10faa2971c4 d411f2d0da0d]
	I0503 15:21:27.103571    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0503 15:21:27.115263    9665 logs.go:276] 1 containers: [ca00cca503fe]
	I0503 15:21:27.115338    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0503 15:21:27.125899    9665 logs.go:276] 1 containers: [bbac10efff1c]
	I0503 15:21:27.125973    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0503 15:21:27.136364    9665 logs.go:276] 1 containers: [6e6e24f8e828]
	I0503 15:21:27.136438    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0503 15:21:27.150511    9665 logs.go:276] 0 containers: []
	W0503 15:21:27.150522    9665 logs.go:278] No container was found matching "kindnet"
	I0503 15:21:27.150574    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0503 15:21:27.160779    9665 logs.go:276] 1 containers: [45a535211359]
	I0503 15:21:27.160792    9665 logs.go:123] Gathering logs for Docker ...
	I0503 15:21:27.160797    9665 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0503 15:21:27.184965    9665 logs.go:123] Gathering logs for dmesg ...
	I0503 15:21:27.184972    9665 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0503 15:21:27.189458    9665 logs.go:123] Gathering logs for kube-apiserver [97b2e83dc539] ...
	I0503 15:21:27.189465    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97b2e83dc539"
	I0503 15:21:27.203747    9665 logs.go:123] Gathering logs for coredns [c10faa2971c4] ...
	I0503 15:21:27.203760    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c10faa2971c4"
	I0503 15:21:27.217179    9665 logs.go:123] Gathering logs for coredns [d411f2d0da0d] ...
	I0503 15:21:27.217193    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d411f2d0da0d"
	I0503 15:21:27.228568    9665 logs.go:123] Gathering logs for kube-controller-manager [6e6e24f8e828] ...
	I0503 15:21:27.228582    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e6e24f8e828"
	I0503 15:21:27.246313    9665 logs.go:123] Gathering logs for storage-provisioner [45a535211359] ...
	I0503 15:21:27.246324    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45a535211359"
	I0503 15:21:27.257753    9665 logs.go:123] Gathering logs for kubelet ...
	I0503 15:21:27.257767    9665 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0503 15:21:27.294083    9665 logs.go:123] Gathering logs for describe nodes ...
	I0503 15:21:27.294092    9665 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0503 15:21:27.329252    9665 logs.go:123] Gathering logs for etcd [12854b004aa2] ...
	I0503 15:21:27.329264    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 12854b004aa2"
	I0503 15:21:27.347016    9665 logs.go:123] Gathering logs for kube-scheduler [ca00cca503fe] ...
	I0503 15:21:27.347028    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca00cca503fe"
	I0503 15:21:27.361550    9665 logs.go:123] Gathering logs for kube-proxy [bbac10efff1c] ...
	I0503 15:21:27.361561    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bbac10efff1c"
	I0503 15:21:27.373184    9665 logs.go:123] Gathering logs for container status ...
	I0503 15:21:27.373197    9665 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0503 15:21:29.885944    9665 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0503 15:21:34.888324    9665 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0503 15:21:34.888532    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0503 15:21:34.910874    9665 logs.go:276] 1 containers: [97b2e83dc539]
	I0503 15:21:34.910978    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0503 15:21:34.923992    9665 logs.go:276] 1 containers: [12854b004aa2]
	I0503 15:21:34.924067    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0503 15:21:34.935732    9665 logs.go:276] 2 containers: [c10faa2971c4 d411f2d0da0d]
	I0503 15:21:34.935796    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0503 15:21:34.946416    9665 logs.go:276] 1 containers: [ca00cca503fe]
	I0503 15:21:34.946480    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0503 15:21:34.956381    9665 logs.go:276] 1 containers: [bbac10efff1c]
	I0503 15:21:34.956452    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0503 15:21:34.966418    9665 logs.go:276] 1 containers: [6e6e24f8e828]
	I0503 15:21:34.966479    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0503 15:21:34.976713    9665 logs.go:276] 0 containers: []
	W0503 15:21:34.976724    9665 logs.go:278] No container was found matching "kindnet"
	I0503 15:21:34.976789    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0503 15:21:34.987336    9665 logs.go:276] 1 containers: [45a535211359]
	I0503 15:21:34.987350    9665 logs.go:123] Gathering logs for kube-proxy [bbac10efff1c] ...
	I0503 15:21:34.987355    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bbac10efff1c"
	I0503 15:21:34.999244    9665 logs.go:123] Gathering logs for storage-provisioner [45a535211359] ...
	I0503 15:21:34.999254    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45a535211359"
	I0503 15:21:35.017716    9665 logs.go:123] Gathering logs for Docker ...
	I0503 15:21:35.017728    9665 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0503 15:21:35.041702    9665 logs.go:123] Gathering logs for container status ...
	I0503 15:21:35.041714    9665 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0503 15:21:35.053340    9665 logs.go:123] Gathering logs for describe nodes ...
	I0503 15:21:35.053351    9665 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0503 15:21:35.088491    9665 logs.go:123] Gathering logs for kube-apiserver [97b2e83dc539] ...
	I0503 15:21:35.088503    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97b2e83dc539"
	I0503 15:21:35.103406    9665 logs.go:123] Gathering logs for etcd [12854b004aa2] ...
	I0503 15:21:35.103420    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 12854b004aa2"
	I0503 15:21:35.117263    9665 logs.go:123] Gathering logs for coredns [d411f2d0da0d] ...
	I0503 15:21:35.117274    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d411f2d0da0d"
	I0503 15:21:35.128779    9665 logs.go:123] Gathering logs for kube-controller-manager [6e6e24f8e828] ...
	I0503 15:21:35.128793    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e6e24f8e828"
	I0503 15:21:35.146003    9665 logs.go:123] Gathering logs for kubelet ...
	I0503 15:21:35.146014    9665 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0503 15:21:35.182749    9665 logs.go:123] Gathering logs for dmesg ...
	I0503 15:21:35.182759    9665 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0503 15:21:35.187297    9665 logs.go:123] Gathering logs for coredns [c10faa2971c4] ...
	I0503 15:21:35.187306    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c10faa2971c4"
	I0503 15:21:35.201195    9665 logs.go:123] Gathering logs for kube-scheduler [ca00cca503fe] ...
	I0503 15:21:35.201208    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca00cca503fe"
	I0503 15:21:37.717732    9665 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0503 15:21:42.719981    9665 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0503 15:21:42.720124    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0503 15:21:42.732977    9665 logs.go:276] 1 containers: [97b2e83dc539]
	I0503 15:21:42.733043    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0503 15:21:42.744042    9665 logs.go:276] 1 containers: [12854b004aa2]
	I0503 15:21:42.744111    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0503 15:21:42.757264    9665 logs.go:276] 2 containers: [c10faa2971c4 d411f2d0da0d]
	I0503 15:21:42.757333    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0503 15:21:42.767554    9665 logs.go:276] 1 containers: [ca00cca503fe]
	I0503 15:21:42.767621    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0503 15:21:42.778183    9665 logs.go:276] 1 containers: [bbac10efff1c]
	I0503 15:21:42.778258    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0503 15:21:42.789116    9665 logs.go:276] 1 containers: [6e6e24f8e828]
	I0503 15:21:42.789196    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0503 15:21:42.799331    9665 logs.go:276] 0 containers: []
	W0503 15:21:42.799344    9665 logs.go:278] No container was found matching "kindnet"
	I0503 15:21:42.799403    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0503 15:21:42.809209    9665 logs.go:276] 1 containers: [45a535211359]
	I0503 15:21:42.809222    9665 logs.go:123] Gathering logs for kube-proxy [bbac10efff1c] ...
	I0503 15:21:42.809228    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bbac10efff1c"
	I0503 15:21:42.820872    9665 logs.go:123] Gathering logs for kube-controller-manager [6e6e24f8e828] ...
	I0503 15:21:42.820885    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e6e24f8e828"
	I0503 15:21:42.838210    9665 logs.go:123] Gathering logs for container status ...
	I0503 15:21:42.838220    9665 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0503 15:21:42.850361    9665 logs.go:123] Gathering logs for dmesg ...
	I0503 15:21:42.850372    9665 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0503 15:21:42.855329    9665 logs.go:123] Gathering logs for kube-scheduler [ca00cca503fe] ...
	I0503 15:21:42.855337    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca00cca503fe"
	I0503 15:21:42.869814    9665 logs.go:123] Gathering logs for kube-apiserver [97b2e83dc539] ...
	I0503 15:21:42.869825    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97b2e83dc539"
	I0503 15:21:42.884140    9665 logs.go:123] Gathering logs for etcd [12854b004aa2] ...
	I0503 15:21:42.884151    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 12854b004aa2"
	I0503 15:21:42.902856    9665 logs.go:123] Gathering logs for coredns [c10faa2971c4] ...
	I0503 15:21:42.902869    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c10faa2971c4"
	I0503 15:21:42.914559    9665 logs.go:123] Gathering logs for coredns [d411f2d0da0d] ...
	I0503 15:21:42.914568    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d411f2d0da0d"
	I0503 15:21:42.926093    9665 logs.go:123] Gathering logs for storage-provisioner [45a535211359] ...
	I0503 15:21:42.926104    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45a535211359"
	I0503 15:21:42.941398    9665 logs.go:123] Gathering logs for Docker ...
	I0503 15:21:42.941409    9665 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0503 15:21:42.964727    9665 logs.go:123] Gathering logs for kubelet ...
	I0503 15:21:42.964734    9665 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0503 15:21:42.999394    9665 logs.go:123] Gathering logs for describe nodes ...
	I0503 15:21:42.999401    9665 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0503 15:21:45.541341    9665 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0503 15:21:50.544042    9665 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0503 15:21:50.544432    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0503 15:21:50.572851    9665 logs.go:276] 1 containers: [97b2e83dc539]
	I0503 15:21:50.572975    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0503 15:21:50.591152    9665 logs.go:276] 1 containers: [12854b004aa2]
	I0503 15:21:50.591231    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0503 15:21:50.604567    9665 logs.go:276] 2 containers: [c10faa2971c4 d411f2d0da0d]
	I0503 15:21:50.604636    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0503 15:21:50.615632    9665 logs.go:276] 1 containers: [ca00cca503fe]
	I0503 15:21:50.615713    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0503 15:21:50.626494    9665 logs.go:276] 1 containers: [bbac10efff1c]
	I0503 15:21:50.626561    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0503 15:21:50.636911    9665 logs.go:276] 1 containers: [6e6e24f8e828]
	I0503 15:21:50.636980    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0503 15:21:50.648581    9665 logs.go:276] 0 containers: []
	W0503 15:21:50.648593    9665 logs.go:278] No container was found matching "kindnet"
	I0503 15:21:50.648655    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0503 15:21:50.659160    9665 logs.go:276] 1 containers: [45a535211359]
	I0503 15:21:50.659174    9665 logs.go:123] Gathering logs for kube-scheduler [ca00cca503fe] ...
	I0503 15:21:50.659179    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca00cca503fe"
	I0503 15:21:50.673736    9665 logs.go:123] Gathering logs for kube-proxy [bbac10efff1c] ...
	I0503 15:21:50.673750    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bbac10efff1c"
	I0503 15:21:50.685472    9665 logs.go:123] Gathering logs for Docker ...
	I0503 15:21:50.685486    9665 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0503 15:21:50.710585    9665 logs.go:123] Gathering logs for container status ...
	I0503 15:21:50.710593    9665 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0503 15:21:50.722438    9665 logs.go:123] Gathering logs for describe nodes ...
	I0503 15:21:50.722449    9665 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0503 15:21:50.762853    9665 logs.go:123] Gathering logs for kube-apiserver [97b2e83dc539] ...
	I0503 15:21:50.762868    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97b2e83dc539"
	I0503 15:21:50.776925    9665 logs.go:123] Gathering logs for coredns [c10faa2971c4] ...
	I0503 15:21:50.776939    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c10faa2971c4"
	I0503 15:21:50.788756    9665 logs.go:123] Gathering logs for coredns [d411f2d0da0d] ...
	I0503 15:21:50.788767    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d411f2d0da0d"
	I0503 15:21:50.800606    9665 logs.go:123] Gathering logs for storage-provisioner [45a535211359] ...
	I0503 15:21:50.800620    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45a535211359"
	I0503 15:21:50.812069    9665 logs.go:123] Gathering logs for kubelet ...
	I0503 15:21:50.812079    9665 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0503 15:21:50.847978    9665 logs.go:123] Gathering logs for dmesg ...
	I0503 15:21:50.847985    9665 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0503 15:21:50.852358    9665 logs.go:123] Gathering logs for etcd [12854b004aa2] ...
	I0503 15:21:50.852364    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 12854b004aa2"
	I0503 15:21:50.865994    9665 logs.go:123] Gathering logs for kube-controller-manager [6e6e24f8e828] ...
	I0503 15:21:50.866008    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e6e24f8e828"
	I0503 15:21:53.385260    9665 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0503 15:21:58.387509    9665 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0503 15:21:58.387763    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0503 15:21:58.413275    9665 logs.go:276] 1 containers: [97b2e83dc539]
	I0503 15:21:58.413392    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0503 15:21:58.430175    9665 logs.go:276] 1 containers: [12854b004aa2]
	I0503 15:21:58.430264    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0503 15:21:58.443702    9665 logs.go:276] 2 containers: [c10faa2971c4 d411f2d0da0d]
	I0503 15:21:58.443770    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0503 15:21:58.454989    9665 logs.go:276] 1 containers: [ca00cca503fe]
	I0503 15:21:58.455059    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0503 15:21:58.465453    9665 logs.go:276] 1 containers: [bbac10efff1c]
	I0503 15:21:58.465519    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0503 15:21:58.476766    9665 logs.go:276] 1 containers: [6e6e24f8e828]
	I0503 15:21:58.476830    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0503 15:21:58.487715    9665 logs.go:276] 0 containers: []
	W0503 15:21:58.487727    9665 logs.go:278] No container was found matching "kindnet"
	I0503 15:21:58.487781    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0503 15:21:58.498303    9665 logs.go:276] 1 containers: [45a535211359]
	I0503 15:21:58.498318    9665 logs.go:123] Gathering logs for dmesg ...
	I0503 15:21:58.498324    9665 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0503 15:21:58.503356    9665 logs.go:123] Gathering logs for kube-apiserver [97b2e83dc539] ...
	I0503 15:21:58.503366    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97b2e83dc539"
	I0503 15:21:58.517566    9665 logs.go:123] Gathering logs for coredns [c10faa2971c4] ...
	I0503 15:21:58.517576    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c10faa2971c4"
	I0503 15:21:58.533624    9665 logs.go:123] Gathering logs for kube-proxy [bbac10efff1c] ...
	I0503 15:21:58.533637    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bbac10efff1c"
	I0503 15:21:58.545400    9665 logs.go:123] Gathering logs for kube-controller-manager [6e6e24f8e828] ...
	I0503 15:21:58.545409    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e6e24f8e828"
	I0503 15:21:58.563437    9665 logs.go:123] Gathering logs for storage-provisioner [45a535211359] ...
	I0503 15:21:58.563448    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45a535211359"
	I0503 15:21:58.575261    9665 logs.go:123] Gathering logs for container status ...
	I0503 15:21:58.575271    9665 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0503 15:21:58.586522    9665 logs.go:123] Gathering logs for kubelet ...
	I0503 15:21:58.586531    9665 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0503 15:21:58.622907    9665 logs.go:123] Gathering logs for describe nodes ...
	I0503 15:21:58.622916    9665 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0503 15:21:58.665754    9665 logs.go:123] Gathering logs for etcd [12854b004aa2] ...
	I0503 15:21:58.665766    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 12854b004aa2"
	I0503 15:21:58.680477    9665 logs.go:123] Gathering logs for coredns [d411f2d0da0d] ...
	I0503 15:21:58.680492    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d411f2d0da0d"
	I0503 15:21:58.692530    9665 logs.go:123] Gathering logs for kube-scheduler [ca00cca503fe] ...
	I0503 15:21:58.692545    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca00cca503fe"
	I0503 15:21:58.707372    9665 logs.go:123] Gathering logs for Docker ...
	I0503 15:21:58.707385    9665 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0503 15:22:01.233540    9665 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0503 15:22:06.234120    9665 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0503 15:22:06.234264    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0503 15:22:06.246377    9665 logs.go:276] 1 containers: [97b2e83dc539]
	I0503 15:22:06.246448    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0503 15:22:06.256589    9665 logs.go:276] 1 containers: [12854b004aa2]
	I0503 15:22:06.256661    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0503 15:22:06.266831    9665 logs.go:276] 2 containers: [c10faa2971c4 d411f2d0da0d]
	I0503 15:22:06.266904    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0503 15:22:06.277630    9665 logs.go:276] 1 containers: [ca00cca503fe]
	I0503 15:22:06.277698    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0503 15:22:06.289202    9665 logs.go:276] 1 containers: [bbac10efff1c]
	I0503 15:22:06.289268    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0503 15:22:06.300211    9665 logs.go:276] 1 containers: [6e6e24f8e828]
	I0503 15:22:06.300270    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0503 15:22:06.310966    9665 logs.go:276] 0 containers: []
	W0503 15:22:06.310976    9665 logs.go:278] No container was found matching "kindnet"
	I0503 15:22:06.311028    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0503 15:22:06.324637    9665 logs.go:276] 1 containers: [45a535211359]
	I0503 15:22:06.324654    9665 logs.go:123] Gathering logs for describe nodes ...
	I0503 15:22:06.324659    9665 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0503 15:22:06.362073    9665 logs.go:123] Gathering logs for kube-apiserver [97b2e83dc539] ...
	I0503 15:22:06.362087    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97b2e83dc539"
	I0503 15:22:06.377591    9665 logs.go:123] Gathering logs for etcd [12854b004aa2] ...
	I0503 15:22:06.377603    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 12854b004aa2"
	I0503 15:22:06.391165    9665 logs.go:123] Gathering logs for coredns [c10faa2971c4] ...
	I0503 15:22:06.391178    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c10faa2971c4"
	I0503 15:22:06.407110    9665 logs.go:123] Gathering logs for coredns [d411f2d0da0d] ...
	I0503 15:22:06.407125    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d411f2d0da0d"
	I0503 15:22:06.420629    9665 logs.go:123] Gathering logs for kube-scheduler [ca00cca503fe] ...
	I0503 15:22:06.420644    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca00cca503fe"
	I0503 15:22:06.434821    9665 logs.go:123] Gathering logs for kube-proxy [bbac10efff1c] ...
	I0503 15:22:06.434835    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bbac10efff1c"
	I0503 15:22:06.446141    9665 logs.go:123] Gathering logs for dmesg ...
	I0503 15:22:06.446155    9665 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0503 15:22:06.451194    9665 logs.go:123] Gathering logs for storage-provisioner [45a535211359] ...
	I0503 15:22:06.451202    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45a535211359"
	I0503 15:22:06.463096    9665 logs.go:123] Gathering logs for container status ...
	I0503 15:22:06.463109    9665 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0503 15:22:06.475102    9665 logs.go:123] Gathering logs for kube-controller-manager [6e6e24f8e828] ...
	I0503 15:22:06.475113    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e6e24f8e828"
	I0503 15:22:06.493079    9665 logs.go:123] Gathering logs for Docker ...
	I0503 15:22:06.493089    9665 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0503 15:22:06.517997    9665 logs.go:123] Gathering logs for kubelet ...
	I0503 15:22:06.518007    9665 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
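
Each failed healthz probe above triggers the same diagnostics pass: enumerate the k8s_* container for each control-plane component with docker ps, then tail that container's logs. A minimal Go sketch of that pass follows — illustrative only; the helper names are assumptions, not minikube's actual logs.go implementation, and only the shell commands mirror the log lines above.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerIDs mirrors the logged command:
//   docker ps -a --filter=name=k8s_<name> --format={{.ID}}
func containerIDs(name string) ([]string, error) {
	out, err := exec.Command("docker", "ps", "-a",
		"--filter", "name=k8s_"+name, "--format", "{{.ID}}").Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	components := []string{
		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet",
		"storage-provisioner",
	}
	for _, comp := range components {
		ids, err := containerIDs(comp)
		if err != nil || len(ids) == 0 {
			// matches the warning logged for kindnet in every cycle above
			fmt.Printf("No container was found matching %q\n", comp)
			continue
		}
		for _, id := range ids {
			// mirrors: docker logs --tail 400 <id>
			logs, _ := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
			fmt.Printf("=== %s [%s] ===\n%s", comp, id, logs)
		}
	}
}
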
	I0503 15:22:09.054926    9665 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0503 15:22:14.057121    9665 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0503 15:22:14.057238    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0503 15:22:14.072009    9665 logs.go:276] 1 containers: [97b2e83dc539]
	I0503 15:22:14.072075    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0503 15:22:14.082823    9665 logs.go:276] 1 containers: [12854b004aa2]
	I0503 15:22:14.082893    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0503 15:22:14.093434    9665 logs.go:276] 2 containers: [c10faa2971c4 d411f2d0da0d]
	I0503 15:22:14.093500    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0503 15:22:14.103722    9665 logs.go:276] 1 containers: [ca00cca503fe]
	I0503 15:22:14.103792    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0503 15:22:14.117418    9665 logs.go:276] 1 containers: [bbac10efff1c]
	I0503 15:22:14.117489    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0503 15:22:14.127529    9665 logs.go:276] 1 containers: [6e6e24f8e828]
	I0503 15:22:14.127590    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0503 15:22:14.137423    9665 logs.go:276] 0 containers: []
	W0503 15:22:14.137434    9665 logs.go:278] No container was found matching "kindnet"
	I0503 15:22:14.137489    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0503 15:22:14.147627    9665 logs.go:276] 1 containers: [45a535211359]
	I0503 15:22:14.147642    9665 logs.go:123] Gathering logs for storage-provisioner [45a535211359] ...
	I0503 15:22:14.147647    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45a535211359"
	I0503 15:22:14.158981    9665 logs.go:123] Gathering logs for kubelet ...
	I0503 15:22:14.158994    9665 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0503 15:22:14.195732    9665 logs.go:123] Gathering logs for describe nodes ...
	I0503 15:22:14.195743    9665 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0503 15:22:14.229980    9665 logs.go:123] Gathering logs for coredns [c10faa2971c4] ...
	I0503 15:22:14.229991    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c10faa2971c4"
	I0503 15:22:14.242176    9665 logs.go:123] Gathering logs for coredns [d411f2d0da0d] ...
	I0503 15:22:14.242188    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d411f2d0da0d"
	I0503 15:22:14.253674    9665 logs.go:123] Gathering logs for kube-scheduler [ca00cca503fe] ...
	I0503 15:22:14.253686    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca00cca503fe"
	I0503 15:22:14.268394    9665 logs.go:123] Gathering logs for kube-proxy [bbac10efff1c] ...
	I0503 15:22:14.268409    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bbac10efff1c"
	I0503 15:22:14.280504    9665 logs.go:123] Gathering logs for kube-controller-manager [6e6e24f8e828] ...
	I0503 15:22:14.280519    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e6e24f8e828"
	I0503 15:22:14.302235    9665 logs.go:123] Gathering logs for Docker ...
	I0503 15:22:14.302245    9665 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0503 15:22:14.325812    9665 logs.go:123] Gathering logs for dmesg ...
	I0503 15:22:14.325819    9665 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0503 15:22:14.330430    9665 logs.go:123] Gathering logs for kube-apiserver [97b2e83dc539] ...
	I0503 15:22:14.330436    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97b2e83dc539"
	I0503 15:22:14.344764    9665 logs.go:123] Gathering logs for etcd [12854b004aa2] ...
	I0503 15:22:14.344775    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 12854b004aa2"
	I0503 15:22:14.366565    9665 logs.go:123] Gathering logs for container status ...
	I0503 15:22:14.366574    9665 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0503 15:22:16.880979    9665 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0503 15:22:21.883241    9665 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0503 15:22:21.883415    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0503 15:22:21.903933    9665 logs.go:276] 1 containers: [97b2e83dc539]
	I0503 15:22:21.904043    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0503 15:22:21.918579    9665 logs.go:276] 1 containers: [12854b004aa2]
	I0503 15:22:21.918656    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0503 15:22:21.935900    9665 logs.go:276] 2 containers: [c10faa2971c4 d411f2d0da0d]
	I0503 15:22:21.935970    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0503 15:22:21.951150    9665 logs.go:276] 1 containers: [ca00cca503fe]
	I0503 15:22:21.951224    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0503 15:22:21.964038    9665 logs.go:276] 1 containers: [bbac10efff1c]
	I0503 15:22:21.964109    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0503 15:22:21.973988    9665 logs.go:276] 1 containers: [6e6e24f8e828]
	I0503 15:22:21.974055    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0503 15:22:21.983810    9665 logs.go:276] 0 containers: []
	W0503 15:22:21.983820    9665 logs.go:278] No container was found matching "kindnet"
	I0503 15:22:21.983876    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0503 15:22:21.995263    9665 logs.go:276] 1 containers: [45a535211359]
	I0503 15:22:21.995278    9665 logs.go:123] Gathering logs for describe nodes ...
	I0503 15:22:21.995283    9665 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0503 15:22:22.035033    9665 logs.go:123] Gathering logs for kube-apiserver [97b2e83dc539] ...
	I0503 15:22:22.035045    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97b2e83dc539"
	I0503 15:22:22.049804    9665 logs.go:123] Gathering logs for etcd [12854b004aa2] ...
	I0503 15:22:22.049814    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 12854b004aa2"
	I0503 15:22:22.063918    9665 logs.go:123] Gathering logs for kube-proxy [bbac10efff1c] ...
	I0503 15:22:22.063934    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bbac10efff1c"
	I0503 15:22:22.075926    9665 logs.go:123] Gathering logs for container status ...
	I0503 15:22:22.075940    9665 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0503 15:22:22.088288    9665 logs.go:123] Gathering logs for kube-controller-manager [6e6e24f8e828] ...
	I0503 15:22:22.088300    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e6e24f8e828"
	I0503 15:22:22.106000    9665 logs.go:123] Gathering logs for storage-provisioner [45a535211359] ...
	I0503 15:22:22.106014    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45a535211359"
	I0503 15:22:22.117550    9665 logs.go:123] Gathering logs for Docker ...
	I0503 15:22:22.117559    9665 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0503 15:22:22.140912    9665 logs.go:123] Gathering logs for kubelet ...
	I0503 15:22:22.140923    9665 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0503 15:22:22.175942    9665 logs.go:123] Gathering logs for dmesg ...
	I0503 15:22:22.175952    9665 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0503 15:22:22.180090    9665 logs.go:123] Gathering logs for coredns [c10faa2971c4] ...
	I0503 15:22:22.180098    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c10faa2971c4"
	I0503 15:22:22.191870    9665 logs.go:123] Gathering logs for coredns [d411f2d0da0d] ...
	I0503 15:22:22.191881    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d411f2d0da0d"
	I0503 15:22:22.203463    9665 logs.go:123] Gathering logs for kube-scheduler [ca00cca503fe] ...
	I0503 15:22:22.203476    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca00cca503fe"
	I0503 15:22:24.719327    9665 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0503 15:22:29.721569    9665 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0503 15:22:29.721815    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0503 15:22:29.748830    9665 logs.go:276] 1 containers: [97b2e83dc539]
	I0503 15:22:29.748951    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0503 15:22:29.768964    9665 logs.go:276] 1 containers: [12854b004aa2]
	I0503 15:22:29.769035    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0503 15:22:29.782027    9665 logs.go:276] 4 containers: [6b26fe9dd44c bcfa459d9a7e c10faa2971c4 d411f2d0da0d]
	I0503 15:22:29.782108    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0503 15:22:29.793918    9665 logs.go:276] 1 containers: [ca00cca503fe]
	I0503 15:22:29.793984    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0503 15:22:29.804198    9665 logs.go:276] 1 containers: [bbac10efff1c]
	I0503 15:22:29.804264    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0503 15:22:29.814730    9665 logs.go:276] 1 containers: [6e6e24f8e828]
	I0503 15:22:29.814796    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0503 15:22:29.824434    9665 logs.go:276] 0 containers: []
	W0503 15:22:29.824456    9665 logs.go:278] No container was found matching "kindnet"
	I0503 15:22:29.824505    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0503 15:22:29.834900    9665 logs.go:276] 1 containers: [45a535211359]
	I0503 15:22:29.834915    9665 logs.go:123] Gathering logs for describe nodes ...
	I0503 15:22:29.834921    9665 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0503 15:22:29.870499    9665 logs.go:123] Gathering logs for kube-proxy [bbac10efff1c] ...
	I0503 15:22:29.870509    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bbac10efff1c"
	I0503 15:22:29.882808    9665 logs.go:123] Gathering logs for kube-scheduler [ca00cca503fe] ...
	I0503 15:22:29.882819    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca00cca503fe"
	I0503 15:22:29.897257    9665 logs.go:123] Gathering logs for storage-provisioner [45a535211359] ...
	I0503 15:22:29.897269    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45a535211359"
	I0503 15:22:29.908976    9665 logs.go:123] Gathering logs for etcd [12854b004aa2] ...
	I0503 15:22:29.908988    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 12854b004aa2"
	I0503 15:22:29.923296    9665 logs.go:123] Gathering logs for coredns [6b26fe9dd44c] ...
	I0503 15:22:29.923307    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b26fe9dd44c"
	I0503 15:22:29.934756    9665 logs.go:123] Gathering logs for coredns [c10faa2971c4] ...
	I0503 15:22:29.934769    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c10faa2971c4"
	I0503 15:22:29.947196    9665 logs.go:123] Gathering logs for container status ...
	I0503 15:22:29.947206    9665 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0503 15:22:29.958580    9665 logs.go:123] Gathering logs for kubelet ...
	I0503 15:22:29.958593    9665 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0503 15:22:29.995175    9665 logs.go:123] Gathering logs for kube-apiserver [97b2e83dc539] ...
	I0503 15:22:29.995185    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97b2e83dc539"
	I0503 15:22:30.009609    9665 logs.go:123] Gathering logs for coredns [bcfa459d9a7e] ...
	I0503 15:22:30.009622    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bcfa459d9a7e"
	I0503 15:22:30.022099    9665 logs.go:123] Gathering logs for Docker ...
	I0503 15:22:30.022111    9665 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0503 15:22:30.046815    9665 logs.go:123] Gathering logs for dmesg ...
	I0503 15:22:30.046824    9665 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0503 15:22:30.051402    9665 logs.go:123] Gathering logs for coredns [d411f2d0da0d] ...
	I0503 15:22:30.051409    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d411f2d0da0d"
	I0503 15:22:30.063995    9665 logs.go:123] Gathering logs for kube-controller-manager [6e6e24f8e828] ...
	I0503 15:22:30.064008    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e6e24f8e828"
	I0503 15:22:32.583877    9665 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0503 15:22:37.586182    9665 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0503 15:22:37.586390    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0503 15:22:37.608283    9665 logs.go:276] 1 containers: [97b2e83dc539]
	I0503 15:22:37.608370    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0503 15:22:37.622321    9665 logs.go:276] 1 containers: [12854b004aa2]
	I0503 15:22:37.622392    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0503 15:22:37.634168    9665 logs.go:276] 4 containers: [6b26fe9dd44c bcfa459d9a7e c10faa2971c4 d411f2d0da0d]
	I0503 15:22:37.634235    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0503 15:22:37.644818    9665 logs.go:276] 1 containers: [ca00cca503fe]
	I0503 15:22:37.644879    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0503 15:22:37.659229    9665 logs.go:276] 1 containers: [bbac10efff1c]
	I0503 15:22:37.659293    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0503 15:22:37.669419    9665 logs.go:276] 1 containers: [6e6e24f8e828]
	I0503 15:22:37.669485    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0503 15:22:37.680088    9665 logs.go:276] 0 containers: []
	W0503 15:22:37.680099    9665 logs.go:278] No container was found matching "kindnet"
	I0503 15:22:37.680158    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0503 15:22:37.691319    9665 logs.go:276] 1 containers: [45a535211359]
	I0503 15:22:37.691335    9665 logs.go:123] Gathering logs for dmesg ...
	I0503 15:22:37.691340    9665 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0503 15:22:37.695782    9665 logs.go:123] Gathering logs for coredns [bcfa459d9a7e] ...
	I0503 15:22:37.695791    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bcfa459d9a7e"
	I0503 15:22:37.709817    9665 logs.go:123] Gathering logs for coredns [c10faa2971c4] ...
	I0503 15:22:37.709831    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c10faa2971c4"
	I0503 15:22:37.721920    9665 logs.go:123] Gathering logs for storage-provisioner [45a535211359] ...
	I0503 15:22:37.721934    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45a535211359"
	I0503 15:22:37.733356    9665 logs.go:123] Gathering logs for container status ...
	I0503 15:22:37.733366    9665 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0503 15:22:37.745828    9665 logs.go:123] Gathering logs for kubelet ...
	I0503 15:22:37.745839    9665 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0503 15:22:37.783877    9665 logs.go:123] Gathering logs for describe nodes ...
	I0503 15:22:37.783887    9665 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0503 15:22:37.822644    9665 logs.go:123] Gathering logs for kube-apiserver [97b2e83dc539] ...
	I0503 15:22:37.822657    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97b2e83dc539"
	I0503 15:22:37.836988    9665 logs.go:123] Gathering logs for coredns [6b26fe9dd44c] ...
	I0503 15:22:37.836999    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b26fe9dd44c"
	I0503 15:22:37.848463    9665 logs.go:123] Gathering logs for kube-scheduler [ca00cca503fe] ...
	I0503 15:22:37.848476    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca00cca503fe"
	I0503 15:22:37.862876    9665 logs.go:123] Gathering logs for Docker ...
	I0503 15:22:37.862889    9665 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0503 15:22:37.888143    9665 logs.go:123] Gathering logs for etcd [12854b004aa2] ...
	I0503 15:22:37.888153    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 12854b004aa2"
	I0503 15:22:37.902537    9665 logs.go:123] Gathering logs for coredns [d411f2d0da0d] ...
	I0503 15:22:37.902548    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d411f2d0da0d"
	I0503 15:22:37.914245    9665 logs.go:123] Gathering logs for kube-proxy [bbac10efff1c] ...
	I0503 15:22:37.914256    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bbac10efff1c"
	I0503 15:22:37.925650    9665 logs.go:123] Gathering logs for kube-controller-manager [6e6e24f8e828] ...
	I0503 15:22:37.925663    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e6e24f8e828"
	I0503 15:22:40.448755    9665 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0503 15:22:45.451415    9665 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0503 15:22:45.451899    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0503 15:22:45.488677    9665 logs.go:276] 1 containers: [97b2e83dc539]
	I0503 15:22:45.488813    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0503 15:22:45.518293    9665 logs.go:276] 1 containers: [12854b004aa2]
	I0503 15:22:45.518392    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0503 15:22:45.531951    9665 logs.go:276] 4 containers: [6b26fe9dd44c bcfa459d9a7e c10faa2971c4 d411f2d0da0d]
	I0503 15:22:45.532031    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0503 15:22:45.543047    9665 logs.go:276] 1 containers: [ca00cca503fe]
	I0503 15:22:45.543116    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0503 15:22:45.553920    9665 logs.go:276] 1 containers: [bbac10efff1c]
	I0503 15:22:45.553985    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0503 15:22:45.565947    9665 logs.go:276] 1 containers: [6e6e24f8e828]
	I0503 15:22:45.566015    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0503 15:22:45.576290    9665 logs.go:276] 0 containers: []
	W0503 15:22:45.576303    9665 logs.go:278] No container was found matching "kindnet"
	I0503 15:22:45.576365    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0503 15:22:45.591261    9665 logs.go:276] 1 containers: [45a535211359]
	I0503 15:22:45.591278    9665 logs.go:123] Gathering logs for coredns [bcfa459d9a7e] ...
	I0503 15:22:45.591283    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bcfa459d9a7e"
	I0503 15:22:45.603245    9665 logs.go:123] Gathering logs for coredns [d411f2d0da0d] ...
	I0503 15:22:45.603256    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d411f2d0da0d"
	I0503 15:22:45.614585    9665 logs.go:123] Gathering logs for container status ...
	I0503 15:22:45.614596    9665 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0503 15:22:45.626020    9665 logs.go:123] Gathering logs for kubelet ...
	I0503 15:22:45.626030    9665 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0503 15:22:45.661369    9665 logs.go:123] Gathering logs for kube-controller-manager [6e6e24f8e828] ...
	I0503 15:22:45.661380    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e6e24f8e828"
	I0503 15:22:45.683022    9665 logs.go:123] Gathering logs for Docker ...
	I0503 15:22:45.683032    9665 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0503 15:22:45.706247    9665 logs.go:123] Gathering logs for coredns [c10faa2971c4] ...
	I0503 15:22:45.706254    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c10faa2971c4"
	I0503 15:22:45.717682    9665 logs.go:123] Gathering logs for kube-scheduler [ca00cca503fe] ...
	I0503 15:22:45.717697    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca00cca503fe"
	I0503 15:22:45.732691    9665 logs.go:123] Gathering logs for describe nodes ...
	I0503 15:22:45.732701    9665 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0503 15:22:45.769363    9665 logs.go:123] Gathering logs for kube-apiserver [97b2e83dc539] ...
	I0503 15:22:45.769375    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97b2e83dc539"
	I0503 15:22:45.784357    9665 logs.go:123] Gathering logs for etcd [12854b004aa2] ...
	I0503 15:22:45.784369    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 12854b004aa2"
	I0503 15:22:45.803640    9665 logs.go:123] Gathering logs for coredns [6b26fe9dd44c] ...
	I0503 15:22:45.803651    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b26fe9dd44c"
	I0503 15:22:45.815077    9665 logs.go:123] Gathering logs for kube-proxy [bbac10efff1c] ...
	I0503 15:22:45.815090    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bbac10efff1c"
	I0503 15:22:45.826255    9665 logs.go:123] Gathering logs for storage-provisioner [45a535211359] ...
	I0503 15:22:45.826265    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45a535211359"
	I0503 15:22:45.838168    9665 logs.go:123] Gathering logs for dmesg ...
	I0503 15:22:45.838181    9665 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0503 15:22:48.343431    9665 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0503 15:22:53.344745    9665 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0503 15:22:53.345115    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0503 15:22:53.378200    9665 logs.go:276] 1 containers: [97b2e83dc539]
	I0503 15:22:53.378327    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0503 15:22:53.397880    9665 logs.go:276] 1 containers: [12854b004aa2]
	I0503 15:22:53.397968    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0503 15:22:53.412005    9665 logs.go:276] 4 containers: [6b26fe9dd44c bcfa459d9a7e c10faa2971c4 d411f2d0da0d]
	I0503 15:22:53.412082    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0503 15:22:53.423856    9665 logs.go:276] 1 containers: [ca00cca503fe]
	I0503 15:22:53.423917    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0503 15:22:53.434453    9665 logs.go:276] 1 containers: [bbac10efff1c]
	I0503 15:22:53.434524    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0503 15:22:53.448594    9665 logs.go:276] 1 containers: [6e6e24f8e828]
	I0503 15:22:53.448657    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0503 15:22:53.458855    9665 logs.go:276] 0 containers: []
	W0503 15:22:53.458868    9665 logs.go:278] No container was found matching "kindnet"
	I0503 15:22:53.458925    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0503 15:22:53.469156    9665 logs.go:276] 1 containers: [45a535211359]
	I0503 15:22:53.469175    9665 logs.go:123] Gathering logs for kubelet ...
	I0503 15:22:53.469181    9665 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0503 15:22:53.504298    9665 logs.go:123] Gathering logs for kube-apiserver [97b2e83dc539] ...
	I0503 15:22:53.504306    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97b2e83dc539"
	I0503 15:22:53.518206    9665 logs.go:123] Gathering logs for coredns [d411f2d0da0d] ...
	I0503 15:22:53.518217    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d411f2d0da0d"
	I0503 15:22:53.529835    9665 logs.go:123] Gathering logs for dmesg ...
	I0503 15:22:53.529846    9665 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0503 15:22:53.534575    9665 logs.go:123] Gathering logs for coredns [bcfa459d9a7e] ...
	I0503 15:22:53.534583    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bcfa459d9a7e"
	I0503 15:22:53.546267    9665 logs.go:123] Gathering logs for kube-proxy [bbac10efff1c] ...
	I0503 15:22:53.546280    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bbac10efff1c"
	I0503 15:22:53.558757    9665 logs.go:123] Gathering logs for coredns [c10faa2971c4] ...
	I0503 15:22:53.558772    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c10faa2971c4"
	I0503 15:22:53.570615    9665 logs.go:123] Gathering logs for kube-scheduler [ca00cca503fe] ...
	I0503 15:22:53.570627    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca00cca503fe"
	I0503 15:22:53.588178    9665 logs.go:123] Gathering logs for storage-provisioner [45a535211359] ...
	I0503 15:22:53.588190    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45a535211359"
	I0503 15:22:53.602024    9665 logs.go:123] Gathering logs for describe nodes ...
	I0503 15:22:53.602035    9665 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0503 15:22:53.638897    9665 logs.go:123] Gathering logs for etcd [12854b004aa2] ...
	I0503 15:22:53.638911    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 12854b004aa2"
	I0503 15:22:53.653316    9665 logs.go:123] Gathering logs for coredns [6b26fe9dd44c] ...
	I0503 15:22:53.653329    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b26fe9dd44c"
	I0503 15:22:53.664720    9665 logs.go:123] Gathering logs for kube-controller-manager [6e6e24f8e828] ...
	I0503 15:22:53.664731    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e6e24f8e828"
	I0503 15:22:53.685949    9665 logs.go:123] Gathering logs for Docker ...
	I0503 15:22:53.685959    9665 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0503 15:22:53.710276    9665 logs.go:123] Gathering logs for container status ...
	I0503 15:22:53.710283    9665 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0503 15:22:56.223877    9665 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0503 15:23:01.226029    9665 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0503 15:23:01.226143    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0503 15:23:01.244730    9665 logs.go:276] 1 containers: [97b2e83dc539]
	I0503 15:23:01.244805    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0503 15:23:01.257225    9665 logs.go:276] 1 containers: [12854b004aa2]
	I0503 15:23:01.257293    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0503 15:23:01.268608    9665 logs.go:276] 4 containers: [6b26fe9dd44c bcfa459d9a7e c10faa2971c4 d411f2d0da0d]
	I0503 15:23:01.268681    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0503 15:23:01.280089    9665 logs.go:276] 1 containers: [ca00cca503fe]
	I0503 15:23:01.280151    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0503 15:23:01.291239    9665 logs.go:276] 1 containers: [bbac10efff1c]
	I0503 15:23:01.291309    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0503 15:23:01.301956    9665 logs.go:276] 1 containers: [6e6e24f8e828]
	I0503 15:23:01.302035    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0503 15:23:01.312690    9665 logs.go:276] 0 containers: []
	W0503 15:23:01.312702    9665 logs.go:278] No container was found matching "kindnet"
	I0503 15:23:01.312750    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0503 15:23:01.324611    9665 logs.go:276] 1 containers: [45a535211359]
	I0503 15:23:01.324630    9665 logs.go:123] Gathering logs for kube-scheduler [ca00cca503fe] ...
	I0503 15:23:01.324635    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca00cca503fe"
	I0503 15:23:01.339406    9665 logs.go:123] Gathering logs for container status ...
	I0503 15:23:01.339419    9665 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0503 15:23:01.351460    9665 logs.go:123] Gathering logs for kube-apiserver [97b2e83dc539] ...
	I0503 15:23:01.351474    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97b2e83dc539"
	I0503 15:23:01.370759    9665 logs.go:123] Gathering logs for coredns [c10faa2971c4] ...
	I0503 15:23:01.370772    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c10faa2971c4"
	I0503 15:23:01.382435    9665 logs.go:123] Gathering logs for kube-proxy [bbac10efff1c] ...
	I0503 15:23:01.382448    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bbac10efff1c"
	I0503 15:23:01.397700    9665 logs.go:123] Gathering logs for kube-controller-manager [6e6e24f8e828] ...
	I0503 15:23:01.397712    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e6e24f8e828"
	I0503 15:23:01.416046    9665 logs.go:123] Gathering logs for Docker ...
	I0503 15:23:01.416056    9665 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0503 15:23:01.439535    9665 logs.go:123] Gathering logs for describe nodes ...
	I0503 15:23:01.439546    9665 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0503 15:23:01.473052    9665 logs.go:123] Gathering logs for coredns [d411f2d0da0d] ...
	I0503 15:23:01.473066    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d411f2d0da0d"
	I0503 15:23:01.484990    9665 logs.go:123] Gathering logs for storage-provisioner [45a535211359] ...
	I0503 15:23:01.485001    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45a535211359"
	I0503 15:23:01.496612    9665 logs.go:123] Gathering logs for kubelet ...
	I0503 15:23:01.496622    9665 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0503 15:23:01.531660    9665 logs.go:123] Gathering logs for dmesg ...
	I0503 15:23:01.531671    9665 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0503 15:23:01.535742    9665 logs.go:123] Gathering logs for etcd [12854b004aa2] ...
	I0503 15:23:01.535751    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 12854b004aa2"
	I0503 15:23:01.549977    9665 logs.go:123] Gathering logs for coredns [6b26fe9dd44c] ...
	I0503 15:23:01.549990    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b26fe9dd44c"
	I0503 15:23:01.562525    9665 logs.go:123] Gathering logs for coredns [bcfa459d9a7e] ...
	I0503 15:23:01.562538    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bcfa459d9a7e"
	I0503 15:23:04.075459    9665 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0503 15:23:09.078013    9665 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0503 15:23:09.078226    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0503 15:23:09.097264    9665 logs.go:276] 1 containers: [97b2e83dc539]
	I0503 15:23:09.097359    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0503 15:23:09.112064    9665 logs.go:276] 1 containers: [12854b004aa2]
	I0503 15:23:09.112139    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0503 15:23:09.124249    9665 logs.go:276] 4 containers: [6b26fe9dd44c bcfa459d9a7e c10faa2971c4 d411f2d0da0d]
	I0503 15:23:09.124316    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0503 15:23:09.134533    9665 logs.go:276] 1 containers: [ca00cca503fe]
	I0503 15:23:09.134612    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0503 15:23:09.145000    9665 logs.go:276] 1 containers: [bbac10efff1c]
	I0503 15:23:09.145071    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0503 15:23:09.160073    9665 logs.go:276] 1 containers: [6e6e24f8e828]
	I0503 15:23:09.160141    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0503 15:23:09.170780    9665 logs.go:276] 0 containers: []
	W0503 15:23:09.170790    9665 logs.go:278] No container was found matching "kindnet"
	I0503 15:23:09.170845    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0503 15:23:09.181089    9665 logs.go:276] 1 containers: [45a535211359]
	I0503 15:23:09.181104    9665 logs.go:123] Gathering logs for coredns [6b26fe9dd44c] ...
	I0503 15:23:09.181109    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b26fe9dd44c"
	I0503 15:23:09.193869    9665 logs.go:123] Gathering logs for coredns [d411f2d0da0d] ...
	I0503 15:23:09.193882    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d411f2d0da0d"
	I0503 15:23:09.205519    9665 logs.go:123] Gathering logs for kube-scheduler [ca00cca503fe] ...
	I0503 15:23:09.205530    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca00cca503fe"
	I0503 15:23:09.220439    9665 logs.go:123] Gathering logs for kube-controller-manager [6e6e24f8e828] ...
	I0503 15:23:09.220451    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e6e24f8e828"
	I0503 15:23:09.238477    9665 logs.go:123] Gathering logs for Docker ...
	I0503 15:23:09.238488    9665 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0503 15:23:09.261631    9665 logs.go:123] Gathering logs for container status ...
	I0503 15:23:09.261642    9665 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0503 15:23:09.295849    9665 logs.go:123] Gathering logs for kubelet ...
	I0503 15:23:09.295861    9665 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0503 15:23:09.332501    9665 logs.go:123] Gathering logs for kube-apiserver [97b2e83dc539] ...
	I0503 15:23:09.332510    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97b2e83dc539"
	I0503 15:23:09.348730    9665 logs.go:123] Gathering logs for etcd [12854b004aa2] ...
	I0503 15:23:09.348744    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 12854b004aa2"
	I0503 15:23:09.363292    9665 logs.go:123] Gathering logs for storage-provisioner [45a535211359] ...
	I0503 15:23:09.363305    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45a535211359"
	I0503 15:23:09.374548    9665 logs.go:123] Gathering logs for describe nodes ...
	I0503 15:23:09.374558    9665 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0503 15:23:09.411717    9665 logs.go:123] Gathering logs for coredns [bcfa459d9a7e] ...
	I0503 15:23:09.411730    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bcfa459d9a7e"
	I0503 15:23:09.431006    9665 logs.go:123] Gathering logs for kube-proxy [bbac10efff1c] ...
	I0503 15:23:09.431020    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bbac10efff1c"
	I0503 15:23:09.445988    9665 logs.go:123] Gathering logs for dmesg ...
	I0503 15:23:09.445997    9665 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0503 15:23:09.450697    9665 logs.go:123] Gathering logs for coredns [c10faa2971c4] ...
	I0503 15:23:09.450707    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c10faa2971c4"
	I0503 15:23:11.964354    9665 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0503 15:23:16.966861    9665 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0503 15:23:16.967098    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0503 15:23:16.990946    9665 logs.go:276] 1 containers: [97b2e83dc539]
	I0503 15:23:16.991048    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0503 15:23:17.006262    9665 logs.go:276] 1 containers: [12854b004aa2]
	I0503 15:23:17.006342    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0503 15:23:17.019096    9665 logs.go:276] 4 containers: [6b26fe9dd44c bcfa459d9a7e c10faa2971c4 d411f2d0da0d]
	I0503 15:23:17.019165    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0503 15:23:17.032684    9665 logs.go:276] 1 containers: [ca00cca503fe]
	I0503 15:23:17.032754    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0503 15:23:17.043494    9665 logs.go:276] 1 containers: [bbac10efff1c]
	I0503 15:23:17.043559    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0503 15:23:17.054407    9665 logs.go:276] 1 containers: [6e6e24f8e828]
	I0503 15:23:17.054473    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0503 15:23:17.065253    9665 logs.go:276] 0 containers: []
	W0503 15:23:17.065266    9665 logs.go:278] No container was found matching "kindnet"
	I0503 15:23:17.065325    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0503 15:23:17.076081    9665 logs.go:276] 1 containers: [45a535211359]
	I0503 15:23:17.076098    9665 logs.go:123] Gathering logs for kube-apiserver [97b2e83dc539] ...
	I0503 15:23:17.076104    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97b2e83dc539"
	I0503 15:23:17.098439    9665 logs.go:123] Gathering logs for etcd [12854b004aa2] ...
	I0503 15:23:17.098449    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 12854b004aa2"
	I0503 15:23:17.112051    9665 logs.go:123] Gathering logs for coredns [6b26fe9dd44c] ...
	I0503 15:23:17.112064    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b26fe9dd44c"
	I0503 15:23:17.124076    9665 logs.go:123] Gathering logs for coredns [d411f2d0da0d] ...
	I0503 15:23:17.124089    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d411f2d0da0d"
	I0503 15:23:17.135723    9665 logs.go:123] Gathering logs for kube-proxy [bbac10efff1c] ...
	I0503 15:23:17.135736    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bbac10efff1c"
	I0503 15:23:17.147742    9665 logs.go:123] Gathering logs for storage-provisioner [45a535211359] ...
	I0503 15:23:17.147755    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45a535211359"
	I0503 15:23:17.159301    9665 logs.go:123] Gathering logs for dmesg ...
	I0503 15:23:17.159315    9665 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0503 15:23:17.164129    9665 logs.go:123] Gathering logs for describe nodes ...
	I0503 15:23:17.164139    9665 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0503 15:23:17.199727    9665 logs.go:123] Gathering logs for Docker ...
	I0503 15:23:17.199742    9665 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0503 15:23:17.222895    9665 logs.go:123] Gathering logs for container status ...
	I0503 15:23:17.222904    9665 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0503 15:23:17.234106    9665 logs.go:123] Gathering logs for coredns [c10faa2971c4] ...
	I0503 15:23:17.234115    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c10faa2971c4"
	I0503 15:23:17.247783    9665 logs.go:123] Gathering logs for kube-scheduler [ca00cca503fe] ...
	I0503 15:23:17.247796    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca00cca503fe"
	I0503 15:23:17.262014    9665 logs.go:123] Gathering logs for kubelet ...
	I0503 15:23:17.262027    9665 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0503 15:23:17.296812    9665 logs.go:123] Gathering logs for kube-controller-manager [6e6e24f8e828] ...
	I0503 15:23:17.296824    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e6e24f8e828"
	I0503 15:23:17.314461    9665 logs.go:123] Gathering logs for coredns [bcfa459d9a7e] ...
	I0503 15:23:17.314471    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bcfa459d9a7e"
	I0503 15:23:19.827949    9665 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0503 15:23:24.830179    9665 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0503 15:23:24.830326    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0503 15:23:24.842909    9665 logs.go:276] 1 containers: [97b2e83dc539]
	I0503 15:23:24.842981    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0503 15:23:24.853351    9665 logs.go:276] 1 containers: [12854b004aa2]
	I0503 15:23:24.853423    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0503 15:23:24.863602    9665 logs.go:276] 4 containers: [6b26fe9dd44c bcfa459d9a7e c10faa2971c4 d411f2d0da0d]
	I0503 15:23:24.863672    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0503 15:23:24.877884    9665 logs.go:276] 1 containers: [ca00cca503fe]
	I0503 15:23:24.877949    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0503 15:23:24.888192    9665 logs.go:276] 1 containers: [bbac10efff1c]
	I0503 15:23:24.888260    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0503 15:23:24.898911    9665 logs.go:276] 1 containers: [6e6e24f8e828]
	I0503 15:23:24.898979    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0503 15:23:24.909162    9665 logs.go:276] 0 containers: []
	W0503 15:23:24.909173    9665 logs.go:278] No container was found matching "kindnet"
	I0503 15:23:24.909227    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0503 15:23:24.919342    9665 logs.go:276] 1 containers: [45a535211359]
	I0503 15:23:24.919358    9665 logs.go:123] Gathering logs for coredns [6b26fe9dd44c] ...
	I0503 15:23:24.919363    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b26fe9dd44c"
	I0503 15:23:24.931236    9665 logs.go:123] Gathering logs for coredns [bcfa459d9a7e] ...
	I0503 15:23:24.931247    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bcfa459d9a7e"
	I0503 15:23:24.942851    9665 logs.go:123] Gathering logs for kube-proxy [bbac10efff1c] ...
	I0503 15:23:24.942861    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bbac10efff1c"
	I0503 15:23:24.958889    9665 logs.go:123] Gathering logs for describe nodes ...
	I0503 15:23:24.958900    9665 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0503 15:23:24.995716    9665 logs.go:123] Gathering logs for etcd [12854b004aa2] ...
	I0503 15:23:24.995727    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 12854b004aa2"
	I0503 15:23:25.013394    9665 logs.go:123] Gathering logs for kubelet ...
	I0503 15:23:25.013405    9665 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0503 15:23:25.049049    9665 logs.go:123] Gathering logs for storage-provisioner [45a535211359] ...
	I0503 15:23:25.049070    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45a535211359"
	I0503 15:23:25.061118    9665 logs.go:123] Gathering logs for coredns [d411f2d0da0d] ...
	I0503 15:23:25.061130    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d411f2d0da0d"
	I0503 15:23:25.079538    9665 logs.go:123] Gathering logs for kube-scheduler [ca00cca503fe] ...
	I0503 15:23:25.079551    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca00cca503fe"
	I0503 15:23:25.101899    9665 logs.go:123] Gathering logs for kube-controller-manager [6e6e24f8e828] ...
	I0503 15:23:25.101912    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e6e24f8e828"
	I0503 15:23:25.119488    9665 logs.go:123] Gathering logs for container status ...
	I0503 15:23:25.119498    9665 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0503 15:23:25.131690    9665 logs.go:123] Gathering logs for kube-apiserver [97b2e83dc539] ...
	I0503 15:23:25.131704    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97b2e83dc539"
	I0503 15:23:25.145962    9665 logs.go:123] Gathering logs for coredns [c10faa2971c4] ...
	I0503 15:23:25.145974    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c10faa2971c4"
	I0503 15:23:25.158410    9665 logs.go:123] Gathering logs for dmesg ...
	I0503 15:23:25.158423    9665 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0503 15:23:25.162909    9665 logs.go:123] Gathering logs for Docker ...
	I0503 15:23:25.162917    9665 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0503 15:23:27.687584    9665 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0503 15:23:32.690121    9665 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0503 15:23:32.690302    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0503 15:23:32.706315    9665 logs.go:276] 1 containers: [97b2e83dc539]
	I0503 15:23:32.706401    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0503 15:23:32.718991    9665 logs.go:276] 1 containers: [12854b004aa2]
	I0503 15:23:32.719061    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0503 15:23:32.729899    9665 logs.go:276] 4 containers: [6b26fe9dd44c bcfa459d9a7e c10faa2971c4 d411f2d0da0d]
	I0503 15:23:32.729982    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0503 15:23:32.740745    9665 logs.go:276] 1 containers: [ca00cca503fe]
	I0503 15:23:32.740821    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0503 15:23:32.753795    9665 logs.go:276] 1 containers: [bbac10efff1c]
	I0503 15:23:32.753874    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0503 15:23:32.764898    9665 logs.go:276] 1 containers: [6e6e24f8e828]
	I0503 15:23:32.764968    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0503 15:23:32.774946    9665 logs.go:276] 0 containers: []
	W0503 15:23:32.774960    9665 logs.go:278] No container was found matching "kindnet"
	I0503 15:23:32.775017    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0503 15:23:32.784745    9665 logs.go:276] 1 containers: [45a535211359]
	I0503 15:23:32.784763    9665 logs.go:123] Gathering logs for dmesg ...
	I0503 15:23:32.784768    9665 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0503 15:23:32.789372    9665 logs.go:123] Gathering logs for coredns [d411f2d0da0d] ...
	I0503 15:23:32.789381    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d411f2d0da0d"
	I0503 15:23:32.801316    9665 logs.go:123] Gathering logs for kube-proxy [bbac10efff1c] ...
	I0503 15:23:32.801326    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bbac10efff1c"
	I0503 15:23:32.812834    9665 logs.go:123] Gathering logs for kube-controller-manager [6e6e24f8e828] ...
	I0503 15:23:32.812847    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e6e24f8e828"
	I0503 15:23:32.830272    9665 logs.go:123] Gathering logs for Docker ...
	I0503 15:23:32.830284    9665 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0503 15:23:32.854681    9665 logs.go:123] Gathering logs for coredns [6b26fe9dd44c] ...
	I0503 15:23:32.854689    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b26fe9dd44c"
	I0503 15:23:32.866046    9665 logs.go:123] Gathering logs for coredns [bcfa459d9a7e] ...
	I0503 15:23:32.866059    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bcfa459d9a7e"
	I0503 15:23:32.877242    9665 logs.go:123] Gathering logs for coredns [c10faa2971c4] ...
	I0503 15:23:32.877255    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c10faa2971c4"
	I0503 15:23:32.891262    9665 logs.go:123] Gathering logs for kube-scheduler [ca00cca503fe] ...
	I0503 15:23:32.891274    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca00cca503fe"
	I0503 15:23:32.906042    9665 logs.go:123] Gathering logs for storage-provisioner [45a535211359] ...
	I0503 15:23:32.906050    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45a535211359"
	I0503 15:23:32.917885    9665 logs.go:123] Gathering logs for container status ...
	I0503 15:23:32.917896    9665 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0503 15:23:32.930496    9665 logs.go:123] Gathering logs for kubelet ...
	I0503 15:23:32.930507    9665 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0503 15:23:32.966760    9665 logs.go:123] Gathering logs for describe nodes ...
	I0503 15:23:32.966768    9665 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0503 15:23:33.001825    9665 logs.go:123] Gathering logs for kube-apiserver [97b2e83dc539] ...
	I0503 15:23:33.001836    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97b2e83dc539"
	I0503 15:23:33.020466    9665 logs.go:123] Gathering logs for etcd [12854b004aa2] ...
	I0503 15:23:33.020478    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 12854b004aa2"
	I0503 15:23:35.540707    9665 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0503 15:23:40.542793    9665 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0503 15:23:40.542882    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0503 15:23:40.557745    9665 logs.go:276] 1 containers: [97b2e83dc539]
	I0503 15:23:40.557817    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0503 15:23:40.568495    9665 logs.go:276] 1 containers: [12854b004aa2]
	I0503 15:23:40.568566    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0503 15:23:40.579212    9665 logs.go:276] 4 containers: [6b26fe9dd44c bcfa459d9a7e c10faa2971c4 d411f2d0da0d]
	I0503 15:23:40.579274    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0503 15:23:40.597205    9665 logs.go:276] 1 containers: [ca00cca503fe]
	I0503 15:23:40.597275    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0503 15:23:40.608077    9665 logs.go:276] 1 containers: [bbac10efff1c]
	I0503 15:23:40.608146    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0503 15:23:40.619017    9665 logs.go:276] 1 containers: [6e6e24f8e828]
	I0503 15:23:40.619083    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0503 15:23:40.629516    9665 logs.go:276] 0 containers: []
	W0503 15:23:40.629529    9665 logs.go:278] No container was found matching "kindnet"
	I0503 15:23:40.629585    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0503 15:23:40.644929    9665 logs.go:276] 1 containers: [45a535211359]
	I0503 15:23:40.644944    9665 logs.go:123] Gathering logs for kube-apiserver [97b2e83dc539] ...
	I0503 15:23:40.644949    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97b2e83dc539"
	I0503 15:23:40.659545    9665 logs.go:123] Gathering logs for etcd [12854b004aa2] ...
	I0503 15:23:40.659559    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 12854b004aa2"
	I0503 15:23:40.673544    9665 logs.go:123] Gathering logs for storage-provisioner [45a535211359] ...
	I0503 15:23:40.673554    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45a535211359"
	I0503 15:23:40.685443    9665 logs.go:123] Gathering logs for container status ...
	I0503 15:23:40.685459    9665 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0503 15:23:40.697145    9665 logs.go:123] Gathering logs for kubelet ...
	I0503 15:23:40.697156    9665 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0503 15:23:40.733817    9665 logs.go:123] Gathering logs for dmesg ...
	I0503 15:23:40.733827    9665 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0503 15:23:40.738313    9665 logs.go:123] Gathering logs for kube-controller-manager [6e6e24f8e828] ...
	I0503 15:23:40.738321    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e6e24f8e828"
	I0503 15:23:40.759406    9665 logs.go:123] Gathering logs for Docker ...
	I0503 15:23:40.759417    9665 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0503 15:23:40.784498    9665 logs.go:123] Gathering logs for describe nodes ...
	I0503 15:23:40.784507    9665 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0503 15:23:40.820373    9665 logs.go:123] Gathering logs for kube-scheduler [ca00cca503fe] ...
	I0503 15:23:40.820385    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca00cca503fe"
	I0503 15:23:40.841253    9665 logs.go:123] Gathering logs for coredns [d411f2d0da0d] ...
	I0503 15:23:40.841264    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d411f2d0da0d"
	I0503 15:23:40.853140    9665 logs.go:123] Gathering logs for kube-proxy [bbac10efff1c] ...
	I0503 15:23:40.853152    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bbac10efff1c"
	I0503 15:23:40.871510    9665 logs.go:123] Gathering logs for coredns [bcfa459d9a7e] ...
	I0503 15:23:40.871524    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bcfa459d9a7e"
	I0503 15:23:40.885862    9665 logs.go:123] Gathering logs for coredns [c10faa2971c4] ...
	I0503 15:23:40.885873    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c10faa2971c4"
	I0503 15:23:40.897915    9665 logs.go:123] Gathering logs for coredns [6b26fe9dd44c] ...
	I0503 15:23:40.897929    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b26fe9dd44c"
	I0503 15:23:43.411462    9665 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0503 15:23:48.413636    9665 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0503 15:23:48.413771    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0503 15:23:48.426014    9665 logs.go:276] 1 containers: [97b2e83dc539]
	I0503 15:23:48.426085    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0503 15:23:48.437453    9665 logs.go:276] 1 containers: [12854b004aa2]
	I0503 15:23:48.437528    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0503 15:23:48.447700    9665 logs.go:276] 4 containers: [6b26fe9dd44c bcfa459d9a7e c10faa2971c4 d411f2d0da0d]
	I0503 15:23:48.447766    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0503 15:23:48.458673    9665 logs.go:276] 1 containers: [ca00cca503fe]
	I0503 15:23:48.458745    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0503 15:23:48.469586    9665 logs.go:276] 1 containers: [bbac10efff1c]
	I0503 15:23:48.469650    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0503 15:23:48.480643    9665 logs.go:276] 1 containers: [6e6e24f8e828]
	I0503 15:23:48.480714    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0503 15:23:48.490799    9665 logs.go:276] 0 containers: []
	W0503 15:23:48.490810    9665 logs.go:278] No container was found matching "kindnet"
	I0503 15:23:48.490862    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0503 15:23:48.500810    9665 logs.go:276] 1 containers: [45a535211359]
	I0503 15:23:48.500828    9665 logs.go:123] Gathering logs for etcd [12854b004aa2] ...
	I0503 15:23:48.500834    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 12854b004aa2"
	I0503 15:23:48.518781    9665 logs.go:123] Gathering logs for coredns [bcfa459d9a7e] ...
	I0503 15:23:48.518791    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bcfa459d9a7e"
	I0503 15:23:48.530374    9665 logs.go:123] Gathering logs for kube-proxy [bbac10efff1c] ...
	I0503 15:23:48.530385    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bbac10efff1c"
	I0503 15:23:48.541564    9665 logs.go:123] Gathering logs for Docker ...
	I0503 15:23:48.541575    9665 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0503 15:23:48.565287    9665 logs.go:123] Gathering logs for kubelet ...
	I0503 15:23:48.565295    9665 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0503 15:23:48.603152    9665 logs.go:123] Gathering logs for describe nodes ...
	I0503 15:23:48.603170    9665 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0503 15:23:48.638453    9665 logs.go:123] Gathering logs for coredns [6b26fe9dd44c] ...
	I0503 15:23:48.638468    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b26fe9dd44c"
	I0503 15:23:48.650823    9665 logs.go:123] Gathering logs for kube-scheduler [ca00cca503fe] ...
	I0503 15:23:48.650834    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca00cca503fe"
	I0503 15:23:48.664890    9665 logs.go:123] Gathering logs for kube-controller-manager [6e6e24f8e828] ...
	I0503 15:23:48.664901    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e6e24f8e828"
	I0503 15:23:48.683054    9665 logs.go:123] Gathering logs for coredns [c10faa2971c4] ...
	I0503 15:23:48.683068    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c10faa2971c4"
	I0503 15:23:48.696497    9665 logs.go:123] Gathering logs for coredns [d411f2d0da0d] ...
	I0503 15:23:48.696508    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d411f2d0da0d"
	I0503 15:23:48.708318    9665 logs.go:123] Gathering logs for storage-provisioner [45a535211359] ...
	I0503 15:23:48.708330    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45a535211359"
	I0503 15:23:48.719772    9665 logs.go:123] Gathering logs for container status ...
	I0503 15:23:48.719784    9665 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0503 15:23:48.732773    9665 logs.go:123] Gathering logs for dmesg ...
	I0503 15:23:48.732785    9665 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0503 15:23:48.737604    9665 logs.go:123] Gathering logs for kube-apiserver [97b2e83dc539] ...
	I0503 15:23:48.737613    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97b2e83dc539"
	I0503 15:23:51.255858    9665 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0503 15:23:56.258065    9665 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0503 15:23:56.258240    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0503 15:23:56.294224    9665 logs.go:276] 1 containers: [97b2e83dc539]
	I0503 15:23:56.294310    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0503 15:23:56.306545    9665 logs.go:276] 1 containers: [12854b004aa2]
	I0503 15:23:56.306614    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0503 15:23:56.316830    9665 logs.go:276] 4 containers: [6b26fe9dd44c bcfa459d9a7e c10faa2971c4 d411f2d0da0d]
	I0503 15:23:56.316895    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0503 15:23:56.327344    9665 logs.go:276] 1 containers: [ca00cca503fe]
	I0503 15:23:56.327409    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0503 15:23:56.337892    9665 logs.go:276] 1 containers: [bbac10efff1c]
	I0503 15:23:56.337959    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0503 15:23:56.348021    9665 logs.go:276] 1 containers: [6e6e24f8e828]
	I0503 15:23:56.348087    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0503 15:23:56.358082    9665 logs.go:276] 0 containers: []
	W0503 15:23:56.358092    9665 logs.go:278] No container was found matching "kindnet"
	I0503 15:23:56.358146    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0503 15:23:56.368425    9665 logs.go:276] 1 containers: [45a535211359]
	I0503 15:23:56.368442    9665 logs.go:123] Gathering logs for describe nodes ...
	I0503 15:23:56.368447    9665 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0503 15:23:56.404003    9665 logs.go:123] Gathering logs for storage-provisioner [45a535211359] ...
	I0503 15:23:56.404013    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45a535211359"
	I0503 15:23:56.415838    9665 logs.go:123] Gathering logs for kubelet ...
	I0503 15:23:56.415850    9665 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0503 15:23:56.450915    9665 logs.go:123] Gathering logs for coredns [bcfa459d9a7e] ...
	I0503 15:23:56.450922    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bcfa459d9a7e"
	I0503 15:23:56.461765    9665 logs.go:123] Gathering logs for kube-scheduler [ca00cca503fe] ...
	I0503 15:23:56.461777    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca00cca503fe"
	I0503 15:23:56.476368    9665 logs.go:123] Gathering logs for etcd [12854b004aa2] ...
	I0503 15:23:56.476377    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 12854b004aa2"
	I0503 15:23:56.490633    9665 logs.go:123] Gathering logs for kube-proxy [bbac10efff1c] ...
	I0503 15:23:56.490643    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bbac10efff1c"
	I0503 15:23:56.502272    9665 logs.go:123] Gathering logs for kube-controller-manager [6e6e24f8e828] ...
	I0503 15:23:56.502280    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e6e24f8e828"
	I0503 15:23:56.528204    9665 logs.go:123] Gathering logs for dmesg ...
	I0503 15:23:56.528214    9665 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0503 15:23:56.532796    9665 logs.go:123] Gathering logs for kube-apiserver [97b2e83dc539] ...
	I0503 15:23:56.532805    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97b2e83dc539"
	I0503 15:23:56.547040    9665 logs.go:123] Gathering logs for coredns [6b26fe9dd44c] ...
	I0503 15:23:56.547049    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b26fe9dd44c"
	I0503 15:23:56.558498    9665 logs.go:123] Gathering logs for coredns [c10faa2971c4] ...
	I0503 15:23:56.558508    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c10faa2971c4"
	I0503 15:23:56.570056    9665 logs.go:123] Gathering logs for coredns [d411f2d0da0d] ...
	I0503 15:23:56.570065    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d411f2d0da0d"
	I0503 15:23:56.582059    9665 logs.go:123] Gathering logs for Docker ...
	I0503 15:23:56.582068    9665 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0503 15:23:56.606021    9665 logs.go:123] Gathering logs for container status ...
	I0503 15:23:56.606027    9665 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0503 15:23:59.119649    9665 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0503 15:24:04.121755    9665 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0503 15:24:04.121959    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0503 15:24:04.140375    9665 logs.go:276] 1 containers: [97b2e83dc539]
	I0503 15:24:04.140473    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0503 15:24:04.154368    9665 logs.go:276] 1 containers: [12854b004aa2]
	I0503 15:24:04.154445    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0503 15:24:04.166785    9665 logs.go:276] 4 containers: [6b26fe9dd44c bcfa459d9a7e c10faa2971c4 d411f2d0da0d]
	I0503 15:24:04.166865    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0503 15:24:04.177317    9665 logs.go:276] 1 containers: [ca00cca503fe]
	I0503 15:24:04.177385    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0503 15:24:04.188179    9665 logs.go:276] 1 containers: [bbac10efff1c]
	I0503 15:24:04.188241    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0503 15:24:04.199117    9665 logs.go:276] 1 containers: [6e6e24f8e828]
	I0503 15:24:04.199184    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0503 15:24:04.213431    9665 logs.go:276] 0 containers: []
	W0503 15:24:04.213441    9665 logs.go:278] No container was found matching "kindnet"
	I0503 15:24:04.213497    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0503 15:24:04.223916    9665 logs.go:276] 1 containers: [45a535211359]
	I0503 15:24:04.223932    9665 logs.go:123] Gathering logs for kube-proxy [bbac10efff1c] ...
	I0503 15:24:04.223938    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bbac10efff1c"
	I0503 15:24:04.235737    9665 logs.go:123] Gathering logs for kube-controller-manager [6e6e24f8e828] ...
	I0503 15:24:04.235748    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e6e24f8e828"
	I0503 15:24:04.253470    9665 logs.go:123] Gathering logs for Docker ...
	I0503 15:24:04.253481    9665 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0503 15:24:04.277647    9665 logs.go:123] Gathering logs for storage-provisioner [45a535211359] ...
	I0503 15:24:04.277660    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45a535211359"
	I0503 15:24:04.289847    9665 logs.go:123] Gathering logs for dmesg ...
	I0503 15:24:04.289859    9665 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0503 15:24:04.294181    9665 logs.go:123] Gathering logs for describe nodes ...
	I0503 15:24:04.294189    9665 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0503 15:24:04.328501    9665 logs.go:123] Gathering logs for kube-apiserver [97b2e83dc539] ...
	I0503 15:24:04.328515    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97b2e83dc539"
	I0503 15:24:04.342816    9665 logs.go:123] Gathering logs for etcd [12854b004aa2] ...
	I0503 15:24:04.342829    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 12854b004aa2"
	I0503 15:24:04.357025    9665 logs.go:123] Gathering logs for coredns [6b26fe9dd44c] ...
	I0503 15:24:04.357038    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b26fe9dd44c"
	I0503 15:24:04.369013    9665 logs.go:123] Gathering logs for coredns [bcfa459d9a7e] ...
	I0503 15:24:04.369024    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bcfa459d9a7e"
	I0503 15:24:04.380576    9665 logs.go:123] Gathering logs for kube-scheduler [ca00cca503fe] ...
	I0503 15:24:04.380587    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca00cca503fe"
	I0503 15:24:04.395675    9665 logs.go:123] Gathering logs for container status ...
	I0503 15:24:04.395686    9665 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0503 15:24:04.411854    9665 logs.go:123] Gathering logs for coredns [d411f2d0da0d] ...
	I0503 15:24:04.411866    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d411f2d0da0d"
	I0503 15:24:04.424171    9665 logs.go:123] Gathering logs for kubelet ...
	I0503 15:24:04.424182    9665 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0503 15:24:04.461682    9665 logs.go:123] Gathering logs for coredns [c10faa2971c4] ...
	I0503 15:24:04.461693    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c10faa2971c4"
	I0503 15:24:06.975735    9665 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0503 15:24:11.977822    9665 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0503 15:24:11.982156    9665 out.go:177] 
	W0503 15:24:11.986102    9665 out.go:239] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	W0503 15:24:11.986111    9665 out.go:239] * 
	W0503 15:24:11.986655    9665 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0503 15:24:11.998118    9665 out.go:177] 

** /stderr **
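The stderr trace above is one loop repeated until the 6m0s node-start deadline: probe the apiserver healthz endpoint with a short client timeout, and after each failed probe collect logs from every control-plane container. Below is a minimal Go sketch of that wait loop, assuming the 10.0.2.15:8443 endpoint shown in the trace and a self-signed apiserver certificate (hence the InsecureSkipVerify shortcut); it illustrates the pattern, and is not minikube's actual api_server.go code.

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitForHealthz polls the apiserver /healthz endpoint until it answers 200 OK
// or the overall deadline passes, mirroring the api_server.go:253/269 pairs above.
func waitForHealthz(url string, deadline time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second, // per-probe timeout; matches the ~5s gaps in the trace
		Transport: &http.Transport{
			// Sketch-only shortcut: the guest apiserver's cert is not in the host trust store.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	stop := time.Now().Add(deadline)
	for time.Now().Before(stop) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // healthy
			}
		}
		time.Sleep(2 * time.Second) // back off before the next probe
	}
	return fmt.Errorf("apiserver healthz never reported healthy: context deadline exceeded")
}

func main() {
	// 6m0s is the node-start wait that the GUEST_START error above reports exceeding.
	if err := waitForHealthz("https://10.0.2.15:8443/healthz", 6*time.Minute); err != nil {
		fmt.Println(err)
	}
}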
version_upgrade_test.go:132: upgrade from v1.26.0 to HEAD failed: out/minikube-darwin-arm64 start -p running-upgrade-916000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80
panic.go:626: *** TestRunningBinaryUpgrade FAILED at 2024-05-03 15:24:12.077554 -0700 PDT m=+1286.332820709
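Between probes, the harness enumerates containers per component with `docker ps -a --filter=name=k8s_<component> --format={{.ID}}` and then tails each hit with `docker logs --tail 400 <id>`, as the logs.go:276/123 lines show. A sketch of that collection pass using os/exec, assuming a local docker CLI (the real runs go through ssh_runner over SSH into the guest VM):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerIDs mirrors `docker ps -a --filter=name=k8s_<component> --format={{.ID}}`.
func containerIDs(component string) ([]string, error) {
	out, err := exec.Command("docker", "ps", "-a",
		"--filter", "name=k8s_"+component, "--format", "{{.ID}}").Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	components := []string{"kube-apiserver", "etcd", "coredns",
		"kube-scheduler", "kube-proxy", "kube-controller-manager", "storage-provisioner"}
	for _, comp := range components {
		ids, err := containerIDs(comp)
		if err != nil || len(ids) == 0 {
			// Matches the logs.go:278 warning for components like "kindnet" above.
			fmt.Printf("No container was found matching %q\n", comp)
			continue
		}
		for _, id := range ids {
			// Mirrors `docker logs --tail 400 <id>` from the trace.
			logs, _ := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
			fmt.Printf("==> %s [%s] <==\n%s\n", comp, id, logs)
		}
	}
}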
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p running-upgrade-916000 -n running-upgrade-916000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p running-upgrade-916000 -n running-upgrade-916000: exit status 2 (15.675494s)

-- stdout --
	Running

-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
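Note that the status command printed "Running" on stdout while still exiting with code 2; the exit status encodes cluster state rather than a hard failure, which is why the harness records it as "(may be ok)". A small sketch of reading that exit code in Go, hypothetically re-running the command from the line above:

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	// Hypothetical local re-run of the status probe from the post-mortem above.
	cmd := exec.Command("out/minikube-darwin-arm64", "status",
		"--format", "{{.Host}}", "-p", "running-upgrade-916000")
	out, err := cmd.Output()
	fmt.Printf("host state: %s\n", out)
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		// A nonzero code (2 in the trace) can coexist with "Running" on stdout.
		fmt.Println("exit code:", exitErr.ExitCode())
	}
}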
helpers_test.go:244: <<< TestRunningBinaryUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestRunningBinaryUpgrade]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-arm64 -p running-upgrade-916000 logs -n 25
helpers_test.go:252: TestRunningBinaryUpgrade logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                 Args                  |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| start   | -p force-systemd-flag-743000          | force-systemd-flag-743000 | jenkins | v1.33.0 | 03 May 24 15:14 PDT |                     |
	|         | --memory=2048 --force-systemd         |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=5                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| ssh     | force-systemd-env-955000              | force-systemd-env-955000  | jenkins | v1.33.0 | 03 May 24 15:14 PDT |                     |
	|         | ssh docker info --format              |                           |         |         |                     |                     |
	|         | {{.CgroupDriver}}                     |                           |         |         |                     |                     |
	| delete  | -p force-systemd-env-955000           | force-systemd-env-955000  | jenkins | v1.33.0 | 03 May 24 15:14 PDT | 03 May 24 15:14 PDT |
	| start   | -p docker-flags-965000                | docker-flags-965000       | jenkins | v1.33.0 | 03 May 24 15:14 PDT |                     |
	|         | --cache-images=false                  |                           |         |         |                     |                     |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --install-addons=false                |                           |         |         |                     |                     |
	|         | --wait=false                          |                           |         |         |                     |                     |
	|         | --docker-env=FOO=BAR                  |                           |         |         |                     |                     |
	|         | --docker-env=BAZ=BAT                  |                           |         |         |                     |                     |
	|         | --docker-opt=debug                    |                           |         |         |                     |                     |
	|         | --docker-opt=icc=true                 |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=5                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| ssh     | force-systemd-flag-743000             | force-systemd-flag-743000 | jenkins | v1.33.0 | 03 May 24 15:14 PDT |                     |
	|         | ssh docker info --format              |                           |         |         |                     |                     |
	|         | {{.CgroupDriver}}                     |                           |         |         |                     |                     |
	| delete  | -p force-systemd-flag-743000          | force-systemd-flag-743000 | jenkins | v1.33.0 | 03 May 24 15:14 PDT | 03 May 24 15:14 PDT |
	| start   | -p cert-expiration-807000             | cert-expiration-807000    | jenkins | v1.33.0 | 03 May 24 15:14 PDT |                     |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --cert-expiration=3m                  |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| ssh     | docker-flags-965000 ssh               | docker-flags-965000       | jenkins | v1.33.0 | 03 May 24 15:14 PDT |                     |
	|         | sudo systemctl show docker            |                           |         |         |                     |                     |
	|         | --property=Environment                |                           |         |         |                     |                     |
	|         | --no-pager                            |                           |         |         |                     |                     |
	| ssh     | docker-flags-965000 ssh               | docker-flags-965000       | jenkins | v1.33.0 | 03 May 24 15:14 PDT |                     |
	|         | sudo systemctl show docker            |                           |         |         |                     |                     |
	|         | --property=ExecStart                  |                           |         |         |                     |                     |
	|         | --no-pager                            |                           |         |         |                     |                     |
	| delete  | -p docker-flags-965000                | docker-flags-965000       | jenkins | v1.33.0 | 03 May 24 15:14 PDT | 03 May 24 15:14 PDT |
	| start   | -p cert-options-277000                | cert-options-277000       | jenkins | v1.33.0 | 03 May 24 15:14 PDT |                     |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --apiserver-ips=127.0.0.1             |                           |         |         |                     |                     |
	|         | --apiserver-ips=192.168.15.15         |                           |         |         |                     |                     |
	|         | --apiserver-names=localhost           |                           |         |         |                     |                     |
	|         | --apiserver-names=www.google.com      |                           |         |         |                     |                     |
	|         | --apiserver-port=8555                 |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| ssh     | cert-options-277000 ssh               | cert-options-277000       | jenkins | v1.33.0 | 03 May 24 15:14 PDT |                     |
	|         | openssl x509 -text -noout -in         |                           |         |         |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt |                           |         |         |                     |                     |
	| ssh     | -p cert-options-277000 -- sudo        | cert-options-277000       | jenkins | v1.33.0 | 03 May 24 15:14 PDT |                     |
	|         | cat /etc/kubernetes/admin.conf        |                           |         |         |                     |                     |
	| delete  | -p cert-options-277000                | cert-options-277000       | jenkins | v1.33.0 | 03 May 24 15:14 PDT | 03 May 24 15:14 PDT |
	| start   | -p running-upgrade-916000             | minikube                  | jenkins | v1.26.0 | 03 May 24 15:14 PDT | 03 May 24 15:15 PDT |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --vm-driver=qemu2                     |                           |         |         |                     |                     |
	| start   | -p running-upgrade-916000             | running-upgrade-916000    | jenkins | v1.33.0 | 03 May 24 15:15 PDT |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| start   | -p cert-expiration-807000             | cert-expiration-807000    | jenkins | v1.33.0 | 03 May 24 15:17 PDT |                     |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --cert-expiration=8760h               |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| delete  | -p cert-expiration-807000             | cert-expiration-807000    | jenkins | v1.33.0 | 03 May 24 15:17 PDT | 03 May 24 15:17 PDT |
	| start   | -p kubernetes-upgrade-999000          | kubernetes-upgrade-999000 | jenkins | v1.33.0 | 03 May 24 15:17 PDT |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0          |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| stop    | -p kubernetes-upgrade-999000          | kubernetes-upgrade-999000 | jenkins | v1.33.0 | 03 May 24 15:18 PDT | 03 May 24 15:18 PDT |
	| start   | -p kubernetes-upgrade-999000          | kubernetes-upgrade-999000 | jenkins | v1.33.0 | 03 May 24 15:18 PDT |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.0          |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-999000          | kubernetes-upgrade-999000 | jenkins | v1.33.0 | 03 May 24 15:18 PDT | 03 May 24 15:18 PDT |
	| start   | -p stopped-upgrade-139000             | minikube                  | jenkins | v1.26.0 | 03 May 24 15:18 PDT | 03 May 24 15:18 PDT |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --vm-driver=qemu2                     |                           |         |         |                     |                     |
	| stop    | stopped-upgrade-139000 stop           | minikube                  | jenkins | v1.26.0 | 03 May 24 15:18 PDT | 03 May 24 15:19 PDT |
	| start   | -p stopped-upgrade-139000             | stopped-upgrade-139000    | jenkins | v1.33.0 | 03 May 24 15:19 PDT |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/05/03 15:19:06
	Running on machine: MacOS-M1-Agent-2
	Binary: Built with gc go1.22.1 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0503 15:19:06.195579    9866 out.go:291] Setting OutFile to fd 1 ...
	I0503 15:19:06.195708    9866 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0503 15:19:06.195712    9866 out.go:304] Setting ErrFile to fd 2...
	I0503 15:19:06.195714    9866 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0503 15:19:06.195857    9866 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18793-7269/.minikube/bin
	I0503 15:19:06.197091    9866 out.go:298] Setting JSON to false
	I0503 15:19:06.215587    9866 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4717,"bootTime":1714770029,"procs":509,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0503 15:19:06.215656    9866 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0503 15:19:06.228425    9866 out.go:177] * [stopped-upgrade-139000] minikube v1.33.0 on Darwin 14.4.1 (arm64)
	I0503 15:19:06.236859    9866 out.go:177]   - MINIKUBE_LOCATION=18793
	I0503 15:19:06.241863    9866 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18793-7269/kubeconfig
	I0503 15:19:06.236900    9866 notify.go:220] Checking for updates...
	I0503 15:19:06.247719    9866 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0503 15:19:06.250822    9866 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0503 15:19:06.253845    9866 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18793-7269/.minikube
	I0503 15:19:06.256843    9866 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0503 15:19:06.260193    9866 config.go:182] Loaded profile config "stopped-upgrade-139000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0503 15:19:06.263854    9866 out.go:177] * Kubernetes 1.30.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.0
	I0503 15:19:06.266821    9866 driver.go:392] Setting default libvirt URI to qemu:///system
	I0503 15:19:06.270805    9866 out.go:177] * Using the qemu2 driver based on existing profile
	I0503 15:19:06.276871    9866 start.go:297] selected driver: qemu2
	I0503 15:19:06.276880    9866 start.go:901] validating driver "qemu2" against &{Name:stopped-upgrade-139000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:51403 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-139000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0503 15:19:06.276967    9866 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0503 15:19:06.279673    9866 cni.go:84] Creating CNI manager for ""
	I0503 15:19:06.279693    9866 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0503 15:19:06.279724    9866 start.go:340] cluster config:
	{Name:stopped-upgrade-139000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:51403 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-139000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0503 15:19:06.279782    9866 iso.go:125] acquiring lock: {Name:mk0ad6b95f1bb51c6112b561369aaae11c537b54 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0503 15:19:06.285847    9866 out.go:177] * Starting "stopped-upgrade-139000" primary control-plane node in "stopped-upgrade-139000" cluster
	I0503 15:19:06.289760    9866 preload.go:132] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0503 15:19:06.289778    9866 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18793-7269/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4
	I0503 15:19:06.289790    9866 cache.go:56] Caching tarball of preloaded images
	I0503 15:19:06.289849    9866 preload.go:173] Found /Users/jenkins/minikube-integration/18793-7269/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0503 15:19:06.289854    9866 cache.go:59] Finished verifying existence of preloaded tar for v1.24.1 on docker
	I0503 15:19:06.289915    9866 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18793-7269/.minikube/profiles/stopped-upgrade-139000/config.json ...
	I0503 15:19:06.290296    9866 start.go:360] acquireMachinesLock for stopped-upgrade-139000: {Name:mk75a6a65f97ac1ce21c567594b284ec6b0a9ff6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0503 15:19:06.290344    9866 start.go:364] duration metric: took 42.667µs to acquireMachinesLock for "stopped-upgrade-139000"
	I0503 15:19:06.290355    9866 start.go:96] Skipping create...Using existing machine configuration
	I0503 15:19:06.290360    9866 fix.go:54] fixHost starting: 
	I0503 15:19:06.290484    9866 fix.go:112] recreateIfNeeded on stopped-upgrade-139000: state=Stopped err=<nil>
	W0503 15:19:06.290495    9866 fix.go:138] unexpected machine state, will restart: <nil>
	I0503 15:19:06.299231    9866 out.go:177] * Restarting existing qemu2 VM for "stopped-upgrade-139000" ...
	I0503 15:19:08.356284    9665 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0503 15:19:08.356772    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0503 15:19:08.396751    9665 logs.go:276] 2 containers: [4630927f679e c58ec9465be1]
	I0503 15:19:08.396892    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0503 15:19:08.419282    9665 logs.go:276] 2 containers: [b1ec22c1bc96 f90094320501]
	I0503 15:19:08.419393    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0503 15:19:08.434744    9665 logs.go:276] 1 containers: [a802e211e244]
	I0503 15:19:08.434816    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0503 15:19:08.447991    9665 logs.go:276] 2 containers: [58d277afe448 c33b5f027877]
	I0503 15:19:08.448061    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0503 15:19:08.458631    9665 logs.go:276] 1 containers: [d906735280f1]
	I0503 15:19:08.458691    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0503 15:19:08.469026    9665 logs.go:276] 2 containers: [794f5cf7e82f 52384ec84857]
	I0503 15:19:08.469097    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0503 15:19:08.479431    9665 logs.go:276] 0 containers: []
	W0503 15:19:08.479441    9665 logs.go:278] No container was found matching "kindnet"
	I0503 15:19:08.479498    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0503 15:19:08.489524    9665 logs.go:276] 0 containers: []
	W0503 15:19:08.489535    9665 logs.go:278] No container was found matching "storage-provisioner"
	I0503 15:19:08.489543    9665 logs.go:123] Gathering logs for etcd [b1ec22c1bc96] ...
	I0503 15:19:08.489548    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1ec22c1bc96"
	I0503 15:19:08.503269    9665 logs.go:123] Gathering logs for kubelet ...
	I0503 15:19:08.503279    9665 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0503 15:19:08.542319    9665 logs.go:123] Gathering logs for dmesg ...
	I0503 15:19:08.542326    9665 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0503 15:19:08.546440    9665 logs.go:123] Gathering logs for kube-controller-manager [52384ec84857] ...
	I0503 15:19:08.546448    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 52384ec84857"
	I0503 15:19:08.561055    9665 logs.go:123] Gathering logs for coredns [a802e211e244] ...
	I0503 15:19:08.561067    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a802e211e244"
	I0503 15:19:08.572857    9665 logs.go:123] Gathering logs for kube-scheduler [c33b5f027877] ...
	I0503 15:19:08.572869    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c33b5f027877"
	I0503 15:19:08.586977    9665 logs.go:123] Gathering logs for Docker ...
	I0503 15:19:08.586987    9665 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0503 15:19:08.610725    9665 logs.go:123] Gathering logs for container status ...
	I0503 15:19:08.610731    9665 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0503 15:19:08.622453    9665 logs.go:123] Gathering logs for describe nodes ...
	I0503 15:19:08.622464    9665 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0503 15:19:08.658106    9665 logs.go:123] Gathering logs for kube-apiserver [c58ec9465be1] ...
	I0503 15:19:08.658115    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c58ec9465be1"
	I0503 15:19:08.683271    9665 logs.go:123] Gathering logs for kube-scheduler [58d277afe448] ...
	I0503 15:19:08.683283    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 58d277afe448"
	I0503 15:19:08.695229    9665 logs.go:123] Gathering logs for kube-proxy [d906735280f1] ...
	I0503 15:19:08.695244    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d906735280f1"
	I0503 15:19:08.706446    9665 logs.go:123] Gathering logs for kube-controller-manager [794f5cf7e82f] ...
	I0503 15:19:08.706460    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 794f5cf7e82f"
	I0503 15:19:08.723461    9665 logs.go:123] Gathering logs for kube-apiserver [4630927f679e] ...
	I0503 15:19:08.723474    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4630927f679e"
	I0503 15:19:08.737157    9665 logs.go:123] Gathering logs for etcd [f90094320501] ...
	I0503 15:19:08.737183    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f90094320501"
	I0503 15:19:06.303958    9866 main.go:141] libmachine: executing: qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/8.2.1/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/stopped-upgrade-139000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18793-7269/.minikube/machines/stopped-upgrade-139000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/stopped-upgrade-139000/qemu.pid -nic user,model=virtio,hostfwd=tcp::51368-:22,hostfwd=tcp::51369-:2376,hostname=stopped-upgrade-139000 -daemonize /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/stopped-upgrade-139000/disk.qcow2
	I0503 15:19:06.350258    9866 main.go:141] libmachine: STDOUT: 
	I0503 15:19:06.350298    9866 main.go:141] libmachine: STDERR: 
	I0503 15:19:06.350303    9866 main.go:141] libmachine: Waiting for VM to start (ssh -p 51368 docker@127.0.0.1)...
	I0503 15:19:11.256734    9665 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0503 15:19:16.259010    9665 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": dial tcp 10.0.2.15:8443: i/o timeout (Client.Timeout exceeded while awaiting headers)
	I0503 15:19:16.259117    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0503 15:19:16.270412    9665 logs.go:276] 2 containers: [4630927f679e c58ec9465be1]
	I0503 15:19:16.270484    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0503 15:19:16.281265    9665 logs.go:276] 2 containers: [b1ec22c1bc96 f90094320501]
	I0503 15:19:16.281348    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0503 15:19:16.292256    9665 logs.go:276] 1 containers: [a802e211e244]
	I0503 15:19:16.292322    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0503 15:19:16.303226    9665 logs.go:276] 2 containers: [58d277afe448 c33b5f027877]
	I0503 15:19:16.303287    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0503 15:19:16.313715    9665 logs.go:276] 1 containers: [d906735280f1]
	I0503 15:19:16.313800    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0503 15:19:16.324228    9665 logs.go:276] 2 containers: [794f5cf7e82f 52384ec84857]
	I0503 15:19:16.324293    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0503 15:19:16.334525    9665 logs.go:276] 0 containers: []
	W0503 15:19:16.334536    9665 logs.go:278] No container was found matching "kindnet"
	I0503 15:19:16.334584    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0503 15:19:16.345488    9665 logs.go:276] 0 containers: []
	W0503 15:19:16.345497    9665 logs.go:278] No container was found matching "storage-provisioner"
	I0503 15:19:16.345504    9665 logs.go:123] Gathering logs for etcd [f90094320501] ...
	I0503 15:19:16.345512    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f90094320501"
	I0503 15:19:16.363074    9665 logs.go:123] Gathering logs for kube-scheduler [58d277afe448] ...
	I0503 15:19:16.363085    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 58d277afe448"
	I0503 15:19:16.374732    9665 logs.go:123] Gathering logs for kube-proxy [d906735280f1] ...
	I0503 15:19:16.374742    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d906735280f1"
	I0503 15:19:16.386969    9665 logs.go:123] Gathering logs for kube-controller-manager [794f5cf7e82f] ...
	I0503 15:19:16.386979    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 794f5cf7e82f"
	I0503 15:19:16.404472    9665 logs.go:123] Gathering logs for kube-apiserver [4630927f679e] ...
	I0503 15:19:16.404482    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4630927f679e"
	I0503 15:19:16.419342    9665 logs.go:123] Gathering logs for kube-controller-manager [52384ec84857] ...
	I0503 15:19:16.419353    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 52384ec84857"
	I0503 15:19:16.432530    9665 logs.go:123] Gathering logs for kubelet ...
	I0503 15:19:16.432540    9665 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0503 15:19:16.471986    9665 logs.go:123] Gathering logs for kube-apiserver [c58ec9465be1] ...
	I0503 15:19:16.471994    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c58ec9465be1"
	I0503 15:19:16.498809    9665 logs.go:123] Gathering logs for etcd [b1ec22c1bc96] ...
	I0503 15:19:16.498819    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1ec22c1bc96"
	I0503 15:19:16.517804    9665 logs.go:123] Gathering logs for coredns [a802e211e244] ...
	I0503 15:19:16.517814    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a802e211e244"
	I0503 15:19:16.529760    9665 logs.go:123] Gathering logs for kube-scheduler [c33b5f027877] ...
	I0503 15:19:16.529772    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c33b5f027877"
	I0503 15:19:16.544593    9665 logs.go:123] Gathering logs for dmesg ...
	I0503 15:19:16.544603    9665 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0503 15:19:16.549033    9665 logs.go:123] Gathering logs for describe nodes ...
	I0503 15:19:16.549040    9665 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0503 15:19:16.587149    9665 logs.go:123] Gathering logs for Docker ...
	I0503 15:19:16.587161    9665 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0503 15:19:16.610545    9665 logs.go:123] Gathering logs for container status ...
	I0503 15:19:16.610553    9665 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0503 15:19:19.125908    9665 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0503 15:19:24.128191    9665 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0503 15:19:24.128558    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0503 15:19:24.161825    9665 logs.go:276] 2 containers: [4630927f679e c58ec9465be1]
	I0503 15:19:24.161950    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0503 15:19:24.180772    9665 logs.go:276] 2 containers: [b1ec22c1bc96 f90094320501]
	I0503 15:19:24.180876    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0503 15:19:24.195053    9665 logs.go:276] 1 containers: [a802e211e244]
	I0503 15:19:24.195124    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0503 15:19:24.207132    9665 logs.go:276] 2 containers: [58d277afe448 c33b5f027877]
	I0503 15:19:24.207194    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0503 15:19:24.219358    9665 logs.go:276] 1 containers: [d906735280f1]
	I0503 15:19:24.219422    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0503 15:19:24.234130    9665 logs.go:276] 2 containers: [794f5cf7e82f 52384ec84857]
	I0503 15:19:24.234194    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0503 15:19:24.244244    9665 logs.go:276] 0 containers: []
	W0503 15:19:24.244256    9665 logs.go:278] No container was found matching "kindnet"
	I0503 15:19:24.244303    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0503 15:19:24.254496    9665 logs.go:276] 0 containers: []
	W0503 15:19:24.254508    9665 logs.go:278] No container was found matching "storage-provisioner"
	I0503 15:19:24.254516    9665 logs.go:123] Gathering logs for etcd [f90094320501] ...
	I0503 15:19:24.254521    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f90094320501"
	I0503 15:19:24.274500    9665 logs.go:123] Gathering logs for kube-controller-manager [794f5cf7e82f] ...
	I0503 15:19:24.274512    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 794f5cf7e82f"
	I0503 15:19:24.291606    9665 logs.go:123] Gathering logs for describe nodes ...
	I0503 15:19:24.291618    9665 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0503 15:19:24.326773    9665 logs.go:123] Gathering logs for kube-scheduler [58d277afe448] ...
	I0503 15:19:24.326788    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 58d277afe448"
	I0503 15:19:24.338845    9665 logs.go:123] Gathering logs for Docker ...
	I0503 15:19:24.338857    9665 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0503 15:19:24.362320    9665 logs.go:123] Gathering logs for container status ...
	I0503 15:19:24.362329    9665 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0503 15:19:24.373572    9665 logs.go:123] Gathering logs for kubelet ...
	I0503 15:19:24.373583    9665 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0503 15:19:24.410671    9665 logs.go:123] Gathering logs for kube-proxy [d906735280f1] ...
	I0503 15:19:24.410678    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d906735280f1"
	I0503 15:19:24.422309    9665 logs.go:123] Gathering logs for kube-controller-manager [52384ec84857] ...
	I0503 15:19:24.422319    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 52384ec84857"
	I0503 15:19:24.435814    9665 logs.go:123] Gathering logs for dmesg ...
	I0503 15:19:24.435829    9665 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0503 15:19:24.440721    9665 logs.go:123] Gathering logs for kube-apiserver [4630927f679e] ...
	I0503 15:19:24.440729    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4630927f679e"
	I0503 15:19:24.454655    9665 logs.go:123] Gathering logs for kube-apiserver [c58ec9465be1] ...
	I0503 15:19:24.454667    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c58ec9465be1"
	I0503 15:19:24.473249    9665 logs.go:123] Gathering logs for etcd [b1ec22c1bc96] ...
	I0503 15:19:24.473259    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1ec22c1bc96"
	I0503 15:19:24.486586    9665 logs.go:123] Gathering logs for coredns [a802e211e244] ...
	I0503 15:19:24.486596    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a802e211e244"
	I0503 15:19:24.501596    9665 logs.go:123] Gathering logs for kube-scheduler [c33b5f027877] ...
	I0503 15:19:24.501606    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c33b5f027877"
	I0503 15:19:27.016698    9665 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0503 15:19:26.494467    9866 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18793-7269/.minikube/profiles/stopped-upgrade-139000/config.json ...
	I0503 15:19:26.495136    9866 machine.go:94] provisionDockerMachine start ...
	I0503 15:19:26.495341    9866 main.go:141] libmachine: Using SSH client type: native
	I0503 15:19:26.495788    9866 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100bfdc80] 0x100c004e0 <nil>  [] 0s} localhost 51368 <nil> <nil>}
	I0503 15:19:26.495802    9866 main.go:141] libmachine: About to run SSH command:
	hostname
	I0503 15:19:26.573495    9866 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0503 15:19:26.573533    9866 buildroot.go:166] provisioning hostname "stopped-upgrade-139000"
	I0503 15:19:26.573647    9866 main.go:141] libmachine: Using SSH client type: native
	I0503 15:19:26.573883    9866 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100bfdc80] 0x100c004e0 <nil>  [] 0s} localhost 51368 <nil> <nil>}
	I0503 15:19:26.573897    9866 main.go:141] libmachine: About to run SSH command:
	sudo hostname stopped-upgrade-139000 && echo "stopped-upgrade-139000" | sudo tee /etc/hostname
	I0503 15:19:26.642200    9866 main.go:141] libmachine: SSH cmd err, output: <nil>: stopped-upgrade-139000
	
	I0503 15:19:26.642265    9866 main.go:141] libmachine: Using SSH client type: native
	I0503 15:19:26.642421    9866 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100bfdc80] 0x100c004e0 <nil>  [] 0s} localhost 51368 <nil> <nil>}
	I0503 15:19:26.642435    9866 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sstopped-upgrade-139000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 stopped-upgrade-139000/g' /etc/hosts;
				else 
					echo '127.0.1.1 stopped-upgrade-139000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0503 15:19:26.698439    9866 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0503 15:19:26.698452    9866 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/18793-7269/.minikube CaCertPath:/Users/jenkins/minikube-integration/18793-7269/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/18793-7269/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/18793-7269/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/18793-7269/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/18793-7269/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/18793-7269/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/18793-7269/.minikube}
	I0503 15:19:26.698461    9866 buildroot.go:174] setting up certificates
	I0503 15:19:26.698479    9866 provision.go:84] configureAuth start
	I0503 15:19:26.698484    9866 provision.go:143] copyHostCerts
	I0503 15:19:26.698555    9866 exec_runner.go:144] found /Users/jenkins/minikube-integration/18793-7269/.minikube/key.pem, removing ...
	I0503 15:19:26.698561    9866 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18793-7269/.minikube/key.pem
	I0503 15:19:26.698654    9866 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18793-7269/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/18793-7269/.minikube/key.pem (1675 bytes)
	I0503 15:19:26.698816    9866 exec_runner.go:144] found /Users/jenkins/minikube-integration/18793-7269/.minikube/ca.pem, removing ...
	I0503 15:19:26.698819    9866 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18793-7269/.minikube/ca.pem
	I0503 15:19:26.698862    9866 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18793-7269/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/18793-7269/.minikube/ca.pem (1078 bytes)
	I0503 15:19:26.698953    9866 exec_runner.go:144] found /Users/jenkins/minikube-integration/18793-7269/.minikube/cert.pem, removing ...
	I0503 15:19:26.698956    9866 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18793-7269/.minikube/cert.pem
	I0503 15:19:26.698994    9866 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18793-7269/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/18793-7269/.minikube/cert.pem (1123 bytes)
	I0503 15:19:26.699077    9866 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/18793-7269/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/18793-7269/.minikube/certs/ca-key.pem org=jenkins.stopped-upgrade-139000 san=[127.0.0.1 localhost minikube stopped-upgrade-139000]
	I0503 15:19:26.792225    9866 provision.go:177] copyRemoteCerts
	I0503 15:19:26.792258    9866 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0503 15:19:26.792266    9866 sshutil.go:53] new ssh client: &{IP:localhost Port:51368 SSHKeyPath:/Users/jenkins/minikube-integration/18793-7269/.minikube/machines/stopped-upgrade-139000/id_rsa Username:docker}
	I0503 15:19:26.821044    9866 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18793-7269/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0503 15:19:26.827925    9866 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0503 15:19:26.834381    9866 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0503 15:19:26.841396    9866 provision.go:87] duration metric: took 142.912167ms to configureAuth
	I0503 15:19:26.841408    9866 buildroot.go:189] setting minikube options for container-runtime
	I0503 15:19:26.841502    9866 config.go:182] Loaded profile config "stopped-upgrade-139000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0503 15:19:26.841537    9866 main.go:141] libmachine: Using SSH client type: native
	I0503 15:19:26.841628    9866 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100bfdc80] 0x100c004e0 <nil>  [] 0s} localhost 51368 <nil> <nil>}
	I0503 15:19:26.841636    9866 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0503 15:19:26.895976    9866 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0503 15:19:26.895988    9866 buildroot.go:70] root file system type: tmpfs
	I0503 15:19:26.896043    9866 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0503 15:19:26.896086    9866 main.go:141] libmachine: Using SSH client type: native
	I0503 15:19:26.896193    9866 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100bfdc80] 0x100c004e0 <nil>  [] 0s} localhost 51368 <nil> <nil>}
	I0503 15:19:26.896227    9866 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0503 15:19:26.953336    9866 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0503 15:19:26.953395    9866 main.go:141] libmachine: Using SSH client type: native
	I0503 15:19:26.953494    9866 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100bfdc80] 0x100c004e0 <nil>  [] 0s} localhost 51368 <nil> <nil>}
	I0503 15:19:26.953502    9866 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0503 15:19:27.302340    9866 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0503 15:19:27.302353    9866 machine.go:97] duration metric: took 807.225666ms to provisionDockerMachine
	I0503 15:19:27.302360    9866 start.go:293] postStartSetup for "stopped-upgrade-139000" (driver="qemu2")
	I0503 15:19:27.302366    9866 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0503 15:19:27.302413    9866 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0503 15:19:27.302421    9866 sshutil.go:53] new ssh client: &{IP:localhost Port:51368 SSHKeyPath:/Users/jenkins/minikube-integration/18793-7269/.minikube/machines/stopped-upgrade-139000/id_rsa Username:docker}
	I0503 15:19:27.330568    9866 ssh_runner.go:195] Run: cat /etc/os-release
	I0503 15:19:27.331888    9866 info.go:137] Remote host: Buildroot 2021.02.12
	I0503 15:19:27.331895    9866 filesync.go:126] Scanning /Users/jenkins/minikube-integration/18793-7269/.minikube/addons for local assets ...
	I0503 15:19:27.331970    9866 filesync.go:126] Scanning /Users/jenkins/minikube-integration/18793-7269/.minikube/files for local assets ...
	I0503 15:19:27.332063    9866 filesync.go:149] local asset: /Users/jenkins/minikube-integration/18793-7269/.minikube/files/etc/ssl/certs/77682.pem -> 77682.pem in /etc/ssl/certs
	I0503 15:19:27.332152    9866 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0503 15:19:27.334587    9866 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18793-7269/.minikube/files/etc/ssl/certs/77682.pem --> /etc/ssl/certs/77682.pem (1708 bytes)
	I0503 15:19:27.341393    9866 start.go:296] duration metric: took 39.029291ms for postStartSetup
	I0503 15:19:27.341407    9866 fix.go:56] duration metric: took 21.051530375s for fixHost
	I0503 15:19:27.341448    9866 main.go:141] libmachine: Using SSH client type: native
	I0503 15:19:27.341554    9866 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100bfdc80] 0x100c004e0 <nil>  [] 0s} localhost 51368 <nil> <nil>}
	I0503 15:19:27.341563    9866 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0503 15:19:27.392243    9866 main.go:141] libmachine: SSH cmd err, output: <nil>: 1714774767.012595962
	
	I0503 15:19:27.392251    9866 fix.go:216] guest clock: 1714774767.012595962
	I0503 15:19:27.392255    9866 fix.go:229] Guest: 2024-05-03 15:19:27.012595962 -0700 PDT Remote: 2024-05-03 15:19:27.34141 -0700 PDT m=+21.171203459 (delta=-328.814038ms)
	I0503 15:19:27.392268    9866 fix.go:200] guest clock delta is within tolerance: -328.814038ms
	I0503 15:19:27.392271    9866 start.go:83] releasing machines lock for "stopped-upgrade-139000", held for 21.102405333s
	I0503 15:19:27.392332    9866 ssh_runner.go:195] Run: cat /version.json
	I0503 15:19:27.392334    9866 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0503 15:19:27.392340    9866 sshutil.go:53] new ssh client: &{IP:localhost Port:51368 SSHKeyPath:/Users/jenkins/minikube-integration/18793-7269/.minikube/machines/stopped-upgrade-139000/id_rsa Username:docker}
	I0503 15:19:27.392351    9866 sshutil.go:53] new ssh client: &{IP:localhost Port:51368 SSHKeyPath:/Users/jenkins/minikube-integration/18793-7269/.minikube/machines/stopped-upgrade-139000/id_rsa Username:docker}
	W0503 15:19:27.392928    9866 sshutil.go:64] dial failure (will retry): dial tcp [::1]:51368: connect: connection refused
	I0503 15:19:27.392950    9866 retry.go:31] will retry after 326.036256ms: dial tcp [::1]:51368: connect: connection refused
	W0503 15:19:27.418408    9866 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0503 15:19:27.418458    9866 ssh_runner.go:195] Run: systemctl --version
	I0503 15:19:27.420354    9866 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0503 15:19:27.421988    9866 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0503 15:19:27.422026    9866 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0503 15:19:27.425223    9866 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0503 15:19:27.429521    9866 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0503 15:19:27.429528    9866 start.go:494] detecting cgroup driver to use...
	I0503 15:19:27.429605    9866 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0503 15:19:27.435983    9866 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.7"|' /etc/containerd/config.toml"
	I0503 15:19:27.438782    9866 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0503 15:19:27.441797    9866 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0503 15:19:27.441822    9866 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0503 15:19:27.445202    9866 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0503 15:19:27.448000    9866 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0503 15:19:27.450676    9866 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0503 15:19:27.453877    9866 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0503 15:19:27.457063    9866 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0503 15:19:27.460229    9866 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0503 15:19:27.462807    9866 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0503 15:19:27.465943    9866 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0503 15:19:27.468930    9866 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0503 15:19:27.471557    9866 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0503 15:19:27.556410    9866 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0503 15:19:27.563798    9866 start.go:494] detecting cgroup driver to use...
	I0503 15:19:27.563866    9866 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0503 15:19:27.574981    9866 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0503 15:19:27.585163    9866 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0503 15:19:27.596455    9866 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0503 15:19:27.603797    9866 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0503 15:19:27.608493    9866 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0503 15:19:27.649801    9866 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0503 15:19:27.654882    9866 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0503 15:19:27.660192    9866 ssh_runner.go:195] Run: which cri-dockerd
	I0503 15:19:27.661417    9866 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0503 15:19:27.664459    9866 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0503 15:19:27.669635    9866 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0503 15:19:27.753729    9866 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0503 15:19:27.831728    9866 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0503 15:19:27.831787    9866 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0503 15:19:27.838452    9866 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0503 15:19:27.915250    9866 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0503 15:19:29.074809    9866 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.159569458s)
	I0503 15:19:29.074868    9866 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0503 15:19:29.079819    9866 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0503 15:19:29.085821    9866 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0503 15:19:29.090106    9866 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0503 15:19:29.171625    9866 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0503 15:19:29.257119    9866 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0503 15:19:29.335212    9866 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0503 15:19:29.340992    9866 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0503 15:19:29.345174    9866 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0503 15:19:29.421930    9866 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0503 15:19:29.460753    9866 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0503 15:19:29.460846    9866 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0503 15:19:29.463075    9866 start.go:562] Will wait 60s for crictl version
	I0503 15:19:29.463121    9866 ssh_runner.go:195] Run: which crictl
	I0503 15:19:29.464494    9866 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0503 15:19:29.478436    9866 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.16
	RuntimeApiVersion:  1.41.0
	I0503 15:19:29.478508    9866 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0503 15:19:29.494211    9866 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0503 15:19:29.515988    9866 out.go:204] * Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	I0503 15:19:29.516107    9866 ssh_runner.go:195] Run: grep 10.0.2.2	host.minikube.internal$ /etc/hosts
	I0503 15:19:29.517381    9866 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "10.0.2.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0503 15:19:29.522485    9866 kubeadm.go:877] updating cluster {Name:stopped-upgrade-139000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:51403 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-139000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s} ...
	I0503 15:19:29.522531    9866 preload.go:132] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0503 15:19:29.522569    9866 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0503 15:19:29.533095    9866 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0503 15:19:29.533104    9866 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0503 15:19:29.533155    9866 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0503 15:19:29.536649    9866 ssh_runner.go:195] Run: which lz4
	I0503 15:19:29.537924    9866 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0503 15:19:29.539154    9866 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0503 15:19:29.539165    9866 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18793-7269/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (359514331 bytes)
	I0503 15:19:30.249371    9866 docker.go:649] duration metric: took 711.491167ms to copy over tarball
	I0503 15:19:30.249430    9866 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0503 15:19:32.018898    9665 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0503 15:19:32.019268    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0503 15:19:32.054778    9665 logs.go:276] 2 containers: [4630927f679e c58ec9465be1]
	I0503 15:19:32.054919    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0503 15:19:32.076797    9665 logs.go:276] 2 containers: [b1ec22c1bc96 f90094320501]
	I0503 15:19:32.076921    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0503 15:19:32.091729    9665 logs.go:276] 1 containers: [a802e211e244]
	I0503 15:19:32.091809    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0503 15:19:32.104030    9665 logs.go:276] 2 containers: [58d277afe448 c33b5f027877]
	I0503 15:19:32.104098    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0503 15:19:32.114973    9665 logs.go:276] 1 containers: [d906735280f1]
	I0503 15:19:32.115035    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0503 15:19:32.126127    9665 logs.go:276] 2 containers: [794f5cf7e82f 52384ec84857]
	I0503 15:19:32.126190    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0503 15:19:32.137220    9665 logs.go:276] 0 containers: []
	W0503 15:19:32.137233    9665 logs.go:278] No container was found matching "kindnet"
	I0503 15:19:32.137295    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0503 15:19:32.147590    9665 logs.go:276] 0 containers: []
	W0503 15:19:32.147602    9665 logs.go:278] No container was found matching "storage-provisioner"
	I0503 15:19:32.147612    9665 logs.go:123] Gathering logs for dmesg ...
	I0503 15:19:32.147618    9665 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0503 15:19:32.152059    9665 logs.go:123] Gathering logs for kube-apiserver [c58ec9465be1] ...
	I0503 15:19:32.152067    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c58ec9465be1"
	I0503 15:19:32.171941    9665 logs.go:123] Gathering logs for kube-proxy [d906735280f1] ...
	I0503 15:19:32.171951    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d906735280f1"
	I0503 15:19:32.184620    9665 logs.go:123] Gathering logs for container status ...
	I0503 15:19:32.184630    9665 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0503 15:19:32.196191    9665 logs.go:123] Gathering logs for kube-scheduler [c33b5f027877] ...
	I0503 15:19:32.196204    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c33b5f027877"
	I0503 15:19:32.214845    9665 logs.go:123] Gathering logs for kube-controller-manager [52384ec84857] ...
	I0503 15:19:32.214858    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 52384ec84857"
	I0503 15:19:32.228493    9665 logs.go:123] Gathering logs for Docker ...
	I0503 15:19:32.228504    9665 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0503 15:19:32.251383    9665 logs.go:123] Gathering logs for kubelet ...
	I0503 15:19:32.251390    9665 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0503 15:19:32.289030    9665 logs.go:123] Gathering logs for describe nodes ...
	I0503 15:19:32.289036    9665 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0503 15:19:32.322850    9665 logs.go:123] Gathering logs for kube-apiserver [4630927f679e] ...
	I0503 15:19:32.322860    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4630927f679e"
	I0503 15:19:32.337341    9665 logs.go:123] Gathering logs for kube-controller-manager [794f5cf7e82f] ...
	I0503 15:19:32.337353    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 794f5cf7e82f"
	I0503 15:19:32.354778    9665 logs.go:123] Gathering logs for etcd [b1ec22c1bc96] ...
	I0503 15:19:32.354786    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1ec22c1bc96"
	I0503 15:19:32.368820    9665 logs.go:123] Gathering logs for etcd [f90094320501] ...
	I0503 15:19:32.368830    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f90094320501"
	I0503 15:19:32.386188    9665 logs.go:123] Gathering logs for coredns [a802e211e244] ...
	I0503 15:19:32.386200    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a802e211e244"
	I0503 15:19:32.398047    9665 logs.go:123] Gathering logs for kube-scheduler [58d277afe448] ...
	I0503 15:19:32.398059    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 58d277afe448"
	I0503 15:19:31.393770    9866 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.144353125s)
	I0503 15:19:31.393783    9866 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0503 15:19:31.409704    9866 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0503 15:19:31.413450    9866 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2523 bytes)
	I0503 15:19:31.418656    9866 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0503 15:19:31.496488    9866 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0503 15:19:33.109419    9866 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.612949542s)
	I0503 15:19:33.109519    9866 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0503 15:19:33.122212    9866 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	<none>:<none>
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0503 15:19:33.122223    9866 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0503 15:19:33.122229    9866 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.1 registry.k8s.io/kube-controller-manager:v1.24.1 registry.k8s.io/kube-scheduler:v1.24.1 registry.k8s.io/kube-proxy:v1.24.1 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0503 15:19:33.133818    9866 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0503 15:19:33.133852    9866 image.go:134] retrieving image: registry.k8s.io/pause:3.7
	I0503 15:19:33.133897    9866 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0503 15:19:33.133975    9866 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.24.1
	I0503 15:19:33.134041    9866 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.24.1
	I0503 15:19:33.134106    9866 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.24.1
	I0503 15:19:33.134224    9866 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0503 15:19:33.134252    9866 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0503 15:19:33.143188    9866 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0503 15:19:33.144588    9866 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0503 15:19:33.144675    9866 image.go:177] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0503 15:19:33.148042    9866 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.1
	I0503 15:19:33.148063    9866 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.1
	I0503 15:19:33.148153    9866 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.1
	I0503 15:19:33.148173    9866 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0503 15:19:33.148487    9866 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	W0503 15:19:33.921053    9866 image.go:265] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0503 15:19:33.921632    9866 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0503 15:19:33.958187    9866 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I0503 15:19:33.958236    9866 docker.go:337] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0503 15:19:33.958336    9866 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0503 15:19:33.982706    9866 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18793-7269/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0503 15:19:33.982842    9866 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0503 15:19:33.984706    9866 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I0503 15:19:33.984723    9866 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18793-7269/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I0503 15:19:34.010324    9866 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I0503 15:19:34.011109    9866 docker.go:304] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0503 15:19:34.011116    9866 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load"
	I0503 15:19:34.021306    9866 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550" in container runtime
	I0503 15:19:34.021329    9866 docker.go:337] Removing image: registry.k8s.io/pause:3.7
	I0503 15:19:34.021386    9866 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.7
	I0503 15:19:34.049399    9866 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.1
	I0503 15:19:34.082391    9866 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.1
	I0503 15:19:34.125346    9866 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.1
	I0503 15:19:34.185898    9866 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.1
	I0503 15:19:34.197386    9866 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	W0503 15:19:34.205651    9866 image.go:265] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0503 15:19:34.205739    9866 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0503 15:19:34.282792    9866 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/18793-7269/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0503 15:19:34.282827    9866 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18793-7269/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0503 15:19:34.282853    9866 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.1" does not exist at hash "f61bbe9259d7caa580deb6c8e4bfd1780c7b5887efe3aa3adc7cc74f68a27c1b" in container runtime
	I0503 15:19:34.282870    9866 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0503 15:19:34.282881    9866 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.1" does not exist at hash "7c5896a75862a8bc252122185a929cec1393db2c525f6440137d4fbf46bbf6f9" in container runtime
	I0503 15:19:34.282893    9866 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.24.1
	I0503 15:19:34.282920    9866 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.24.1
	I0503 15:19:34.282924    9866 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.24.1
	I0503 15:19:34.282930    9866 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.1" does not exist at hash "000c19baf6bba51ff7ae5449f4c7a16d9190cef0263f58070dbf62cea9c4982f" in container runtime
	I0503 15:19:34.282943    9866 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.24.1
	I0503 15:19:34.282933    9866 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I0503 15:19:34.282947    9866 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.1" needs transfer: "registry.k8s.io/kube-proxy:v1.24.1" does not exist at hash "fcbd620bbac080b658602c597709b377cb2b3fec134a097a27f94cba9b2ed2fa" in container runtime
	I0503 15:19:34.282958    9866 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.24.1
	I0503 15:19:34.282963    9866 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "a9a710bb96df080e6b9c720eb85dc5b832ff84abf77263548d74fedec6466a5a" in container runtime
	I0503 15:19:34.282970    9866 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.24.1
	I0503 15:19:34.282973    9866 docker.go:337] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0503 15:19:34.282974    9866 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "6af7f860a8197bfa3fdb7dec2061aa33870253e87a1e91c492d55b8a4fd38d14" in container runtime
	I0503 15:19:34.282983    9866 docker.go:337] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0503 15:19:34.282989    9866 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.5.3-0
	I0503 15:19:34.282976    9866 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.24.1
	I0503 15:19:34.283003    9866 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0503 15:19:34.318875    9866 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18793-7269/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0503 15:19:34.318917    9866 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18793-7269/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1
	I0503 15:19:34.318934    9866 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18793-7269/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I0503 15:19:34.318941    9866 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18793-7269/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1
	I0503 15:19:34.318981    9866 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18793-7269/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1
	I0503 15:19:34.318998    9866 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18793-7269/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1
	I0503 15:19:34.319003    9866 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I0503 15:19:34.319061    9866 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.7: stat -c "%s %y" /var/lib/minikube/images/pause_3.7: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.7': No such file or directory
	I0503 15:19:34.319070    9866 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18793-7269/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 --> /var/lib/minikube/images/pause_3.7 (268288 bytes)
	I0503 15:19:34.321344    9866 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.8.6: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.8.6': No such file or directory
	I0503 15:19:34.321362    9866 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18793-7269/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 --> /var/lib/minikube/images/coredns_v1.8.6 (12318720 bytes)
	I0503 15:19:34.334131    9866 docker.go:304] Loading image: /var/lib/minikube/images/pause_3.7
	I0503 15:19:34.334146    9866 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.7 | docker load"
	I0503 15:19:34.379399    9866 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/18793-7269/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 from cache
	I0503 15:19:34.379424    9866 docker.go:304] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0503 15:19:34.379432    9866 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.8.6 | docker load"
	I0503 15:19:34.420318    9866 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/18793-7269/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I0503 15:19:34.420354    9866 cache_images.go:92] duration metric: took 1.298148917s to LoadCachedImages
	W0503 15:19:34.420396    9866 out.go:239] X Unable to load cached images: loading cached images: stat /Users/jenkins/minikube-integration/18793-7269/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1: no such file or directory
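	The lines above show minikube's cached-image path: stat probes whether the tarball is already on the node, a missing file triggers the scp transfer, and the tar is then piped into `docker load`. Purely as an illustrative sketch (this is not minikube's actual code, and the function name is invented), that check-then-load step could be written as:

	package main

	import (
		"fmt"
		"os/exec"
	)

	// loadCachedImage mirrors the sequence in the log: confirm the cached
	// tarball exists on the node, then stream it into the Docker daemon.
	func loadCachedImage(tar string) error {
		// Existence probe: stat exits non-zero when the file is absent,
		// which above is what triggers the scp transfer first.
		if err := exec.Command("stat", tar).Run(); err != nil {
			return fmt.Errorf("%s not on the node yet, transfer it first: %w", tar, err)
		}
		// Equivalent of: /bin/bash -c "sudo cat <tar> | docker load"
		out, err := exec.Command("/bin/bash", "-c", "sudo cat "+tar+" | docker load").CombinedOutput()
		if err != nil {
			return fmt.Errorf("docker load failed: %s: %w", out, err)
		}
		return nil
	}

	func main() {
		fmt.Println(loadCachedImage("/var/lib/minikube/images/pause_3.7"))
	}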
	I0503 15:19:34.420402    9866 kubeadm.go:928] updating node { 10.0.2.15 8443 v1.24.1 docker true true} ...
	I0503 15:19:34.420451    9866 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=stopped-upgrade-139000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.0.2.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-139000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0503 15:19:34.420517    9866 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0503 15:19:34.433755    9866 cni.go:84] Creating CNI manager for ""
	I0503 15:19:34.433767    9866 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0503 15:19:34.433772    9866 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0503 15:19:34.433780    9866 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.0.2.15 APIServerPort:8443 KubernetesVersion:v1.24.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:stopped-upgrade-139000 NodeName:stopped-upgrade-139000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.0.2.15"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.0.2.15 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0503 15:19:34.433839    9866 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.0.2.15
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "stopped-upgrade-139000"
	  kubeletExtraArgs:
	    node-ip: 10.0.2.15
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.0.2.15"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
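	The kubeadm config above is rendered from the options struct logged at kubeadm.go:181 (same ClusterName, CRISocket, pod subnet, and so on). As a loose sketch of that rendering step only, with our own template and struct (none of these names come from minikube), a text/template version might look like:

	package main

	import (
		"os"
		"text/template"
	)

	// params is a hypothetical subset of the kubeadm options shown in the log.
	type params struct {
		NodeName, NodeIP, PodSubnet, K8sVersion string
	}

	const tmpl = `apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: {{.NodeIP}}
	nodeRegistration:
	  name: "{{.NodeName}}"
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	kubernetesVersion: {{.K8sVersion}}
	networking:
	  podSubnet: "{{.PodSubnet}}"
	`

	func main() {
		t := template.Must(template.New("kubeadm").Parse(tmpl))
		// Values taken from the log above.
		_ = t.Execute(os.Stdout, params{
			NodeName:   "stopped-upgrade-139000",
			NodeIP:     "10.0.2.15",
			PodSubnet:  "10.244.0.0/16",
			K8sVersion: "v1.24.1",
		})
	}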
	I0503 15:19:34.433896    9866 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.1
	I0503 15:19:34.437266    9866 binaries.go:44] Found k8s binaries, skipping transfer
	I0503 15:19:34.437291    9866 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0503 15:19:34.440137    9866 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I0503 15:19:34.445128    9866 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0503 15:19:34.450475    9866 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2096 bytes)
	I0503 15:19:34.456003    9866 ssh_runner.go:195] Run: grep 10.0.2.15	control-plane.minikube.internal$ /etc/hosts
	I0503 15:19:34.457216    9866 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "10.0.2.15	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0503 15:19:34.461224    9866 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0503 15:19:34.538913    9866 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0503 15:19:34.544249    9866 certs.go:68] Setting up /Users/jenkins/minikube-integration/18793-7269/.minikube/profiles/stopped-upgrade-139000 for IP: 10.0.2.15
	I0503 15:19:34.544257    9866 certs.go:194] generating shared ca certs ...
	I0503 15:19:34.544266    9866 certs.go:226] acquiring lock for ca certs: {Name:mkd5f7db20634f49dfd68d117c1845d0b32f87c3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0503 15:19:34.544423    9866 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/18793-7269/.minikube/ca.key
	I0503 15:19:34.544463    9866 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/18793-7269/.minikube/proxy-client-ca.key
	I0503 15:19:34.544468    9866 certs.go:256] generating profile certs ...
	I0503 15:19:34.544533    9866 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/18793-7269/.minikube/profiles/stopped-upgrade-139000/client.key
	I0503 15:19:34.544550    9866 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/18793-7269/.minikube/profiles/stopped-upgrade-139000/apiserver.key.608353ee
	I0503 15:19:34.544563    9866 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/18793-7269/.minikube/profiles/stopped-upgrade-139000/apiserver.crt.608353ee with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.0.2.15]
	I0503 15:19:34.620433    9866 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/18793-7269/.minikube/profiles/stopped-upgrade-139000/apiserver.crt.608353ee ...
	I0503 15:19:34.620446    9866 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18793-7269/.minikube/profiles/stopped-upgrade-139000/apiserver.crt.608353ee: {Name:mkfd69199119256217f07b88ee1c6751e2f6621c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0503 15:19:34.620788    9866 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/18793-7269/.minikube/profiles/stopped-upgrade-139000/apiserver.key.608353ee ...
	I0503 15:19:34.620798    9866 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18793-7269/.minikube/profiles/stopped-upgrade-139000/apiserver.key.608353ee: {Name:mkf29d39b02b6b149fcea2faecc622cbf616741c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0503 15:19:34.620932    9866 certs.go:381] copying /Users/jenkins/minikube-integration/18793-7269/.minikube/profiles/stopped-upgrade-139000/apiserver.crt.608353ee -> /Users/jenkins/minikube-integration/18793-7269/.minikube/profiles/stopped-upgrade-139000/apiserver.crt
	I0503 15:19:34.621045    9866 certs.go:385] copying /Users/jenkins/minikube-integration/18793-7269/.minikube/profiles/stopped-upgrade-139000/apiserver.key.608353ee -> /Users/jenkins/minikube-integration/18793-7269/.minikube/profiles/stopped-upgrade-139000/apiserver.key
	I0503 15:19:34.621172    9866 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/18793-7269/.minikube/profiles/stopped-upgrade-139000/proxy-client.key
	I0503 15:19:34.621289    9866 certs.go:484] found cert: /Users/jenkins/minikube-integration/18793-7269/.minikube/certs/7768.pem (1338 bytes)
	W0503 15:19:34.621310    9866 certs.go:480] ignoring /Users/jenkins/minikube-integration/18793-7269/.minikube/certs/7768_empty.pem, impossibly tiny 0 bytes
	I0503 15:19:34.621315    9866 certs.go:484] found cert: /Users/jenkins/minikube-integration/18793-7269/.minikube/certs/ca-key.pem (1675 bytes)
	I0503 15:19:34.621333    9866 certs.go:484] found cert: /Users/jenkins/minikube-integration/18793-7269/.minikube/certs/ca.pem (1078 bytes)
	I0503 15:19:34.621350    9866 certs.go:484] found cert: /Users/jenkins/minikube-integration/18793-7269/.minikube/certs/cert.pem (1123 bytes)
	I0503 15:19:34.621368    9866 certs.go:484] found cert: /Users/jenkins/minikube-integration/18793-7269/.minikube/certs/key.pem (1675 bytes)
	I0503 15:19:34.621407    9866 certs.go:484] found cert: /Users/jenkins/minikube-integration/18793-7269/.minikube/files/etc/ssl/certs/77682.pem (1708 bytes)
	I0503 15:19:34.621717    9866 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18793-7269/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0503 15:19:34.628919    9866 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18793-7269/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0503 15:19:34.635861    9866 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18793-7269/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0503 15:19:34.642807    9866 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18793-7269/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0503 15:19:34.650144    9866 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18793-7269/.minikube/profiles/stopped-upgrade-139000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0503 15:19:34.657092    9866 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18793-7269/.minikube/profiles/stopped-upgrade-139000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0503 15:19:34.664841    9866 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18793-7269/.minikube/profiles/stopped-upgrade-139000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0503 15:19:34.671755    9866 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18793-7269/.minikube/profiles/stopped-upgrade-139000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0503 15:19:34.678683    9866 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18793-7269/.minikube/files/etc/ssl/certs/77682.pem --> /usr/share/ca-certificates/77682.pem (1708 bytes)
	I0503 15:19:34.685717    9866 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18793-7269/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0503 15:19:34.693009    9866 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18793-7269/.minikube/certs/7768.pem --> /usr/share/ca-certificates/7768.pem (1338 bytes)
	I0503 15:19:34.699898    9866 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0503 15:19:34.704799    9866 ssh_runner.go:195] Run: openssl version
	I0503 15:19:34.706714    9866 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/77682.pem && ln -fs /usr/share/ca-certificates/77682.pem /etc/ssl/certs/77682.pem"
	I0503 15:19:34.710017    9866 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/77682.pem
	I0503 15:19:34.711555    9866 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 May  3 22:03 /usr/share/ca-certificates/77682.pem
	I0503 15:19:34.711576    9866 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/77682.pem
	I0503 15:19:34.713277    9866 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/77682.pem /etc/ssl/certs/3ec20f2e.0"
	I0503 15:19:34.716377    9866 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0503 15:19:34.719270    9866 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0503 15:19:34.720578    9866 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 May  3 22:15 /usr/share/ca-certificates/minikubeCA.pem
	I0503 15:19:34.720595    9866 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0503 15:19:34.722354    9866 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0503 15:19:34.725817    9866 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7768.pem && ln -fs /usr/share/ca-certificates/7768.pem /etc/ssl/certs/7768.pem"
	I0503 15:19:34.729237    9866 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7768.pem
	I0503 15:19:34.730733    9866 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 May  3 22:03 /usr/share/ca-certificates/7768.pem
	I0503 15:19:34.730751    9866 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7768.pem
	I0503 15:19:34.732565    9866 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/7768.pem /etc/ssl/certs/51391683.0"
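	The three blocks above install each cert into the OpenSSL trust store: `openssl x509 -hash` prints the subject hash, and the cert is symlinked as <hash>.0 under /etc/ssl/certs so OpenSSL can look it up by hash (e.g. b5213941.0 for minikubeCA.pem). A minimal sketch of that step, with a hypothetical helper name and the same commands as the log:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// linkCert hashes a PEM certificate and symlinks it into the OpenSSL
	// trust store under its subject hash, as the log does via ln -fs.
	func linkCert(pem string) error {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
		if err != nil {
			return err
		}
		link := fmt.Sprintf("/etc/ssl/certs/%s.0", strings.TrimSpace(string(out)))
		// ln -fs is idempotent, so reruns simply refresh the link.
		return exec.Command("sudo", "ln", "-fs", pem, link).Run()
	}

	func main() {
		fmt.Println(linkCert("/usr/share/ca-certificates/minikubeCA.pem"))
	}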
	I0503 15:19:34.735466    9866 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0503 15:19:34.736762    9866 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0503 15:19:34.739297    9866 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0503 15:19:34.741112    9866 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0503 15:19:34.743266    9866 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0503 15:19:34.744982    9866 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0503 15:19:34.746667    9866 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
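	`openssl x509 -checkend 86400` exits non-zero when the certificate expires within the next 86400 seconds (24 hours), which is how these checks decide whether a cert must be regenerated. A Go equivalent using crypto/x509 (our sketch, not what minikube actually runs):

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	// expiresWithin reports whether the first certificate in path expires
	// within d, matching the semantics of `openssl x509 -checkend`.
	func expiresWithin(path string, d time.Duration) (bool, error) {
		data, err := os.ReadFile(path)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return false, fmt.Errorf("no PEM block in %s", path)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		return time.Now().Add(d).After(cert.NotAfter), nil
	}

	func main() {
		// 86400s = 24h, the window used by the checks above.
		fmt.Println(expiresWithin("/var/lib/minikube/certs/apiserver.crt", 24*time.Hour))
	}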
	I0503 15:19:34.748470    9866 kubeadm.go:391] StartCluster: {Name:stopped-upgrade-139000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:51403 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-139000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0503 15:19:34.748538    9866 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0503 15:19:34.759073    9866 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0503 15:19:34.762391    9866 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0503 15:19:34.762398    9866 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0503 15:19:34.762401    9866 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0503 15:19:34.762421    9866 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0503 15:19:34.765694    9866 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0503 15:19:34.765978    9866 kubeconfig.go:47] verify endpoint returned: get endpoint: "stopped-upgrade-139000" does not appear in /Users/jenkins/minikube-integration/18793-7269/kubeconfig
	I0503 15:19:34.766078    9866 kubeconfig.go:62] /Users/jenkins/minikube-integration/18793-7269/kubeconfig needs updating (will repair): [kubeconfig missing "stopped-upgrade-139000" cluster setting kubeconfig missing "stopped-upgrade-139000" context setting]
	I0503 15:19:34.766297    9866 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18793-7269/kubeconfig: {Name:mke212dafcd3f736eb33656fd60033aeff2dfcbc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0503 15:19:34.766719    9866 kapi.go:59] client config for stopped-upgrade-139000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/18793-7269/.minikube/profiles/stopped-upgrade-139000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/18793-7269/.minikube/profiles/stopped-upgrade-139000/client.key", CAFile:"/Users/jenkins/minikube-integration/18793-7269/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x101f8fcb0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0503 15:19:34.767033    9866 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0503 15:19:34.770022    9866 kubeadm.go:634] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -11,7 +11,7 @@
	       - signing
	       - authentication
	 nodeRegistration:
	-  criSocket: /var/run/cri-dockerd.sock
	+  criSocket: unix:///var/run/cri-dockerd.sock
	   name: "stopped-upgrade-139000"
	   kubeletExtraArgs:
	     node-ip: 10.0.2.15
	@@ -49,7 +49,9 @@
	 authentication:
	   x509:
	     clientCAFile: /var/lib/minikube/certs/ca.crt
	-cgroupDriver: systemd
	+cgroupDriver: cgroupfs
	+hairpinMode: hairpin-veth
	+runtimeRequestTimeout: 15m
	 clusterDomain: "cluster.local"
	 # disable disk resource management by default
	 imageGCHighThresholdPercent: 100
	
	-- /stdout --
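	The drift check above compares the kubeadm.yaml already on the node with the freshly rendered kubeadm.yaml.new via `diff -u`; any difference (here the criSocket scheme and the cgroup driver) forces a reconfigure. A sketch of that decision, assuming only diff's exit code matters (0 = identical, 1 = drift, anything else = failure); the function name is ours:

	package main

	import (
		"fmt"
		"os/exec"
	)

	// configDrifted runs `sudo diff -u old new` and interprets the exit code.
	func configDrifted(oldPath, newPath string) (bool, string, error) {
		out, err := exec.Command("sudo", "diff", "-u", oldPath, newPath).CombinedOutput()
		if err == nil {
			return false, "", nil // files identical, nothing to do
		}
		if ee, ok := err.(*exec.ExitError); ok && ee.ExitCode() == 1 {
			return true, string(out), nil // drift detected, reconfigure
		}
		return false, "", err // diff itself failed (missing file, etc.)
	}

	func main() {
		drift, diff, err := configDrifted(
			"/var/tmp/minikube/kubeadm.yaml", "/var/tmp/minikube/kubeadm.yaml.new")
		fmt.Println(drift, err)
		fmt.Print(diff)
	}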
	I0503 15:19:34.770027    9866 kubeadm.go:1154] stopping kube-system containers ...
	I0503 15:19:34.770066    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0503 15:19:34.780892    9866 docker.go:483] Stopping containers: [ed9610f55b0b c5583124a53e 4475fda52f0c a482d8d0479c 5b7eb4ef241b c8917f86a920 85023bbf7f9e 273a3c9f75a6]
	I0503 15:19:34.780965    9866 ssh_runner.go:195] Run: docker stop ed9610f55b0b c5583124a53e 4475fda52f0c a482d8d0479c 5b7eb4ef241b c8917f86a920 85023bbf7f9e 273a3c9f75a6
	I0503 15:19:34.792107    9866 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0503 15:19:34.797595    9866 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0503 15:19:34.800634    9866 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0503 15:19:34.800646    9866 kubeadm.go:156] found existing configuration files:
	
	I0503 15:19:34.800673    9866 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51403 /etc/kubernetes/admin.conf
	I0503 15:19:34.803043    9866 kubeadm.go:162] "https://control-plane.minikube.internal:51403" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:51403 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0503 15:19:34.803061    9866 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0503 15:19:34.805790    9866 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51403 /etc/kubernetes/kubelet.conf
	I0503 15:19:34.808701    9866 kubeadm.go:162] "https://control-plane.minikube.internal:51403" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:51403 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0503 15:19:34.808719    9866 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0503 15:19:34.811112    9866 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51403 /etc/kubernetes/controller-manager.conf
	I0503 15:19:34.813625    9866 kubeadm.go:162] "https://control-plane.minikube.internal:51403" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:51403 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0503 15:19:34.813647    9866 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0503 15:19:34.816593    9866 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51403 /etc/kubernetes/scheduler.conf
	I0503 15:19:34.819037    9866 kubeadm.go:162] "https://control-plane.minikube.internal:51403" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:51403 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0503 15:19:34.819057    9866 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0503 15:19:34.821986    9866 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0503 15:19:34.825277    9866 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0503 15:19:34.849437    9866 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0503 15:19:35.712791    9866 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0503 15:19:35.848988    9866 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0503 15:19:35.869459    9866 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0503 15:19:35.891063    9866 api_server.go:52] waiting for apiserver process to appear ...
	I0503 15:19:35.891142    9866 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0503 15:19:34.911451    9665 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0503 15:19:36.391332    9866 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0503 15:19:36.893203    9866 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0503 15:19:36.897557    9866 api_server.go:72] duration metric: took 1.006519083s to wait for apiserver process to appear ...
	I0503 15:19:36.897566    9866 api_server.go:88] waiting for apiserver healthz status ...
	I0503 15:19:36.897574    9866 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0503 15:19:39.913531    9665 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
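	From here on, both processes (PIDs 9665 and 9866) poll https://10.0.2.15:8443/healthz; each probe times out after roughly five seconds ("Client.Timeout exceeded") and the loop retries until an overall deadline, falling back to log gathering in between. A simplified sketch of such a probe loop (assumed shape, not minikube's api_server.go; TLS verification is skipped because the probe may run before the client trusts the cluster CA):

	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	// waitHealthz polls url until it answers 200 or the deadline passes.
	func waitHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			Timeout: 5 * time.Second, // roughly the per-probe cutoff seen in the log
			Transport: &http.Transport{
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil // apiserver reports healthy
				}
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("apiserver never became healthy at %s", url)
	}

	func main() {
		fmt.Println(waitHealthz("https://10.0.2.15:8443/healthz", 4*time.Minute))
	}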
	I0503 15:19:39.913638    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0503 15:19:39.927180    9665 logs.go:276] 2 containers: [4630927f679e c58ec9465be1]
	I0503 15:19:39.927258    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0503 15:19:39.938423    9665 logs.go:276] 2 containers: [b1ec22c1bc96 f90094320501]
	I0503 15:19:39.938493    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0503 15:19:39.949923    9665 logs.go:276] 1 containers: [a802e211e244]
	I0503 15:19:39.949995    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0503 15:19:39.960581    9665 logs.go:276] 2 containers: [58d277afe448 c33b5f027877]
	I0503 15:19:39.960659    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0503 15:19:39.971208    9665 logs.go:276] 1 containers: [d906735280f1]
	I0503 15:19:39.971276    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0503 15:19:39.983419    9665 logs.go:276] 2 containers: [794f5cf7e82f 52384ec84857]
	I0503 15:19:39.983484    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0503 15:19:39.993806    9665 logs.go:276] 0 containers: []
	W0503 15:19:39.993818    9665 logs.go:278] No container was found matching "kindnet"
	I0503 15:19:39.993879    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0503 15:19:40.004248    9665 logs.go:276] 0 containers: []
	W0503 15:19:40.004260    9665 logs.go:278] No container was found matching "storage-provisioner"
	I0503 15:19:40.004267    9665 logs.go:123] Gathering logs for coredns [a802e211e244] ...
	I0503 15:19:40.004272    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a802e211e244"
	I0503 15:19:40.017508    9665 logs.go:123] Gathering logs for kube-proxy [d906735280f1] ...
	I0503 15:19:40.017521    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d906735280f1"
	I0503 15:19:40.028954    9665 logs.go:123] Gathering logs for etcd [b1ec22c1bc96] ...
	I0503 15:19:40.028965    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1ec22c1bc96"
	I0503 15:19:40.043637    9665 logs.go:123] Gathering logs for etcd [f90094320501] ...
	I0503 15:19:40.043648    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f90094320501"
	I0503 15:19:40.061011    9665 logs.go:123] Gathering logs for kube-apiserver [4630927f679e] ...
	I0503 15:19:40.061021    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4630927f679e"
	I0503 15:19:40.078962    9665 logs.go:123] Gathering logs for kube-scheduler [c33b5f027877] ...
	I0503 15:19:40.078973    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c33b5f027877"
	I0503 15:19:40.093573    9665 logs.go:123] Gathering logs for dmesg ...
	I0503 15:19:40.093588    9665 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0503 15:19:40.097784    9665 logs.go:123] Gathering logs for kube-controller-manager [52384ec84857] ...
	I0503 15:19:40.097791    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 52384ec84857"
	I0503 15:19:40.111024    9665 logs.go:123] Gathering logs for kube-apiserver [c58ec9465be1] ...
	I0503 15:19:40.111039    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c58ec9465be1"
	I0503 15:19:40.129802    9665 logs.go:123] Gathering logs for kube-scheduler [58d277afe448] ...
	I0503 15:19:40.129815    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 58d277afe448"
	I0503 15:19:40.141959    9665 logs.go:123] Gathering logs for kube-controller-manager [794f5cf7e82f] ...
	I0503 15:19:40.141970    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 794f5cf7e82f"
	I0503 15:19:40.159246    9665 logs.go:123] Gathering logs for Docker ...
	I0503 15:19:40.159256    9665 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0503 15:19:40.182902    9665 logs.go:123] Gathering logs for container status ...
	I0503 15:19:40.182909    9665 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0503 15:19:40.194940    9665 logs.go:123] Gathering logs for kubelet ...
	I0503 15:19:40.194950    9665 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0503 15:19:40.235333    9665 logs.go:123] Gathering logs for describe nodes ...
	I0503 15:19:40.235344    9665 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0503 15:19:42.771775    9665 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0503 15:19:41.899626    9866 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0503 15:19:41.899670    9866 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0503 15:19:47.774413    9665 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0503 15:19:47.774808    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0503 15:19:47.811347    9665 logs.go:276] 2 containers: [4630927f679e c58ec9465be1]
	I0503 15:19:47.811485    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0503 15:19:47.833051    9665 logs.go:276] 2 containers: [b1ec22c1bc96 f90094320501]
	I0503 15:19:47.833165    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0503 15:19:47.848399    9665 logs.go:276] 1 containers: [a802e211e244]
	I0503 15:19:47.848476    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0503 15:19:47.861056    9665 logs.go:276] 2 containers: [58d277afe448 c33b5f027877]
	I0503 15:19:47.861131    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0503 15:19:47.872366    9665 logs.go:276] 1 containers: [d906735280f1]
	I0503 15:19:47.872434    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0503 15:19:47.883035    9665 logs.go:276] 2 containers: [794f5cf7e82f 52384ec84857]
	I0503 15:19:47.883104    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0503 15:19:47.893551    9665 logs.go:276] 0 containers: []
	W0503 15:19:47.893562    9665 logs.go:278] No container was found matching "kindnet"
	I0503 15:19:47.893622    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0503 15:19:47.905408    9665 logs.go:276] 0 containers: []
	W0503 15:19:47.905420    9665 logs.go:278] No container was found matching "storage-provisioner"
	I0503 15:19:47.905428    9665 logs.go:123] Gathering logs for kube-proxy [d906735280f1] ...
	I0503 15:19:47.905433    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d906735280f1"
	I0503 15:19:47.919060    9665 logs.go:123] Gathering logs for container status ...
	I0503 15:19:47.919074    9665 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0503 15:19:47.931227    9665 logs.go:123] Gathering logs for kube-apiserver [c58ec9465be1] ...
	I0503 15:19:47.931243    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c58ec9465be1"
	I0503 15:19:47.954565    9665 logs.go:123] Gathering logs for etcd [b1ec22c1bc96] ...
	I0503 15:19:47.954576    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1ec22c1bc96"
	I0503 15:19:47.969760    9665 logs.go:123] Gathering logs for etcd [f90094320501] ...
	I0503 15:19:47.969771    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f90094320501"
	I0503 15:19:47.991765    9665 logs.go:123] Gathering logs for Docker ...
	I0503 15:19:47.991777    9665 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0503 15:19:48.015468    9665 logs.go:123] Gathering logs for describe nodes ...
	I0503 15:19:48.015487    9665 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0503 15:19:48.050716    9665 logs.go:123] Gathering logs for dmesg ...
	I0503 15:19:48.050728    9665 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0503 15:19:48.055122    9665 logs.go:123] Gathering logs for kube-apiserver [4630927f679e] ...
	I0503 15:19:48.055129    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4630927f679e"
	I0503 15:19:48.069496    9665 logs.go:123] Gathering logs for coredns [a802e211e244] ...
	I0503 15:19:48.069512    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a802e211e244"
	I0503 15:19:48.081979    9665 logs.go:123] Gathering logs for kube-scheduler [c33b5f027877] ...
	I0503 15:19:48.081991    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c33b5f027877"
	I0503 15:19:48.096241    9665 logs.go:123] Gathering logs for kube-controller-manager [794f5cf7e82f] ...
	I0503 15:19:48.096252    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 794f5cf7e82f"
	I0503 15:19:48.113208    9665 logs.go:123] Gathering logs for kubelet ...
	I0503 15:19:48.113218    9665 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0503 15:19:48.151977    9665 logs.go:123] Gathering logs for kube-controller-manager [52384ec84857] ...
	I0503 15:19:48.151984    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 52384ec84857"
	I0503 15:19:48.165470    9665 logs.go:123] Gathering logs for kube-scheduler [58d277afe448] ...
	I0503 15:19:48.165479    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 58d277afe448"
	I0503 15:19:46.899981    9866 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0503 15:19:46.900066    9866 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0503 15:19:50.678534    9665 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0503 15:19:51.900867    9866 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0503 15:19:51.900957    9866 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0503 15:19:55.681069    9665 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0503 15:19:55.681279    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0503 15:19:55.700427    9665 logs.go:276] 2 containers: [4630927f679e c58ec9465be1]
	I0503 15:19:55.700515    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0503 15:19:55.714983    9665 logs.go:276] 2 containers: [b1ec22c1bc96 f90094320501]
	I0503 15:19:55.715057    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0503 15:19:55.726890    9665 logs.go:276] 1 containers: [a802e211e244]
	I0503 15:19:55.726958    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0503 15:19:55.739144    9665 logs.go:276] 2 containers: [58d277afe448 c33b5f027877]
	I0503 15:19:55.739214    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0503 15:19:55.750005    9665 logs.go:276] 1 containers: [d906735280f1]
	I0503 15:19:55.750071    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0503 15:19:55.760644    9665 logs.go:276] 2 containers: [794f5cf7e82f 52384ec84857]
	I0503 15:19:55.760713    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0503 15:19:55.771583    9665 logs.go:276] 0 containers: []
	W0503 15:19:55.771596    9665 logs.go:278] No container was found matching "kindnet"
	I0503 15:19:55.771657    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0503 15:19:55.783286    9665 logs.go:276] 0 containers: []
	W0503 15:19:55.783297    9665 logs.go:278] No container was found matching "storage-provisioner"
	I0503 15:19:55.783304    9665 logs.go:123] Gathering logs for container status ...
	I0503 15:19:55.783311    9665 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0503 15:19:55.795719    9665 logs.go:123] Gathering logs for dmesg ...
	I0503 15:19:55.795731    9665 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0503 15:19:55.800470    9665 logs.go:123] Gathering logs for kube-apiserver [c58ec9465be1] ...
	I0503 15:19:55.800477    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c58ec9465be1"
	I0503 15:19:55.821652    9665 logs.go:123] Gathering logs for etcd [f90094320501] ...
	I0503 15:19:55.821675    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f90094320501"
	I0503 15:19:55.840468    9665 logs.go:123] Gathering logs for kube-proxy [d906735280f1] ...
	I0503 15:19:55.840482    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d906735280f1"
	I0503 15:19:55.853530    9665 logs.go:123] Gathering logs for Docker ...
	I0503 15:19:55.853542    9665 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0503 15:19:55.878416    9665 logs.go:123] Gathering logs for describe nodes ...
	I0503 15:19:55.878432    9665 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0503 15:19:55.919031    9665 logs.go:123] Gathering logs for kube-apiserver [4630927f679e] ...
	I0503 15:19:55.919044    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4630927f679e"
	I0503 15:19:55.937578    9665 logs.go:123] Gathering logs for kube-scheduler [c33b5f027877] ...
	I0503 15:19:55.937594    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c33b5f027877"
	I0503 15:19:55.952900    9665 logs.go:123] Gathering logs for kube-controller-manager [794f5cf7e82f] ...
	I0503 15:19:55.952914    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 794f5cf7e82f"
	I0503 15:19:55.975858    9665 logs.go:123] Gathering logs for kubelet ...
	I0503 15:19:55.975874    9665 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0503 15:19:56.018725    9665 logs.go:123] Gathering logs for etcd [b1ec22c1bc96] ...
	I0503 15:19:56.018742    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1ec22c1bc96"
	I0503 15:19:56.038084    9665 logs.go:123] Gathering logs for kube-controller-manager [52384ec84857] ...
	I0503 15:19:56.038097    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 52384ec84857"
	I0503 15:19:56.052729    9665 logs.go:123] Gathering logs for coredns [a802e211e244] ...
	I0503 15:19:56.052742    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a802e211e244"
	I0503 15:19:56.065740    9665 logs.go:123] Gathering logs for kube-scheduler [58d277afe448] ...
	I0503 15:19:56.065756    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 58d277afe448"
	I0503 15:19:58.580959    9665 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0503 15:19:56.901828    9866 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0503 15:19:56.901884    9866 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0503 15:20:03.583540    9665 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0503 15:20:03.583605    9665 kubeadm.go:591] duration metric: took 4m3.717208333s to restartPrimaryControlPlane
	W0503 15:20:03.583663    9665 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0503 15:20:03.583688    9665 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0503 15:20:04.519818    9665 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0503 15:20:04.524874    9665 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0503 15:20:04.527650    9665 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0503 15:20:04.530781    9665 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0503 15:20:04.530787    9665 kubeadm.go:156] found existing configuration files:
	
	I0503 15:20:04.530809    9665 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51188 /etc/kubernetes/admin.conf
	I0503 15:20:04.533405    9665 kubeadm.go:162] "https://control-plane.minikube.internal:51188" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:51188 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0503 15:20:04.533428    9665 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0503 15:20:04.535969    9665 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51188 /etc/kubernetes/kubelet.conf
	I0503 15:20:04.538886    9665 kubeadm.go:162] "https://control-plane.minikube.internal:51188" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:51188 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0503 15:20:04.538907    9665 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0503 15:20:04.541540    9665 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51188 /etc/kubernetes/controller-manager.conf
	I0503 15:20:04.544154    9665 kubeadm.go:162] "https://control-plane.minikube.internal:51188" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:51188 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0503 15:20:04.544178    9665 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0503 15:20:04.547246    9665 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51188 /etc/kubernetes/scheduler.conf
	I0503 15:20:04.550193    9665 kubeadm.go:162] "https://control-plane.minikube.internal:51188" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:51188 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0503 15:20:04.550211    9665 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0503 15:20:04.552801    9665 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0503 15:20:04.570537    9665 kubeadm.go:309] [init] Using Kubernetes version: v1.24.1
	I0503 15:20:04.570637    9665 kubeadm.go:309] [preflight] Running pre-flight checks
	I0503 15:20:04.619727    9665 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0503 15:20:04.619789    9665 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0503 15:20:04.619859    9665 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0503 15:20:04.673192    9665 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0503 15:20:04.676276    9665 out.go:204]   - Generating certificates and keys ...
	I0503 15:20:04.676336    9665 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0503 15:20:04.676376    9665 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0503 15:20:04.676416    9665 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0503 15:20:04.676447    9665 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0503 15:20:04.676487    9665 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0503 15:20:04.676544    9665 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0503 15:20:04.676623    9665 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0503 15:20:04.676701    9665 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0503 15:20:04.676818    9665 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0503 15:20:04.676861    9665 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0503 15:20:04.676907    9665 kubeadm.go:309] [certs] Using the existing "sa" key
	I0503 15:20:04.676942    9665 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0503 15:20:04.815428    9665 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0503 15:20:04.903023    9665 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0503 15:20:05.001530    9665 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0503 15:20:05.115989    9665 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0503 15:20:05.146688    9665 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0503 15:20:05.146782    9665 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0503 15:20:05.146818    9665 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0503 15:20:05.225836    9665 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0503 15:20:01.902817    9866 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0503 15:20:01.902842    9866 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0503 15:20:05.228586    9665 out.go:204]   - Booting up control plane ...
	I0503 15:20:05.228634    9665 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0503 15:20:05.228671    9665 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0503 15:20:05.228711    9665 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0503 15:20:05.228758    9665 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0503 15:20:05.228851    9665 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0503 15:20:09.730618    9665 kubeadm.go:309] [apiclient] All control plane components are healthy after 4.503350 seconds
	I0503 15:20:09.730755    9665 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0503 15:20:09.736987    9665 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0503 15:20:10.246507    9665 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0503 15:20:10.246626    9665 kubeadm.go:309] [mark-control-plane] Marking the node running-upgrade-916000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0503 15:20:10.754352    9665 kubeadm.go:309] [bootstrap-token] Using token: dj6sbg.a4mz0vzy2cpqg7m8
	I0503 15:20:10.757654    9665 out.go:204]   - Configuring RBAC rules ...
	I0503 15:20:10.757722    9665 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0503 15:20:10.765676    9665 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0503 15:20:10.768015    9665 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0503 15:20:10.769363    9665 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0503 15:20:10.770519    9665 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0503 15:20:10.771582    9665 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0503 15:20:10.776031    9665 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0503 15:20:10.964160    9665 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0503 15:20:11.167342    9665 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0503 15:20:11.167851    9665 kubeadm.go:309] 
	I0503 15:20:11.167883    9665 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0503 15:20:11.167906    9665 kubeadm.go:309] 
	I0503 15:20:11.167956    9665 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0503 15:20:11.167976    9665 kubeadm.go:309] 
	I0503 15:20:11.167995    9665 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0503 15:20:11.168041    9665 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0503 15:20:11.168071    9665 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0503 15:20:11.168074    9665 kubeadm.go:309] 
	I0503 15:20:11.168111    9665 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0503 15:20:11.168178    9665 kubeadm.go:309] 
	I0503 15:20:11.168235    9665 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0503 15:20:11.168241    9665 kubeadm.go:309] 
	I0503 15:20:11.168281    9665 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0503 15:20:11.168332    9665 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0503 15:20:11.168402    9665 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0503 15:20:11.168409    9665 kubeadm.go:309] 
	I0503 15:20:11.168449    9665 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0503 15:20:11.168581    9665 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0503 15:20:11.168592    9665 kubeadm.go:309] 
	I0503 15:20:11.168632    9665 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token dj6sbg.a4mz0vzy2cpqg7m8 \
	I0503 15:20:11.168684    9665 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:33737b87ad0e0d503b26dd571c4ff24ab2c323775c7952fd1688c095e7432c54 \
	I0503 15:20:11.168702    9665 kubeadm.go:309] 	--control-plane 
	I0503 15:20:11.168704    9665 kubeadm.go:309] 
	I0503 15:20:11.168750    9665 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0503 15:20:11.168753    9665 kubeadm.go:309] 
	I0503 15:20:11.168834    9665 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token dj6sbg.a4mz0vzy2cpqg7m8 \
	I0503 15:20:11.168893    9665 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:33737b87ad0e0d503b26dd571c4ff24ab2c323775c7952fd1688c095e7432c54 
	I0503 15:20:11.168965    9665 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0503 15:20:11.168973    9665 cni.go:84] Creating CNI manager for ""
	I0503 15:20:11.168981    9665 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0503 15:20:11.173406    9665 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0503 15:20:06.903890    9866 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0503 15:20:06.903915    9866 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0503 15:20:11.180358    9665 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0503 15:20:11.183361    9665 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0503 15:20:11.189595    9665 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0503 15:20:11.189663    9665 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0503 15:20:11.189712    9665 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes running-upgrade-916000 minikube.k8s.io/updated_at=2024_05_03T15_20_11_0700 minikube.k8s.io/version=v1.33.0 minikube.k8s.io/commit=cc00050a34cebd4ea4e95f76540d25d17abab09a minikube.k8s.io/name=running-upgrade-916000 minikube.k8s.io/primary=true
	I0503 15:20:11.234677    9665 ops.go:34] apiserver oom_adj: -16
	I0503 15:20:11.234685    9665 kubeadm.go:1107] duration metric: took 45.060458ms to wait for elevateKubeSystemPrivileges
	W0503 15:20:11.234708    9665 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0503 15:20:11.234712    9665 kubeadm.go:393] duration metric: took 4m11.383673167s to StartCluster
	I0503 15:20:11.234721    9665 settings.go:142] acquiring lock: {Name:mkee9fdcf0e1a69d3ca7e09bf6e6cf0362ae7cc4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0503 15:20:11.234897    9665 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/18793-7269/kubeconfig
	I0503 15:20:11.235279    9665 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18793-7269/kubeconfig: {Name:mke212dafcd3f736eb33656fd60033aeff2dfcbc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0503 15:20:11.235484    9665 start.go:234] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0503 15:20:11.238318    9665 out.go:177] * Verifying Kubernetes components...
	I0503 15:20:11.235572    9665 config.go:182] Loaded profile config "running-upgrade-916000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0503 15:20:11.235562    9665 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0503 15:20:11.246322    9665 addons.go:69] Setting storage-provisioner=true in profile "running-upgrade-916000"
	I0503 15:20:11.246335    9665 addons.go:234] Setting addon storage-provisioner=true in "running-upgrade-916000"
	W0503 15:20:11.246338    9665 addons.go:243] addon storage-provisioner should already be in state true
	I0503 15:20:11.246357    9665 host.go:66] Checking if "running-upgrade-916000" exists ...
	I0503 15:20:11.246378    9665 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0503 15:20:11.246388    9665 addons.go:69] Setting default-storageclass=true in profile "running-upgrade-916000"
	I0503 15:20:11.246401    9665 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "running-upgrade-916000"
	I0503 15:20:11.247378    9665 kapi.go:59] client config for running-upgrade-916000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/18793-7269/.minikube/profiles/running-upgrade-916000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/18793-7269/.minikube/profiles/running-upgrade-916000/client.key", CAFile:"/Users/jenkins/minikube-integration/18793-7269/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x101eefcb0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0503 15:20:11.247496    9665 addons.go:234] Setting addon default-storageclass=true in "running-upgrade-916000"
	W0503 15:20:11.247501    9665 addons.go:243] addon default-storageclass should already be in state true
	I0503 15:20:11.247507    9665 host.go:66] Checking if "running-upgrade-916000" exists ...
	I0503 15:20:11.252301    9665 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0503 15:20:11.256356    9665 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0503 15:20:11.256362    9665 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0503 15:20:11.256368    9665 sshutil.go:53] new ssh client: &{IP:localhost Port:51156 SSHKeyPath:/Users/jenkins/minikube-integration/18793-7269/.minikube/machines/running-upgrade-916000/id_rsa Username:docker}
	I0503 15:20:11.256962    9665 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0503 15:20:11.256967    9665 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0503 15:20:11.256971    9665 sshutil.go:53] new ssh client: &{IP:localhost Port:51156 SSHKeyPath:/Users/jenkins/minikube-integration/18793-7269/.minikube/machines/running-upgrade-916000/id_rsa Username:docker}
	I0503 15:20:11.331726    9665 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0503 15:20:11.336751    9665 api_server.go:52] waiting for apiserver process to appear ...
	I0503 15:20:11.336798    9665 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0503 15:20:11.340960    9665 api_server.go:72] duration metric: took 105.467333ms to wait for apiserver process to appear ...
	I0503 15:20:11.340968    9665 api_server.go:88] waiting for apiserver healthz status ...
	I0503 15:20:11.340976    9665 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0503 15:20:11.364440    9665 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0503 15:20:11.364815    9665 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0503 15:20:11.905244    9866 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0503 15:20:11.905296    9866 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0503 15:20:16.343077    9665 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0503 15:20:16.343192    9665 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0503 15:20:16.907171    9866 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0503 15:20:16.907260    9866 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0503 15:20:21.343783    9665 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0503 15:20:21.343803    9665 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0503 15:20:21.909613    9866 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0503 15:20:21.909655    9866 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0503 15:20:26.344546    9665 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0503 15:20:26.344577    9665 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0503 15:20:26.911824    9866 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0503 15:20:26.911878    9866 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0503 15:20:31.345137    9665 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0503 15:20:31.345162    9665 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0503 15:20:31.913645    9866 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0503 15:20:31.913687    9866 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0503 15:20:36.345940    9665 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0503 15:20:36.345960    9665 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0503 15:20:36.915859    9866 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0503 15:20:36.916153    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0503 15:20:36.945787    9866 logs.go:276] 2 containers: [9153de1fe283 5b7eb4ef241b]
	I0503 15:20:36.945918    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0503 15:20:36.963506    9866 logs.go:276] 2 containers: [40c7a6c8aada c5583124a53e]
	I0503 15:20:36.963596    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0503 15:20:36.978028    9866 logs.go:276] 1 containers: [a0cf5fe4185b]
	I0503 15:20:36.978100    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0503 15:20:36.989798    9866 logs.go:276] 2 containers: [b8238d29950d 4475fda52f0c]
	I0503 15:20:36.989871    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0503 15:20:37.000755    9866 logs.go:276] 1 containers: [acb5f92ab28d]
	I0503 15:20:37.000822    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0503 15:20:37.011863    9866 logs.go:276] 2 containers: [ecab85e51144 ed9610f55b0b]
	I0503 15:20:37.011932    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0503 15:20:37.022024    9866 logs.go:276] 0 containers: []
	W0503 15:20:37.022038    9866 logs.go:278] No container was found matching "kindnet"
	I0503 15:20:37.022096    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0503 15:20:37.033190    9866 logs.go:276] 2 containers: [537c7b40f8ef c67b61af436e]
	I0503 15:20:37.033208    9866 logs.go:123] Gathering logs for kubelet ...
	I0503 15:20:37.033213    9866 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0503 15:20:37.072763    9866 logs.go:123] Gathering logs for kube-apiserver [9153de1fe283] ...
	I0503 15:20:37.072774    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9153de1fe283"
	I0503 15:20:37.087552    9866 logs.go:123] Gathering logs for kube-apiserver [5b7eb4ef241b] ...
	I0503 15:20:37.087565    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b7eb4ef241b"
	I0503 15:20:37.137402    9866 logs.go:123] Gathering logs for coredns [a0cf5fe4185b] ...
	I0503 15:20:37.137414    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0cf5fe4185b"
	I0503 15:20:37.148854    9866 logs.go:123] Gathering logs for kube-scheduler [4475fda52f0c] ...
	I0503 15:20:37.148867    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4475fda52f0c"
	I0503 15:20:37.164730    9866 logs.go:123] Gathering logs for kube-proxy [acb5f92ab28d] ...
	I0503 15:20:37.164746    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 acb5f92ab28d"
	I0503 15:20:37.177784    9866 logs.go:123] Gathering logs for kube-controller-manager [ecab85e51144] ...
	I0503 15:20:37.177796    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ecab85e51144"
	I0503 15:20:37.197481    9866 logs.go:123] Gathering logs for storage-provisioner [537c7b40f8ef] ...
	I0503 15:20:37.197493    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 537c7b40f8ef"
	I0503 15:20:37.209774    9866 logs.go:123] Gathering logs for container status ...
	I0503 15:20:37.209786    9866 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0503 15:20:37.221156    9866 logs.go:123] Gathering logs for kube-scheduler [b8238d29950d] ...
	I0503 15:20:37.221168    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8238d29950d"
	I0503 15:20:37.233172    9866 logs.go:123] Gathering logs for storage-provisioner [c67b61af436e] ...
	I0503 15:20:37.233185    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c67b61af436e"
	I0503 15:20:37.244708    9866 logs.go:123] Gathering logs for describe nodes ...
	I0503 15:20:37.244719    9866 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0503 15:20:37.346416    9866 logs.go:123] Gathering logs for etcd [c5583124a53e] ...
	I0503 15:20:37.346427    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5583124a53e"
	I0503 15:20:37.362105    9866 logs.go:123] Gathering logs for kube-controller-manager [ed9610f55b0b] ...
	I0503 15:20:37.362116    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed9610f55b0b"
	I0503 15:20:37.379944    9866 logs.go:123] Gathering logs for dmesg ...
	I0503 15:20:37.379958    9866 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0503 15:20:37.384740    9866 logs.go:123] Gathering logs for etcd [40c7a6c8aada] ...
	I0503 15:20:37.384747    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 40c7a6c8aada"
	I0503 15:20:37.398883    9866 logs.go:123] Gathering logs for Docker ...
	I0503 15:20:37.398900    9866 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0503 15:20:39.926943    9866 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0503 15:20:41.346845    9665 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0503 15:20:41.346905    9665 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W0503 15:20:41.716635    9665 out.go:239] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I0503 15:20:41.721940    9665 out.go:177] * Enabled addons: storage-provisioner
	I0503 15:20:41.733810    9665 addons.go:505] duration metric: took 30.498989084s for enable addons: enabled=[storage-provisioner]
	I0503 15:20:44.927822    9866 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0503 15:20:44.928217    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0503 15:20:44.969304    9866 logs.go:276] 2 containers: [9153de1fe283 5b7eb4ef241b]
	I0503 15:20:44.969432    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0503 15:20:44.991236    9866 logs.go:276] 2 containers: [40c7a6c8aada c5583124a53e]
	I0503 15:20:44.991339    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0503 15:20:45.004367    9866 logs.go:276] 1 containers: [a0cf5fe4185b]
	I0503 15:20:45.004436    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0503 15:20:45.016354    9866 logs.go:276] 2 containers: [b8238d29950d 4475fda52f0c]
	I0503 15:20:45.016431    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0503 15:20:45.027787    9866 logs.go:276] 1 containers: [acb5f92ab28d]
	I0503 15:20:45.027855    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0503 15:20:45.039068    9866 logs.go:276] 2 containers: [ecab85e51144 ed9610f55b0b]
	I0503 15:20:45.039131    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0503 15:20:45.049532    9866 logs.go:276] 0 containers: []
	W0503 15:20:45.049545    9866 logs.go:278] No container was found matching "kindnet"
	I0503 15:20:45.049609    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0503 15:20:45.060892    9866 logs.go:276] 2 containers: [537c7b40f8ef c67b61af436e]
	I0503 15:20:45.060922    9866 logs.go:123] Gathering logs for describe nodes ...
	I0503 15:20:45.060928    9866 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0503 15:20:45.100418    9866 logs.go:123] Gathering logs for kube-apiserver [5b7eb4ef241b] ...
	I0503 15:20:45.100433    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b7eb4ef241b"
	I0503 15:20:45.138442    9866 logs.go:123] Gathering logs for kube-controller-manager [ed9610f55b0b] ...
	I0503 15:20:45.138453    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed9610f55b0b"
	I0503 15:20:45.151883    9866 logs.go:123] Gathering logs for storage-provisioner [537c7b40f8ef] ...
	I0503 15:20:45.151894    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 537c7b40f8ef"
	I0503 15:20:45.163012    9866 logs.go:123] Gathering logs for kube-apiserver [9153de1fe283] ...
	I0503 15:20:45.163023    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9153de1fe283"
	I0503 15:20:45.176544    9866 logs.go:123] Gathering logs for etcd [40c7a6c8aada] ...
	I0503 15:20:45.176556    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 40c7a6c8aada"
	I0503 15:20:45.194839    9866 logs.go:123] Gathering logs for etcd [c5583124a53e] ...
	I0503 15:20:45.194853    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5583124a53e"
	I0503 15:20:45.210192    9866 logs.go:123] Gathering logs for kube-scheduler [b8238d29950d] ...
	I0503 15:20:45.210206    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8238d29950d"
	I0503 15:20:45.222702    9866 logs.go:123] Gathering logs for Docker ...
	I0503 15:20:45.222712    9866 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0503 15:20:45.247546    9866 logs.go:123] Gathering logs for dmesg ...
	I0503 15:20:45.247554    9866 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0503 15:20:45.251664    9866 logs.go:123] Gathering logs for kube-scheduler [4475fda52f0c] ...
	I0503 15:20:45.251670    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4475fda52f0c"
	I0503 15:20:45.265879    9866 logs.go:123] Gathering logs for storage-provisioner [c67b61af436e] ...
	I0503 15:20:45.265889    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c67b61af436e"
	I0503 15:20:45.277804    9866 logs.go:123] Gathering logs for kubelet ...
	I0503 15:20:45.277814    9866 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0503 15:20:45.315732    9866 logs.go:123] Gathering logs for coredns [a0cf5fe4185b] ...
	I0503 15:20:45.315740    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0cf5fe4185b"
	I0503 15:20:45.327081    9866 logs.go:123] Gathering logs for kube-proxy [acb5f92ab28d] ...
	I0503 15:20:45.327093    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 acb5f92ab28d"
	I0503 15:20:45.340021    9866 logs.go:123] Gathering logs for kube-controller-manager [ecab85e51144] ...
	I0503 15:20:45.340032    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ecab85e51144"
	I0503 15:20:45.357116    9866 logs.go:123] Gathering logs for container status ...
	I0503 15:20:45.357128    9866 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0503 15:20:46.348222    9665 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0503 15:20:46.348248    9665 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0503 15:20:47.877879    9866 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0503 15:20:51.349891    9665 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0503 15:20:51.349939    9665 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0503 15:20:52.880226    9866 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0503 15:20:52.880478    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0503 15:20:52.902383    9866 logs.go:276] 2 containers: [9153de1fe283 5b7eb4ef241b]
	I0503 15:20:52.902480    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0503 15:20:52.917720    9866 logs.go:276] 2 containers: [40c7a6c8aada c5583124a53e]
	I0503 15:20:52.917801    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0503 15:20:52.930112    9866 logs.go:276] 1 containers: [a0cf5fe4185b]
	I0503 15:20:52.930191    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0503 15:20:52.943724    9866 logs.go:276] 2 containers: [b8238d29950d 4475fda52f0c]
	I0503 15:20:52.943782    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0503 15:20:52.953807    9866 logs.go:276] 1 containers: [acb5f92ab28d]
	I0503 15:20:52.953875    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0503 15:20:52.964182    9866 logs.go:276] 2 containers: [ecab85e51144 ed9610f55b0b]
	I0503 15:20:52.964249    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0503 15:20:52.974832    9866 logs.go:276] 0 containers: []
	W0503 15:20:52.974843    9866 logs.go:278] No container was found matching "kindnet"
	I0503 15:20:52.974897    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0503 15:20:52.985421    9866 logs.go:276] 2 containers: [537c7b40f8ef c67b61af436e]
	I0503 15:20:52.985442    9866 logs.go:123] Gathering logs for dmesg ...
	I0503 15:20:52.985448    9866 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0503 15:20:52.990066    9866 logs.go:123] Gathering logs for kube-apiserver [5b7eb4ef241b] ...
	I0503 15:20:52.990072    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b7eb4ef241b"
	I0503 15:20:53.026975    9866 logs.go:123] Gathering logs for kube-scheduler [4475fda52f0c] ...
	I0503 15:20:53.026993    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4475fda52f0c"
	I0503 15:20:53.041762    9866 logs.go:123] Gathering logs for kube-controller-manager [ed9610f55b0b] ...
	I0503 15:20:53.041776    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed9610f55b0b"
	I0503 15:20:53.055186    9866 logs.go:123] Gathering logs for storage-provisioner [c67b61af436e] ...
	I0503 15:20:53.055204    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c67b61af436e"
	I0503 15:20:53.066630    9866 logs.go:123] Gathering logs for container status ...
	I0503 15:20:53.066642    9866 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0503 15:20:53.078262    9866 logs.go:123] Gathering logs for kubelet ...
	I0503 15:20:53.078276    9866 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0503 15:20:53.116204    9866 logs.go:123] Gathering logs for etcd [40c7a6c8aada] ...
	I0503 15:20:53.116212    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 40c7a6c8aada"
	I0503 15:20:53.129773    9866 logs.go:123] Gathering logs for etcd [c5583124a53e] ...
	I0503 15:20:53.129783    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5583124a53e"
	I0503 15:20:53.143808    9866 logs.go:123] Gathering logs for coredns [a0cf5fe4185b] ...
	I0503 15:20:53.143817    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0cf5fe4185b"
	I0503 15:20:53.155067    9866 logs.go:123] Gathering logs for kube-controller-manager [ecab85e51144] ...
	I0503 15:20:53.155077    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ecab85e51144"
	I0503 15:20:53.171498    9866 logs.go:123] Gathering logs for storage-provisioner [537c7b40f8ef] ...
	I0503 15:20:53.171509    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 537c7b40f8ef"
	I0503 15:20:53.182805    9866 logs.go:123] Gathering logs for kube-scheduler [b8238d29950d] ...
	I0503 15:20:53.182819    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8238d29950d"
	I0503 15:20:53.194849    9866 logs.go:123] Gathering logs for kube-proxy [acb5f92ab28d] ...
	I0503 15:20:53.194865    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 acb5f92ab28d"
	I0503 15:20:53.206650    9866 logs.go:123] Gathering logs for Docker ...
	I0503 15:20:53.206660    9866 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0503 15:20:53.231367    9866 logs.go:123] Gathering logs for describe nodes ...
	I0503 15:20:53.231374    9866 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0503 15:20:53.268212    9866 logs.go:123] Gathering logs for kube-apiserver [9153de1fe283] ...
	I0503 15:20:53.268223    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9153de1fe283"
	I0503 15:20:55.784644    9866 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0503 15:20:56.352148    9665 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0503 15:20:56.352188    9665 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0503 15:21:00.786900    9866 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0503 15:21:00.787091    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0503 15:21:00.813435    9866 logs.go:276] 2 containers: [9153de1fe283 5b7eb4ef241b]
	I0503 15:21:00.813543    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0503 15:21:00.827289    9866 logs.go:276] 2 containers: [40c7a6c8aada c5583124a53e]
	I0503 15:21:00.827365    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0503 15:21:00.842391    9866 logs.go:276] 1 containers: [a0cf5fe4185b]
	I0503 15:21:00.842467    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0503 15:21:00.852676    9866 logs.go:276] 2 containers: [b8238d29950d 4475fda52f0c]
	I0503 15:21:00.852762    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0503 15:21:00.862993    9866 logs.go:276] 1 containers: [acb5f92ab28d]
	I0503 15:21:00.863060    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0503 15:21:00.879981    9866 logs.go:276] 2 containers: [ecab85e51144 ed9610f55b0b]
	I0503 15:21:00.880053    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0503 15:21:00.890523    9866 logs.go:276] 0 containers: []
	W0503 15:21:00.890535    9866 logs.go:278] No container was found matching "kindnet"
	I0503 15:21:00.890604    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0503 15:21:00.901564    9866 logs.go:276] 2 containers: [537c7b40f8ef c67b61af436e]
	I0503 15:21:00.901583    9866 logs.go:123] Gathering logs for kubelet ...
	I0503 15:21:00.901589    9866 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0503 15:21:00.939166    9866 logs.go:123] Gathering logs for kube-apiserver [5b7eb4ef241b] ...
	I0503 15:21:00.939179    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b7eb4ef241b"
	I0503 15:21:00.977050    9866 logs.go:123] Gathering logs for etcd [c5583124a53e] ...
	I0503 15:21:00.977065    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5583124a53e"
	I0503 15:21:00.991933    9866 logs.go:123] Gathering logs for dmesg ...
	I0503 15:21:00.991944    9866 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0503 15:21:00.996601    9866 logs.go:123] Gathering logs for kube-apiserver [9153de1fe283] ...
	I0503 15:21:00.996606    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9153de1fe283"
	I0503 15:21:01.010347    9866 logs.go:123] Gathering logs for coredns [a0cf5fe4185b] ...
	I0503 15:21:01.010358    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0cf5fe4185b"
	I0503 15:21:01.021509    9866 logs.go:123] Gathering logs for kube-scheduler [b8238d29950d] ...
	I0503 15:21:01.021524    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8238d29950d"
	I0503 15:21:01.034781    9866 logs.go:123] Gathering logs for kube-scheduler [4475fda52f0c] ...
	I0503 15:21:01.034793    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4475fda52f0c"
	I0503 15:21:01.049765    9866 logs.go:123] Gathering logs for kube-controller-manager [ecab85e51144] ...
	I0503 15:21:01.049780    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ecab85e51144"
	I0503 15:21:01.069839    9866 logs.go:123] Gathering logs for etcd [40c7a6c8aada] ...
	I0503 15:21:01.069850    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 40c7a6c8aada"
	I0503 15:21:01.083629    9866 logs.go:123] Gathering logs for kube-proxy [acb5f92ab28d] ...
	I0503 15:21:01.083640    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 acb5f92ab28d"
	I0503 15:21:01.095936    9866 logs.go:123] Gathering logs for kube-controller-manager [ed9610f55b0b] ...
	I0503 15:21:01.095947    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed9610f55b0b"
	I0503 15:21:01.110185    9866 logs.go:123] Gathering logs for describe nodes ...
	I0503 15:21:01.110196    9866 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0503 15:21:01.144552    9866 logs.go:123] Gathering logs for storage-provisioner [537c7b40f8ef] ...
	I0503 15:21:01.144564    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 537c7b40f8ef"
	I0503 15:21:01.159843    9866 logs.go:123] Gathering logs for storage-provisioner [c67b61af436e] ...
	I0503 15:21:01.159857    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c67b61af436e"
	I0503 15:21:01.170880    9866 logs.go:123] Gathering logs for Docker ...
	I0503 15:21:01.170892    9866 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0503 15:21:01.354304    9665 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0503 15:21:01.354329    9665 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0503 15:21:01.194744    9866 logs.go:123] Gathering logs for container status ...
	I0503 15:21:01.198297    9866 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0503 15:21:03.712890    9866 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0503 15:21:06.354703    9665 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0503 15:21:06.354733    9665 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0503 15:21:08.715423    9866 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0503 15:21:08.715630    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0503 15:21:08.730169    9866 logs.go:276] 2 containers: [9153de1fe283 5b7eb4ef241b]
	I0503 15:21:08.730263    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0503 15:21:08.742376    9866 logs.go:276] 2 containers: [40c7a6c8aada c5583124a53e]
	I0503 15:21:08.742454    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0503 15:21:08.752904    9866 logs.go:276] 1 containers: [a0cf5fe4185b]
	I0503 15:21:08.752968    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0503 15:21:08.763774    9866 logs.go:276] 2 containers: [b8238d29950d 4475fda52f0c]
	I0503 15:21:08.763854    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0503 15:21:08.778604    9866 logs.go:276] 1 containers: [acb5f92ab28d]
	I0503 15:21:08.778671    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0503 15:21:08.789219    9866 logs.go:276] 2 containers: [ecab85e51144 ed9610f55b0b]
	I0503 15:21:08.789279    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0503 15:21:08.799522    9866 logs.go:276] 0 containers: []
	W0503 15:21:08.799533    9866 logs.go:278] No container was found matching "kindnet"
	I0503 15:21:08.799590    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0503 15:21:08.810027    9866 logs.go:276] 2 containers: [537c7b40f8ef c67b61af436e]
	I0503 15:21:08.810047    9866 logs.go:123] Gathering logs for kube-controller-manager [ed9610f55b0b] ...
	I0503 15:21:08.810052    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed9610f55b0b"
	I0503 15:21:08.827392    9866 logs.go:123] Gathering logs for kubelet ...
	I0503 15:21:08.827402    9866 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0503 15:21:08.864942    9866 logs.go:123] Gathering logs for dmesg ...
	I0503 15:21:08.864952    9866 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0503 15:21:08.869371    9866 logs.go:123] Gathering logs for describe nodes ...
	I0503 15:21:08.869379    9866 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0503 15:21:08.903208    9866 logs.go:123] Gathering logs for kube-apiserver [9153de1fe283] ...
	I0503 15:21:08.903220    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9153de1fe283"
	I0503 15:21:08.917376    9866 logs.go:123] Gathering logs for kube-scheduler [b8238d29950d] ...
	I0503 15:21:08.917388    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8238d29950d"
	I0503 15:21:08.929337    9866 logs.go:123] Gathering logs for kube-proxy [acb5f92ab28d] ...
	I0503 15:21:08.929352    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 acb5f92ab28d"
	I0503 15:21:08.941175    9866 logs.go:123] Gathering logs for kube-controller-manager [ecab85e51144] ...
	I0503 15:21:08.941187    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ecab85e51144"
	I0503 15:21:08.958667    9866 logs.go:123] Gathering logs for storage-provisioner [c67b61af436e] ...
	I0503 15:21:08.958677    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c67b61af436e"
	I0503 15:21:08.969607    9866 logs.go:123] Gathering logs for container status ...
	I0503 15:21:08.969619    9866 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0503 15:21:08.981409    9866 logs.go:123] Gathering logs for kube-apiserver [5b7eb4ef241b] ...
	I0503 15:21:08.981421    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b7eb4ef241b"
	I0503 15:21:09.019110    9866 logs.go:123] Gathering logs for Docker ...
	I0503 15:21:09.019124    9866 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0503 15:21:09.043312    9866 logs.go:123] Gathering logs for kube-scheduler [4475fda52f0c] ...
	I0503 15:21:09.043325    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4475fda52f0c"
	I0503 15:21:09.057921    9866 logs.go:123] Gathering logs for etcd [40c7a6c8aada] ...
	I0503 15:21:09.057935    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 40c7a6c8aada"
	I0503 15:21:09.071581    9866 logs.go:123] Gathering logs for etcd [c5583124a53e] ...
	I0503 15:21:09.071592    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5583124a53e"
	I0503 15:21:09.086402    9866 logs.go:123] Gathering logs for coredns [a0cf5fe4185b] ...
	I0503 15:21:09.086412    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0cf5fe4185b"
	I0503 15:21:09.097456    9866 logs.go:123] Gathering logs for storage-provisioner [537c7b40f8ef] ...
	I0503 15:21:09.097467    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 537c7b40f8ef"
	I0503 15:21:11.356829    9665 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0503 15:21:11.357023    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0503 15:21:11.397459    9665 logs.go:276] 1 containers: [97b2e83dc539]
	I0503 15:21:11.397545    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0503 15:21:11.410662    9665 logs.go:276] 1 containers: [12854b004aa2]
	I0503 15:21:11.410738    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0503 15:21:11.426690    9665 logs.go:276] 2 containers: [c10faa2971c4 d411f2d0da0d]
	I0503 15:21:11.426773    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0503 15:21:11.437169    9665 logs.go:276] 1 containers: [ca00cca503fe]
	I0503 15:21:11.437228    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0503 15:21:11.448152    9665 logs.go:276] 1 containers: [bbac10efff1c]
	I0503 15:21:11.448227    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0503 15:21:11.458886    9665 logs.go:276] 1 containers: [6e6e24f8e828]
	I0503 15:21:11.458956    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0503 15:21:11.469005    9665 logs.go:276] 0 containers: []
	W0503 15:21:11.469016    9665 logs.go:278] No container was found matching "kindnet"
	I0503 15:21:11.469076    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0503 15:21:11.479446    9665 logs.go:276] 1 containers: [45a535211359]
	I0503 15:21:11.479461    9665 logs.go:123] Gathering logs for kube-controller-manager [6e6e24f8e828] ...
	I0503 15:21:11.479469    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e6e24f8e828"
	I0503 15:21:11.504715    9665 logs.go:123] Gathering logs for Docker ...
	I0503 15:21:11.504725    9665 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0503 15:21:11.528989    9665 logs.go:123] Gathering logs for dmesg ...
	I0503 15:21:11.528996    9665 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0503 15:21:11.533232    9665 logs.go:123] Gathering logs for describe nodes ...
	I0503 15:21:11.533241    9665 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0503 15:21:11.568341    9665 logs.go:123] Gathering logs for kube-apiserver [97b2e83dc539] ...
	I0503 15:21:11.568353    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97b2e83dc539"
	I0503 15:21:11.583029    9665 logs.go:123] Gathering logs for coredns [d411f2d0da0d] ...
	I0503 15:21:11.583041    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d411f2d0da0d"
	I0503 15:21:11.595725    9665 logs.go:123] Gathering logs for kube-scheduler [ca00cca503fe] ...
	I0503 15:21:11.595735    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca00cca503fe"
	I0503 15:21:11.610202    9665 logs.go:123] Gathering logs for kube-proxy [bbac10efff1c] ...
	I0503 15:21:11.610215    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bbac10efff1c"
	I0503 15:21:11.622598    9665 logs.go:123] Gathering logs for storage-provisioner [45a535211359] ...
	I0503 15:21:11.622611    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45a535211359"
	I0503 15:21:11.634085    9665 logs.go:123] Gathering logs for container status ...
	I0503 15:21:11.634096    9665 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0503 15:21:11.645262    9665 logs.go:123] Gathering logs for kubelet ...
	I0503 15:21:11.645272    9665 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0503 15:21:11.681649    9665 logs.go:123] Gathering logs for etcd [12854b004aa2] ...
	I0503 15:21:11.681661    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 12854b004aa2"
	I0503 15:21:11.695715    9665 logs.go:123] Gathering logs for coredns [c10faa2971c4] ...
	I0503 15:21:11.695727    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c10faa2971c4"
	I0503 15:21:14.210124    9665 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0503 15:21:11.610685    9866 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0503 15:21:19.212350    9665 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0503 15:21:19.212572    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0503 15:21:19.228431    9665 logs.go:276] 1 containers: [97b2e83dc539]
	I0503 15:21:19.228515    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0503 15:21:19.242259    9665 logs.go:276] 1 containers: [12854b004aa2]
	I0503 15:21:19.242333    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0503 15:21:19.253340    9665 logs.go:276] 2 containers: [c10faa2971c4 d411f2d0da0d]
	I0503 15:21:19.253408    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0503 15:21:19.263803    9665 logs.go:276] 1 containers: [ca00cca503fe]
	I0503 15:21:19.263874    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0503 15:21:19.274556    9665 logs.go:276] 1 containers: [bbac10efff1c]
	I0503 15:21:19.274618    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0503 15:21:19.285000    9665 logs.go:276] 1 containers: [6e6e24f8e828]
	I0503 15:21:19.285060    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0503 15:21:19.295039    9665 logs.go:276] 0 containers: []
	W0503 15:21:19.295051    9665 logs.go:278] No container was found matching "kindnet"
	I0503 15:21:19.295099    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0503 15:21:19.305577    9665 logs.go:276] 1 containers: [45a535211359]
	I0503 15:21:19.305591    9665 logs.go:123] Gathering logs for describe nodes ...
	I0503 15:21:19.305597    9665 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0503 15:21:19.344592    9665 logs.go:123] Gathering logs for kube-apiserver [97b2e83dc539] ...
	I0503 15:21:19.344604    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97b2e83dc539"
	I0503 15:21:19.359179    9665 logs.go:123] Gathering logs for coredns [c10faa2971c4] ...
	I0503 15:21:19.359192    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c10faa2971c4"
	I0503 15:21:19.370565    9665 logs.go:123] Gathering logs for kube-scheduler [ca00cca503fe] ...
	I0503 15:21:19.370579    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca00cca503fe"
	I0503 15:21:19.386358    9665 logs.go:123] Gathering logs for storage-provisioner [45a535211359] ...
	I0503 15:21:19.386370    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45a535211359"
	I0503 15:21:16.612801    9866 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0503 15:21:16.612969    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0503 15:21:16.629037    9866 logs.go:276] 2 containers: [9153de1fe283 5b7eb4ef241b]
	I0503 15:21:16.629125    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0503 15:21:16.641565    9866 logs.go:276] 2 containers: [40c7a6c8aada c5583124a53e]
	I0503 15:21:16.641635    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0503 15:21:16.652113    9866 logs.go:276] 1 containers: [a0cf5fe4185b]
	I0503 15:21:16.652186    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0503 15:21:16.662744    9866 logs.go:276] 2 containers: [b8238d29950d 4475fda52f0c]
	I0503 15:21:16.662818    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0503 15:21:16.673008    9866 logs.go:276] 1 containers: [acb5f92ab28d]
	I0503 15:21:16.673080    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0503 15:21:16.683407    9866 logs.go:276] 2 containers: [ecab85e51144 ed9610f55b0b]
	I0503 15:21:16.683479    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0503 15:21:16.694088    9866 logs.go:276] 0 containers: []
	W0503 15:21:16.694105    9866 logs.go:278] No container was found matching "kindnet"
	I0503 15:21:16.694166    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0503 15:21:16.704304    9866 logs.go:276] 2 containers: [537c7b40f8ef c67b61af436e]
	I0503 15:21:16.704322    9866 logs.go:123] Gathering logs for etcd [c5583124a53e] ...
	I0503 15:21:16.704327    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5583124a53e"
	I0503 15:21:16.718124    9866 logs.go:123] Gathering logs for coredns [a0cf5fe4185b] ...
	I0503 15:21:16.718135    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0cf5fe4185b"
	I0503 15:21:16.729845    9866 logs.go:123] Gathering logs for describe nodes ...
	I0503 15:21:16.729860    9866 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0503 15:21:16.767661    9866 logs.go:123] Gathering logs for kube-proxy [acb5f92ab28d] ...
	I0503 15:21:16.767673    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 acb5f92ab28d"
	I0503 15:21:16.779688    9866 logs.go:123] Gathering logs for etcd [40c7a6c8aada] ...
	I0503 15:21:16.779701    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 40c7a6c8aada"
	I0503 15:21:16.793119    9866 logs.go:123] Gathering logs for kube-scheduler [b8238d29950d] ...
	I0503 15:21:16.793129    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8238d29950d"
	I0503 15:21:16.804816    9866 logs.go:123] Gathering logs for kube-scheduler [4475fda52f0c] ...
	I0503 15:21:16.804828    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4475fda52f0c"
	I0503 15:21:16.819614    9866 logs.go:123] Gathering logs for container status ...
	I0503 15:21:16.819624    9866 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0503 15:21:16.833346    9866 logs.go:123] Gathering logs for dmesg ...
	I0503 15:21:16.833356    9866 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0503 15:21:16.837891    9866 logs.go:123] Gathering logs for kube-apiserver [9153de1fe283] ...
	I0503 15:21:16.837897    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9153de1fe283"
	I0503 15:21:16.851935    9866 logs.go:123] Gathering logs for kube-apiserver [5b7eb4ef241b] ...
	I0503 15:21:16.851945    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b7eb4ef241b"
	I0503 15:21:16.889289    9866 logs.go:123] Gathering logs for kube-controller-manager [ecab85e51144] ...
	I0503 15:21:16.889305    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ecab85e51144"
	I0503 15:21:16.907209    9866 logs.go:123] Gathering logs for kube-controller-manager [ed9610f55b0b] ...
	I0503 15:21:16.907220    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed9610f55b0b"
	I0503 15:21:16.920262    9866 logs.go:123] Gathering logs for storage-provisioner [537c7b40f8ef] ...
	I0503 15:21:16.920272    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 537c7b40f8ef"
	I0503 15:21:16.931646    9866 logs.go:123] Gathering logs for storage-provisioner [c67b61af436e] ...
	I0503 15:21:16.931659    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c67b61af436e"
	I0503 15:21:16.946925    9866 logs.go:123] Gathering logs for Docker ...
	I0503 15:21:16.946936    9866 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0503 15:21:16.970827    9866 logs.go:123] Gathering logs for kubelet ...
	I0503 15:21:16.970836    9866 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0503 15:21:19.509139    9866 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0503 15:21:19.398013    9665 logs.go:123] Gathering logs for kubelet ...
	I0503 15:21:19.398023    9665 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0503 15:21:19.434216    9665 logs.go:123] Gathering logs for dmesg ...
	I0503 15:21:19.434229    9665 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0503 15:21:19.439068    9665 logs.go:123] Gathering logs for kube-proxy [bbac10efff1c] ...
	I0503 15:21:19.439074    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bbac10efff1c"
	I0503 15:21:19.451008    9665 logs.go:123] Gathering logs for kube-controller-manager [6e6e24f8e828] ...
	I0503 15:21:19.451022    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e6e24f8e828"
	I0503 15:21:19.468131    9665 logs.go:123] Gathering logs for Docker ...
	I0503 15:21:19.468141    9665 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0503 15:21:19.491736    9665 logs.go:123] Gathering logs for container status ...
	I0503 15:21:19.491745    9665 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0503 15:21:19.503119    9665 logs.go:123] Gathering logs for etcd [12854b004aa2] ...
	I0503 15:21:19.503131    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 12854b004aa2"
	I0503 15:21:19.516877    9665 logs.go:123] Gathering logs for coredns [d411f2d0da0d] ...
	I0503 15:21:19.516887    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d411f2d0da0d"
	I0503 15:21:22.030764    9665 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0503 15:21:24.511251    9866 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0503 15:21:24.511480    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0503 15:21:24.540953    9866 logs.go:276] 2 containers: [9153de1fe283 5b7eb4ef241b]
	I0503 15:21:24.541053    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0503 15:21:24.558023    9866 logs.go:276] 2 containers: [40c7a6c8aada c5583124a53e]
	I0503 15:21:24.558099    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0503 15:21:24.571024    9866 logs.go:276] 1 containers: [a0cf5fe4185b]
	I0503 15:21:24.571096    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0503 15:21:24.581993    9866 logs.go:276] 2 containers: [b8238d29950d 4475fda52f0c]
	I0503 15:21:24.582055    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0503 15:21:24.595718    9866 logs.go:276] 1 containers: [acb5f92ab28d]
	I0503 15:21:24.595786    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0503 15:21:24.606109    9866 logs.go:276] 2 containers: [ecab85e51144 ed9610f55b0b]
	I0503 15:21:24.606169    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0503 15:21:24.627725    9866 logs.go:276] 0 containers: []
	W0503 15:21:24.627739    9866 logs.go:278] No container was found matching "kindnet"
	I0503 15:21:24.627798    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0503 15:21:24.639488    9866 logs.go:276] 2 containers: [537c7b40f8ef c67b61af436e]
	I0503 15:21:24.639506    9866 logs.go:123] Gathering logs for describe nodes ...
	I0503 15:21:24.639511    9866 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0503 15:21:24.677755    9866 logs.go:123] Gathering logs for kube-apiserver [5b7eb4ef241b] ...
	I0503 15:21:24.677767    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b7eb4ef241b"
	I0503 15:21:24.715931    9866 logs.go:123] Gathering logs for kube-scheduler [4475fda52f0c] ...
	I0503 15:21:24.715944    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4475fda52f0c"
	I0503 15:21:24.730944    9866 logs.go:123] Gathering logs for kube-controller-manager [ecab85e51144] ...
	I0503 15:21:24.730956    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ecab85e51144"
	I0503 15:21:24.747956    9866 logs.go:123] Gathering logs for container status ...
	I0503 15:21:24.747966    9866 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0503 15:21:24.760888    9866 logs.go:123] Gathering logs for kubelet ...
	I0503 15:21:24.760901    9866 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0503 15:21:24.799454    9866 logs.go:123] Gathering logs for dmesg ...
	I0503 15:21:24.799463    9866 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0503 15:21:24.804121    9866 logs.go:123] Gathering logs for kube-scheduler [b8238d29950d] ...
	I0503 15:21:24.804129    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8238d29950d"
	I0503 15:21:24.815636    9866 logs.go:123] Gathering logs for storage-provisioner [537c7b40f8ef] ...
	I0503 15:21:24.815647    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 537c7b40f8ef"
	I0503 15:21:24.827259    9866 logs.go:123] Gathering logs for etcd [40c7a6c8aada] ...
	I0503 15:21:24.827269    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 40c7a6c8aada"
	I0503 15:21:24.841445    9866 logs.go:123] Gathering logs for etcd [c5583124a53e] ...
	I0503 15:21:24.841456    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5583124a53e"
	I0503 15:21:24.856011    9866 logs.go:123] Gathering logs for coredns [a0cf5fe4185b] ...
	I0503 15:21:24.856021    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0cf5fe4185b"
	I0503 15:21:24.866733    9866 logs.go:123] Gathering logs for kube-proxy [acb5f92ab28d] ...
	I0503 15:21:24.866745    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 acb5f92ab28d"
	I0503 15:21:24.878390    9866 logs.go:123] Gathering logs for kube-controller-manager [ed9610f55b0b] ...
	I0503 15:21:24.878400    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed9610f55b0b"
	I0503 15:21:24.892161    9866 logs.go:123] Gathering logs for kube-apiserver [9153de1fe283] ...
	I0503 15:21:24.892171    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9153de1fe283"
	I0503 15:21:24.906324    9866 logs.go:123] Gathering logs for storage-provisioner [c67b61af436e] ...
	I0503 15:21:24.906334    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c67b61af436e"
	I0503 15:21:24.917680    9866 logs.go:123] Gathering logs for Docker ...
	I0503 15:21:24.917695    9866 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0503 15:21:27.033168    9665 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0503 15:21:27.033543    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0503 15:21:27.071785    9665 logs.go:276] 1 containers: [97b2e83dc539]
	I0503 15:21:27.071961    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0503 15:21:27.090129    9665 logs.go:276] 1 containers: [12854b004aa2]
	I0503 15:21:27.090222    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0503 15:21:27.103498    9665 logs.go:276] 2 containers: [c10faa2971c4 d411f2d0da0d]
	I0503 15:21:27.103571    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0503 15:21:27.115263    9665 logs.go:276] 1 containers: [ca00cca503fe]
	I0503 15:21:27.115338    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0503 15:21:27.125899    9665 logs.go:276] 1 containers: [bbac10efff1c]
	I0503 15:21:27.125973    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0503 15:21:27.136364    9665 logs.go:276] 1 containers: [6e6e24f8e828]
	I0503 15:21:27.136438    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0503 15:21:27.150511    9665 logs.go:276] 0 containers: []
	W0503 15:21:27.150522    9665 logs.go:278] No container was found matching "kindnet"
	I0503 15:21:27.150574    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0503 15:21:27.160779    9665 logs.go:276] 1 containers: [45a535211359]
	I0503 15:21:27.160792    9665 logs.go:123] Gathering logs for Docker ...
	I0503 15:21:27.160797    9665 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0503 15:21:27.184965    9665 logs.go:123] Gathering logs for dmesg ...
	I0503 15:21:27.184972    9665 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0503 15:21:27.189458    9665 logs.go:123] Gathering logs for kube-apiserver [97b2e83dc539] ...
	I0503 15:21:27.189465    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97b2e83dc539"
	I0503 15:21:27.203747    9665 logs.go:123] Gathering logs for coredns [c10faa2971c4] ...
	I0503 15:21:27.203760    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c10faa2971c4"
	I0503 15:21:27.217179    9665 logs.go:123] Gathering logs for coredns [d411f2d0da0d] ...
	I0503 15:21:27.217193    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d411f2d0da0d"
	I0503 15:21:27.228568    9665 logs.go:123] Gathering logs for kube-controller-manager [6e6e24f8e828] ...
	I0503 15:21:27.228582    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e6e24f8e828"
	I0503 15:21:27.246313    9665 logs.go:123] Gathering logs for storage-provisioner [45a535211359] ...
	I0503 15:21:27.246324    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45a535211359"
	I0503 15:21:27.257753    9665 logs.go:123] Gathering logs for kubelet ...
	I0503 15:21:27.257767    9665 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0503 15:21:27.294083    9665 logs.go:123] Gathering logs for describe nodes ...
	I0503 15:21:27.294092    9665 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0503 15:21:27.329252    9665 logs.go:123] Gathering logs for etcd [12854b004aa2] ...
	I0503 15:21:27.329264    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 12854b004aa2"
	I0503 15:21:27.347016    9665 logs.go:123] Gathering logs for kube-scheduler [ca00cca503fe] ...
	I0503 15:21:27.347028    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca00cca503fe"
	I0503 15:21:27.361550    9665 logs.go:123] Gathering logs for kube-proxy [bbac10efff1c] ...
	I0503 15:21:27.361561    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bbac10efff1c"
	I0503 15:21:27.373184    9665 logs.go:123] Gathering logs for container status ...
	I0503 15:21:27.373197    9665 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0503 15:21:27.442710    9866 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0503 15:21:29.885944    9665 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0503 15:21:32.445030    9866 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0503 15:21:32.445407    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0503 15:21:32.480967    9866 logs.go:276] 2 containers: [9153de1fe283 5b7eb4ef241b]
	I0503 15:21:32.481100    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0503 15:21:32.501579    9866 logs.go:276] 2 containers: [40c7a6c8aada c5583124a53e]
	I0503 15:21:32.501664    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0503 15:21:32.516699    9866 logs.go:276] 1 containers: [a0cf5fe4185b]
	I0503 15:21:32.516781    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0503 15:21:32.529074    9866 logs.go:276] 2 containers: [b8238d29950d 4475fda52f0c]
	I0503 15:21:32.529146    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0503 15:21:32.540956    9866 logs.go:276] 1 containers: [acb5f92ab28d]
	I0503 15:21:32.541022    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0503 15:21:32.556338    9866 logs.go:276] 2 containers: [ecab85e51144 ed9610f55b0b]
	I0503 15:21:32.556401    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0503 15:21:32.567687    9866 logs.go:276] 0 containers: []
	W0503 15:21:32.567700    9866 logs.go:278] No container was found matching "kindnet"
	I0503 15:21:32.567756    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0503 15:21:32.578561    9866 logs.go:276] 2 containers: [537c7b40f8ef c67b61af436e]
	I0503 15:21:32.578578    9866 logs.go:123] Gathering logs for coredns [a0cf5fe4185b] ...
	I0503 15:21:32.578584    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0cf5fe4185b"
	I0503 15:21:32.590475    9866 logs.go:123] Gathering logs for kube-scheduler [b8238d29950d] ...
	I0503 15:21:32.590488    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8238d29950d"
	I0503 15:21:32.604942    9866 logs.go:123] Gathering logs for storage-provisioner [537c7b40f8ef] ...
	I0503 15:21:32.604954    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 537c7b40f8ef"
	I0503 15:21:32.616920    9866 logs.go:123] Gathering logs for dmesg ...
	I0503 15:21:32.616931    9866 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0503 15:21:32.621097    9866 logs.go:123] Gathering logs for kube-apiserver [5b7eb4ef241b] ...
	I0503 15:21:32.621106    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b7eb4ef241b"
	I0503 15:21:32.658245    9866 logs.go:123] Gathering logs for kube-proxy [acb5f92ab28d] ...
	I0503 15:21:32.658257    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 acb5f92ab28d"
	I0503 15:21:32.670756    9866 logs.go:123] Gathering logs for kube-controller-manager [ecab85e51144] ...
	I0503 15:21:32.670769    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ecab85e51144"
	I0503 15:21:32.688216    9866 logs.go:123] Gathering logs for storage-provisioner [c67b61af436e] ...
	I0503 15:21:32.688226    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c67b61af436e"
	I0503 15:21:32.700222    9866 logs.go:123] Gathering logs for Docker ...
	I0503 15:21:32.700234    9866 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0503 15:21:32.725612    9866 logs.go:123] Gathering logs for describe nodes ...
	I0503 15:21:32.725621    9866 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0503 15:21:32.762569    9866 logs.go:123] Gathering logs for kube-apiserver [9153de1fe283] ...
	I0503 15:21:32.762580    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9153de1fe283"
	I0503 15:21:32.777245    9866 logs.go:123] Gathering logs for container status ...
	I0503 15:21:32.777258    9866 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0503 15:21:32.789150    9866 logs.go:123] Gathering logs for kubelet ...
	I0503 15:21:32.789163    9866 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0503 15:21:32.827724    9866 logs.go:123] Gathering logs for kube-controller-manager [ed9610f55b0b] ...
	I0503 15:21:32.827736    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed9610f55b0b"
	I0503 15:21:32.845083    9866 logs.go:123] Gathering logs for kube-scheduler [4475fda52f0c] ...
	I0503 15:21:32.845095    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4475fda52f0c"
	I0503 15:21:32.859251    9866 logs.go:123] Gathering logs for etcd [40c7a6c8aada] ...
	I0503 15:21:32.859262    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 40c7a6c8aada"
	I0503 15:21:32.873887    9866 logs.go:123] Gathering logs for etcd [c5583124a53e] ...
	I0503 15:21:32.873900    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5583124a53e"
	I0503 15:21:35.390654    9866 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0503 15:21:34.888324    9665 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0503 15:21:34.888532    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0503 15:21:34.910874    9665 logs.go:276] 1 containers: [97b2e83dc539]
	I0503 15:21:34.910978    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0503 15:21:34.923992    9665 logs.go:276] 1 containers: [12854b004aa2]
	I0503 15:21:34.924067    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0503 15:21:34.935732    9665 logs.go:276] 2 containers: [c10faa2971c4 d411f2d0da0d]
	I0503 15:21:34.935796    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0503 15:21:34.946416    9665 logs.go:276] 1 containers: [ca00cca503fe]
	I0503 15:21:34.946480    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0503 15:21:34.956381    9665 logs.go:276] 1 containers: [bbac10efff1c]
	I0503 15:21:34.956452    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0503 15:21:34.966418    9665 logs.go:276] 1 containers: [6e6e24f8e828]
	I0503 15:21:34.966479    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0503 15:21:34.976713    9665 logs.go:276] 0 containers: []
	W0503 15:21:34.976724    9665 logs.go:278] No container was found matching "kindnet"
	I0503 15:21:34.976789    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0503 15:21:34.987336    9665 logs.go:276] 1 containers: [45a535211359]
	I0503 15:21:34.987350    9665 logs.go:123] Gathering logs for kube-proxy [bbac10efff1c] ...
	I0503 15:21:34.987355    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bbac10efff1c"
	I0503 15:21:34.999244    9665 logs.go:123] Gathering logs for storage-provisioner [45a535211359] ...
	I0503 15:21:34.999254    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45a535211359"
	I0503 15:21:35.017716    9665 logs.go:123] Gathering logs for Docker ...
	I0503 15:21:35.017728    9665 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0503 15:21:35.041702    9665 logs.go:123] Gathering logs for container status ...
	I0503 15:21:35.041714    9665 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0503 15:21:35.053340    9665 logs.go:123] Gathering logs for describe nodes ...
	I0503 15:21:35.053351    9665 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0503 15:21:35.088491    9665 logs.go:123] Gathering logs for kube-apiserver [97b2e83dc539] ...
	I0503 15:21:35.088503    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97b2e83dc539"
	I0503 15:21:35.103406    9665 logs.go:123] Gathering logs for etcd [12854b004aa2] ...
	I0503 15:21:35.103420    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 12854b004aa2"
	I0503 15:21:35.117263    9665 logs.go:123] Gathering logs for coredns [d411f2d0da0d] ...
	I0503 15:21:35.117274    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d411f2d0da0d"
	I0503 15:21:35.128779    9665 logs.go:123] Gathering logs for kube-controller-manager [6e6e24f8e828] ...
	I0503 15:21:35.128793    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e6e24f8e828"
	I0503 15:21:35.146003    9665 logs.go:123] Gathering logs for kubelet ...
	I0503 15:21:35.146014    9665 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0503 15:21:35.182749    9665 logs.go:123] Gathering logs for dmesg ...
	I0503 15:21:35.182759    9665 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0503 15:21:35.187297    9665 logs.go:123] Gathering logs for coredns [c10faa2971c4] ...
	I0503 15:21:35.187306    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c10faa2971c4"
	I0503 15:21:35.201195    9665 logs.go:123] Gathering logs for kube-scheduler [ca00cca503fe] ...
	I0503 15:21:35.201208    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca00cca503fe"
	I0503 15:21:37.717732    9665 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0503 15:21:40.392821    9866 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0503 15:21:40.393198    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0503 15:21:40.422272    9866 logs.go:276] 2 containers: [9153de1fe283 5b7eb4ef241b]
	I0503 15:21:40.422396    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0503 15:21:40.444315    9866 logs.go:276] 2 containers: [40c7a6c8aada c5583124a53e]
	I0503 15:21:40.444406    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0503 15:21:40.457580    9866 logs.go:276] 1 containers: [a0cf5fe4185b]
	I0503 15:21:40.457648    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0503 15:21:40.469505    9866 logs.go:276] 2 containers: [b8238d29950d 4475fda52f0c]
	I0503 15:21:40.469568    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0503 15:21:40.479962    9866 logs.go:276] 1 containers: [acb5f92ab28d]
	I0503 15:21:40.480029    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0503 15:21:40.490601    9866 logs.go:276] 2 containers: [ecab85e51144 ed9610f55b0b]
	I0503 15:21:40.490666    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0503 15:21:40.500478    9866 logs.go:276] 0 containers: []
	W0503 15:21:40.500490    9866 logs.go:278] No container was found matching "kindnet"
	I0503 15:21:40.500549    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0503 15:21:40.510665    9866 logs.go:276] 2 containers: [537c7b40f8ef c67b61af436e]
	I0503 15:21:40.510682    9866 logs.go:123] Gathering logs for kube-controller-manager [ecab85e51144] ...
	I0503 15:21:40.510688    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ecab85e51144"
	I0503 15:21:40.528051    9866 logs.go:123] Gathering logs for etcd [c5583124a53e] ...
	I0503 15:21:40.528064    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5583124a53e"
	I0503 15:21:40.547841    9866 logs.go:123] Gathering logs for container status ...
	I0503 15:21:40.547853    9866 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0503 15:21:40.559465    9866 logs.go:123] Gathering logs for kubelet ...
	I0503 15:21:40.559476    9866 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0503 15:21:40.598562    9866 logs.go:123] Gathering logs for dmesg ...
	I0503 15:21:40.598573    9866 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0503 15:21:40.602705    9866 logs.go:123] Gathering logs for etcd [40c7a6c8aada] ...
	I0503 15:21:40.602710    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 40c7a6c8aada"
	I0503 15:21:40.616490    9866 logs.go:123] Gathering logs for kube-scheduler [4475fda52f0c] ...
	I0503 15:21:40.616502    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4475fda52f0c"
	I0503 15:21:40.631111    9866 logs.go:123] Gathering logs for kube-proxy [acb5f92ab28d] ...
	I0503 15:21:40.631122    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 acb5f92ab28d"
	I0503 15:21:40.643749    9866 logs.go:123] Gathering logs for storage-provisioner [c67b61af436e] ...
	I0503 15:21:40.643760    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c67b61af436e"
	I0503 15:21:40.680037    9866 logs.go:123] Gathering logs for Docker ...
	I0503 15:21:40.680052    9866 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0503 15:21:40.704617    9866 logs.go:123] Gathering logs for kube-apiserver [9153de1fe283] ...
	I0503 15:21:40.704625    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9153de1fe283"
	I0503 15:21:40.719700    9866 logs.go:123] Gathering logs for kube-apiserver [5b7eb4ef241b] ...
	I0503 15:21:40.719712    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b7eb4ef241b"
	I0503 15:21:40.758152    9866 logs.go:123] Gathering logs for kube-scheduler [b8238d29950d] ...
	I0503 15:21:40.758165    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8238d29950d"
	I0503 15:21:40.770432    9866 logs.go:123] Gathering logs for storage-provisioner [537c7b40f8ef] ...
	I0503 15:21:40.770442    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 537c7b40f8ef"
	I0503 15:21:40.781472    9866 logs.go:123] Gathering logs for describe nodes ...
	I0503 15:21:40.781483    9866 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0503 15:21:40.816830    9866 logs.go:123] Gathering logs for coredns [a0cf5fe4185b] ...
	I0503 15:21:40.816842    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0cf5fe4185b"
	I0503 15:21:40.828678    9866 logs.go:123] Gathering logs for kube-controller-manager [ed9610f55b0b] ...
	I0503 15:21:40.828692    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed9610f55b0b"
	I0503 15:21:42.719981    9665 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0503 15:21:42.720124    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0503 15:21:42.732977    9665 logs.go:276] 1 containers: [97b2e83dc539]
	I0503 15:21:42.733043    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0503 15:21:42.744042    9665 logs.go:276] 1 containers: [12854b004aa2]
	I0503 15:21:42.744111    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0503 15:21:42.757264    9665 logs.go:276] 2 containers: [c10faa2971c4 d411f2d0da0d]
	I0503 15:21:42.757333    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0503 15:21:42.767554    9665 logs.go:276] 1 containers: [ca00cca503fe]
	I0503 15:21:42.767621    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0503 15:21:42.778183    9665 logs.go:276] 1 containers: [bbac10efff1c]
	I0503 15:21:42.778258    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0503 15:21:42.789116    9665 logs.go:276] 1 containers: [6e6e24f8e828]
	I0503 15:21:42.789196    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0503 15:21:42.799331    9665 logs.go:276] 0 containers: []
	W0503 15:21:42.799344    9665 logs.go:278] No container was found matching "kindnet"
	I0503 15:21:42.799403    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0503 15:21:42.809209    9665 logs.go:276] 1 containers: [45a535211359]
	I0503 15:21:42.809222    9665 logs.go:123] Gathering logs for kube-proxy [bbac10efff1c] ...
	I0503 15:21:42.809228    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bbac10efff1c"
	I0503 15:21:42.820872    9665 logs.go:123] Gathering logs for kube-controller-manager [6e6e24f8e828] ...
	I0503 15:21:42.820885    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e6e24f8e828"
	I0503 15:21:42.838210    9665 logs.go:123] Gathering logs for container status ...
	I0503 15:21:42.838220    9665 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0503 15:21:42.850361    9665 logs.go:123] Gathering logs for dmesg ...
	I0503 15:21:42.850372    9665 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0503 15:21:42.855329    9665 logs.go:123] Gathering logs for kube-scheduler [ca00cca503fe] ...
	I0503 15:21:42.855337    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca00cca503fe"
	I0503 15:21:42.869814    9665 logs.go:123] Gathering logs for kube-apiserver [97b2e83dc539] ...
	I0503 15:21:42.869825    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97b2e83dc539"
	I0503 15:21:42.884140    9665 logs.go:123] Gathering logs for etcd [12854b004aa2] ...
	I0503 15:21:42.884151    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 12854b004aa2"
	I0503 15:21:42.902856    9665 logs.go:123] Gathering logs for coredns [c10faa2971c4] ...
	I0503 15:21:42.902869    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c10faa2971c4"
	I0503 15:21:42.914559    9665 logs.go:123] Gathering logs for coredns [d411f2d0da0d] ...
	I0503 15:21:42.914568    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d411f2d0da0d"
	I0503 15:21:42.926093    9665 logs.go:123] Gathering logs for storage-provisioner [45a535211359] ...
	I0503 15:21:42.926104    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45a535211359"
	I0503 15:21:42.941398    9665 logs.go:123] Gathering logs for Docker ...
	I0503 15:21:42.941409    9665 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0503 15:21:42.964727    9665 logs.go:123] Gathering logs for kubelet ...
	I0503 15:21:42.964734    9665 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0503 15:21:42.999394    9665 logs.go:123] Gathering logs for describe nodes ...
	I0503 15:21:42.999401    9665 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0503 15:21:43.344354    9866 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0503 15:21:45.541341    9665 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0503 15:21:48.346542    9866 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0503 15:21:48.346726    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0503 15:21:48.362314    9866 logs.go:276] 2 containers: [9153de1fe283 5b7eb4ef241b]
	I0503 15:21:48.362397    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0503 15:21:48.374547    9866 logs.go:276] 2 containers: [40c7a6c8aada c5583124a53e]
	I0503 15:21:48.374617    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0503 15:21:48.385263    9866 logs.go:276] 1 containers: [a0cf5fe4185b]
	I0503 15:21:48.385328    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0503 15:21:48.398927    9866 logs.go:276] 2 containers: [b8238d29950d 4475fda52f0c]
	I0503 15:21:48.398997    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0503 15:21:48.409178    9866 logs.go:276] 1 containers: [acb5f92ab28d]
	I0503 15:21:48.409244    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0503 15:21:48.419952    9866 logs.go:276] 2 containers: [ecab85e51144 ed9610f55b0b]
	I0503 15:21:48.420017    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0503 15:21:48.430838    9866 logs.go:276] 0 containers: []
	W0503 15:21:48.430850    9866 logs.go:278] No container was found matching "kindnet"
	I0503 15:21:48.430909    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0503 15:21:48.441369    9866 logs.go:276] 2 containers: [537c7b40f8ef c67b61af436e]
	I0503 15:21:48.441388    9866 logs.go:123] Gathering logs for dmesg ...
	I0503 15:21:48.441393    9866 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0503 15:21:48.445831    9866 logs.go:123] Gathering logs for kube-apiserver [9153de1fe283] ...
	I0503 15:21:48.445836    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9153de1fe283"
	I0503 15:21:48.460487    9866 logs.go:123] Gathering logs for kube-proxy [acb5f92ab28d] ...
	I0503 15:21:48.460496    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 acb5f92ab28d"
	I0503 15:21:48.473096    9866 logs.go:123] Gathering logs for kube-controller-manager [ed9610f55b0b] ...
	I0503 15:21:48.473108    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed9610f55b0b"
	I0503 15:21:48.489505    9866 logs.go:123] Gathering logs for storage-provisioner [c67b61af436e] ...
	I0503 15:21:48.489517    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c67b61af436e"
	I0503 15:21:48.500706    9866 logs.go:123] Gathering logs for kubelet ...
	I0503 15:21:48.500717    9866 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0503 15:21:48.537230    9866 logs.go:123] Gathering logs for etcd [40c7a6c8aada] ...
	I0503 15:21:48.537239    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 40c7a6c8aada"
	I0503 15:21:48.554249    9866 logs.go:123] Gathering logs for etcd [c5583124a53e] ...
	I0503 15:21:48.554258    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5583124a53e"
	I0503 15:21:48.568400    9866 logs.go:123] Gathering logs for coredns [a0cf5fe4185b] ...
	I0503 15:21:48.568414    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0cf5fe4185b"
	I0503 15:21:48.580145    9866 logs.go:123] Gathering logs for kube-scheduler [4475fda52f0c] ...
	I0503 15:21:48.580157    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4475fda52f0c"
	I0503 15:21:48.596434    9866 logs.go:123] Gathering logs for storage-provisioner [537c7b40f8ef] ...
	I0503 15:21:48.596446    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 537c7b40f8ef"
	I0503 15:21:48.608207    9866 logs.go:123] Gathering logs for kube-apiserver [5b7eb4ef241b] ...
	I0503 15:21:48.608220    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b7eb4ef241b"
	I0503 15:21:48.645871    9866 logs.go:123] Gathering logs for kube-scheduler [b8238d29950d] ...
	I0503 15:21:48.645880    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8238d29950d"
	I0503 15:21:48.657670    9866 logs.go:123] Gathering logs for kube-controller-manager [ecab85e51144] ...
	I0503 15:21:48.657679    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ecab85e51144"
	I0503 15:21:48.683000    9866 logs.go:123] Gathering logs for Docker ...
	I0503 15:21:48.683010    9866 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0503 15:21:48.707436    9866 logs.go:123] Gathering logs for container status ...
	I0503 15:21:48.707443    9866 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0503 15:21:48.719612    9866 logs.go:123] Gathering logs for describe nodes ...
	I0503 15:21:48.719625    9866 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0503 15:21:50.544042    9665 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0503 15:21:50.544432    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0503 15:21:50.572851    9665 logs.go:276] 1 containers: [97b2e83dc539]
	I0503 15:21:50.572975    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0503 15:21:50.591152    9665 logs.go:276] 1 containers: [12854b004aa2]
	I0503 15:21:50.591231    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0503 15:21:50.604567    9665 logs.go:276] 2 containers: [c10faa2971c4 d411f2d0da0d]
	I0503 15:21:50.604636    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0503 15:21:50.615632    9665 logs.go:276] 1 containers: [ca00cca503fe]
	I0503 15:21:50.615713    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0503 15:21:50.626494    9665 logs.go:276] 1 containers: [bbac10efff1c]
	I0503 15:21:50.626561    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0503 15:21:50.636911    9665 logs.go:276] 1 containers: [6e6e24f8e828]
	I0503 15:21:50.636980    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0503 15:21:50.648581    9665 logs.go:276] 0 containers: []
	W0503 15:21:50.648593    9665 logs.go:278] No container was found matching "kindnet"
	I0503 15:21:50.648655    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0503 15:21:50.659160    9665 logs.go:276] 1 containers: [45a535211359]
	I0503 15:21:50.659174    9665 logs.go:123] Gathering logs for kube-scheduler [ca00cca503fe] ...
	I0503 15:21:50.659179    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca00cca503fe"
	I0503 15:21:50.673736    9665 logs.go:123] Gathering logs for kube-proxy [bbac10efff1c] ...
	I0503 15:21:50.673750    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bbac10efff1c"
	I0503 15:21:50.685472    9665 logs.go:123] Gathering logs for Docker ...
	I0503 15:21:50.685486    9665 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0503 15:21:50.710585    9665 logs.go:123] Gathering logs for container status ...
	I0503 15:21:50.710593    9665 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0503 15:21:50.722438    9665 logs.go:123] Gathering logs for describe nodes ...
	I0503 15:21:50.722449    9665 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0503 15:21:50.762853    9665 logs.go:123] Gathering logs for kube-apiserver [97b2e83dc539] ...
	I0503 15:21:50.762868    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97b2e83dc539"
	I0503 15:21:50.776925    9665 logs.go:123] Gathering logs for coredns [c10faa2971c4] ...
	I0503 15:21:50.776939    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c10faa2971c4"
	I0503 15:21:50.788756    9665 logs.go:123] Gathering logs for coredns [d411f2d0da0d] ...
	I0503 15:21:50.788767    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d411f2d0da0d"
	I0503 15:21:50.800606    9665 logs.go:123] Gathering logs for storage-provisioner [45a535211359] ...
	I0503 15:21:50.800620    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45a535211359"
	I0503 15:21:50.812069    9665 logs.go:123] Gathering logs for kubelet ...
	I0503 15:21:50.812079    9665 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0503 15:21:50.847978    9665 logs.go:123] Gathering logs for dmesg ...
	I0503 15:21:50.847985    9665 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0503 15:21:50.852358    9665 logs.go:123] Gathering logs for etcd [12854b004aa2] ...
	I0503 15:21:50.852364    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 12854b004aa2"
	I0503 15:21:50.865994    9665 logs.go:123] Gathering logs for kube-controller-manager [6e6e24f8e828] ...
	I0503 15:21:50.866008    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e6e24f8e828"
	I0503 15:21:53.385260    9665 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0503 15:21:51.261520    9866 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0503 15:21:58.387509    9665 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0503 15:21:58.387763    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0503 15:21:58.413275    9665 logs.go:276] 1 containers: [97b2e83dc539]
	I0503 15:21:58.413392    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0503 15:21:58.430175    9665 logs.go:276] 1 containers: [12854b004aa2]
	I0503 15:21:58.430264    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0503 15:21:58.443702    9665 logs.go:276] 2 containers: [c10faa2971c4 d411f2d0da0d]
	I0503 15:21:58.443770    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0503 15:21:58.454989    9665 logs.go:276] 1 containers: [ca00cca503fe]
	I0503 15:21:58.455059    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0503 15:21:58.465453    9665 logs.go:276] 1 containers: [bbac10efff1c]
	I0503 15:21:58.465519    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0503 15:21:58.476766    9665 logs.go:276] 1 containers: [6e6e24f8e828]
	I0503 15:21:58.476830    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0503 15:21:58.487715    9665 logs.go:276] 0 containers: []
	W0503 15:21:58.487727    9665 logs.go:278] No container was found matching "kindnet"
	I0503 15:21:58.487781    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0503 15:21:58.498303    9665 logs.go:276] 1 containers: [45a535211359]
	I0503 15:21:58.498318    9665 logs.go:123] Gathering logs for dmesg ...
	I0503 15:21:58.498324    9665 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0503 15:21:58.503356    9665 logs.go:123] Gathering logs for kube-apiserver [97b2e83dc539] ...
	I0503 15:21:58.503366    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97b2e83dc539"
	I0503 15:21:58.517566    9665 logs.go:123] Gathering logs for coredns [c10faa2971c4] ...
	I0503 15:21:58.517576    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c10faa2971c4"
	I0503 15:21:58.533624    9665 logs.go:123] Gathering logs for kube-proxy [bbac10efff1c] ...
	I0503 15:21:58.533637    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bbac10efff1c"
	I0503 15:21:58.545400    9665 logs.go:123] Gathering logs for kube-controller-manager [6e6e24f8e828] ...
	I0503 15:21:58.545409    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e6e24f8e828"
	I0503 15:21:58.563437    9665 logs.go:123] Gathering logs for storage-provisioner [45a535211359] ...
	I0503 15:21:58.563448    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45a535211359"
	I0503 15:21:58.575261    9665 logs.go:123] Gathering logs for container status ...
	I0503 15:21:58.575271    9665 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0503 15:21:58.586522    9665 logs.go:123] Gathering logs for kubelet ...
	I0503 15:21:58.586531    9665 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0503 15:21:58.622907    9665 logs.go:123] Gathering logs for describe nodes ...
	I0503 15:21:58.622916    9665 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0503 15:21:58.665754    9665 logs.go:123] Gathering logs for etcd [12854b004aa2] ...
	I0503 15:21:58.665766    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 12854b004aa2"
	I0503 15:21:58.680477    9665 logs.go:123] Gathering logs for coredns [d411f2d0da0d] ...
	I0503 15:21:58.680492    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d411f2d0da0d"
	I0503 15:21:58.692530    9665 logs.go:123] Gathering logs for kube-scheduler [ca00cca503fe] ...
	I0503 15:21:58.692545    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca00cca503fe"
	I0503 15:21:58.707372    9665 logs.go:123] Gathering logs for Docker ...
	I0503 15:21:58.707385    9665 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0503 15:21:56.263645    9866 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0503 15:21:56.263785    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0503 15:21:56.276623    9866 logs.go:276] 2 containers: [9153de1fe283 5b7eb4ef241b]
	I0503 15:21:56.276708    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0503 15:21:56.287966    9866 logs.go:276] 2 containers: [40c7a6c8aada c5583124a53e]
	I0503 15:21:56.288039    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0503 15:21:56.298953    9866 logs.go:276] 1 containers: [a0cf5fe4185b]
	I0503 15:21:56.299021    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0503 15:21:56.309767    9866 logs.go:276] 2 containers: [b8238d29950d 4475fda52f0c]
	I0503 15:21:56.309834    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0503 15:21:56.320354    9866 logs.go:276] 1 containers: [acb5f92ab28d]
	I0503 15:21:56.320415    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0503 15:21:56.330871    9866 logs.go:276] 2 containers: [ecab85e51144 ed9610f55b0b]
	I0503 15:21:56.330933    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0503 15:21:56.340989    9866 logs.go:276] 0 containers: []
	W0503 15:21:56.341001    9866 logs.go:278] No container was found matching "kindnet"
	I0503 15:21:56.341062    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0503 15:21:56.351481    9866 logs.go:276] 2 containers: [537c7b40f8ef c67b61af436e]
	I0503 15:21:56.351500    9866 logs.go:123] Gathering logs for kube-apiserver [5b7eb4ef241b] ...
	I0503 15:21:56.351505    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b7eb4ef241b"
	I0503 15:21:56.390700    9866 logs.go:123] Gathering logs for coredns [a0cf5fe4185b] ...
	I0503 15:21:56.390711    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0cf5fe4185b"
	I0503 15:21:56.402063    9866 logs.go:123] Gathering logs for kube-proxy [acb5f92ab28d] ...
	I0503 15:21:56.402077    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 acb5f92ab28d"
	I0503 15:21:56.415467    9866 logs.go:123] Gathering logs for kube-controller-manager [ed9610f55b0b] ...
	I0503 15:21:56.415477    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed9610f55b0b"
	I0503 15:21:56.428780    9866 logs.go:123] Gathering logs for storage-provisioner [537c7b40f8ef] ...
	I0503 15:21:56.428793    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 537c7b40f8ef"
	I0503 15:21:56.440010    9866 logs.go:123] Gathering logs for storage-provisioner [c67b61af436e] ...
	I0503 15:21:56.440020    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c67b61af436e"
	I0503 15:21:56.450941    9866 logs.go:123] Gathering logs for Docker ...
	I0503 15:21:56.450957    9866 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0503 15:21:56.473590    9866 logs.go:123] Gathering logs for kubelet ...
	I0503 15:21:56.473597    9866 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0503 15:21:56.510310    9866 logs.go:123] Gathering logs for container status ...
	I0503 15:21:56.510321    9866 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0503 15:21:56.522027    9866 logs.go:123] Gathering logs for etcd [40c7a6c8aada] ...
	I0503 15:21:56.522038    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 40c7a6c8aada"
	I0503 15:21:56.536179    9866 logs.go:123] Gathering logs for etcd [c5583124a53e] ...
	I0503 15:21:56.536191    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5583124a53e"
	I0503 15:21:56.550888    9866 logs.go:123] Gathering logs for kube-scheduler [b8238d29950d] ...
	I0503 15:21:56.550896    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8238d29950d"
	I0503 15:21:56.563061    9866 logs.go:123] Gathering logs for kube-scheduler [4475fda52f0c] ...
	I0503 15:21:56.563071    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4475fda52f0c"
	I0503 15:21:56.578327    9866 logs.go:123] Gathering logs for dmesg ...
	I0503 15:21:56.578341    9866 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0503 15:21:56.582366    9866 logs.go:123] Gathering logs for kube-apiserver [9153de1fe283] ...
	I0503 15:21:56.582372    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9153de1fe283"
	I0503 15:21:56.597722    9866 logs.go:123] Gathering logs for kube-controller-manager [ecab85e51144] ...
	I0503 15:21:56.597733    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ecab85e51144"
	I0503 15:21:56.615631    9866 logs.go:123] Gathering logs for describe nodes ...
	I0503 15:21:56.615641    9866 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0503 15:21:59.152516    9866 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0503 15:22:01.233540    9665 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0503 15:22:04.154689    9866 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0503 15:22:04.154978    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0503 15:22:04.182556    9866 logs.go:276] 2 containers: [9153de1fe283 5b7eb4ef241b]
	I0503 15:22:04.182684    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0503 15:22:04.201037    9866 logs.go:276] 2 containers: [40c7a6c8aada c5583124a53e]
	I0503 15:22:04.201141    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0503 15:22:04.214711    9866 logs.go:276] 1 containers: [a0cf5fe4185b]
	I0503 15:22:04.214782    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0503 15:22:04.225948    9866 logs.go:276] 2 containers: [b8238d29950d 4475fda52f0c]
	I0503 15:22:04.226015    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0503 15:22:04.236839    9866 logs.go:276] 1 containers: [acb5f92ab28d]
	I0503 15:22:04.236912    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0503 15:22:04.247074    9866 logs.go:276] 2 containers: [ecab85e51144 ed9610f55b0b]
	I0503 15:22:04.247138    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0503 15:22:04.257044    9866 logs.go:276] 0 containers: []
	W0503 15:22:04.257058    9866 logs.go:278] No container was found matching "kindnet"
	I0503 15:22:04.257112    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0503 15:22:04.267317    9866 logs.go:276] 2 containers: [537c7b40f8ef c67b61af436e]
	I0503 15:22:04.267334    9866 logs.go:123] Gathering logs for kube-controller-manager [ecab85e51144] ...
	I0503 15:22:04.267339    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ecab85e51144"
	I0503 15:22:04.285718    9866 logs.go:123] Gathering logs for storage-provisioner [537c7b40f8ef] ...
	I0503 15:22:04.285728    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 537c7b40f8ef"
	I0503 15:22:04.297726    9866 logs.go:123] Gathering logs for describe nodes ...
	I0503 15:22:04.297738    9866 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0503 15:22:04.334491    9866 logs.go:123] Gathering logs for kube-apiserver [9153de1fe283] ...
	I0503 15:22:04.334503    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9153de1fe283"
	I0503 15:22:04.348718    9866 logs.go:123] Gathering logs for etcd [40c7a6c8aada] ...
	I0503 15:22:04.348728    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 40c7a6c8aada"
	I0503 15:22:04.363348    9866 logs.go:123] Gathering logs for kube-scheduler [4475fda52f0c] ...
	I0503 15:22:04.363364    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4475fda52f0c"
	I0503 15:22:04.378052    9866 logs.go:123] Gathering logs for container status ...
	I0503 15:22:04.378063    9866 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0503 15:22:04.390302    9866 logs.go:123] Gathering logs for kubelet ...
	I0503 15:22:04.390314    9866 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0503 15:22:04.428501    9866 logs.go:123] Gathering logs for kube-apiserver [5b7eb4ef241b] ...
	I0503 15:22:04.428515    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b7eb4ef241b"
	I0503 15:22:04.467223    9866 logs.go:123] Gathering logs for etcd [c5583124a53e] ...
	I0503 15:22:04.467232    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5583124a53e"
	I0503 15:22:04.481818    9866 logs.go:123] Gathering logs for Docker ...
	I0503 15:22:04.481827    9866 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0503 15:22:04.506542    9866 logs.go:123] Gathering logs for kube-scheduler [b8238d29950d] ...
	I0503 15:22:04.506551    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8238d29950d"
	I0503 15:22:04.517724    9866 logs.go:123] Gathering logs for kube-controller-manager [ed9610f55b0b] ...
	I0503 15:22:04.517736    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed9610f55b0b"
	I0503 15:22:04.533949    9866 logs.go:123] Gathering logs for dmesg ...
	I0503 15:22:04.533958    9866 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0503 15:22:04.538177    9866 logs.go:123] Gathering logs for coredns [a0cf5fe4185b] ...
	I0503 15:22:04.538186    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0cf5fe4185b"
	I0503 15:22:04.549410    9866 logs.go:123] Gathering logs for kube-proxy [acb5f92ab28d] ...
	I0503 15:22:04.549421    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 acb5f92ab28d"
	I0503 15:22:04.561504    9866 logs.go:123] Gathering logs for storage-provisioner [c67b61af436e] ...
	I0503 15:22:04.561514    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c67b61af436e"
	I0503 15:22:06.234120    9665 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0503 15:22:06.234264    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0503 15:22:06.246377    9665 logs.go:276] 1 containers: [97b2e83dc539]
	I0503 15:22:06.246448    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0503 15:22:06.256589    9665 logs.go:276] 1 containers: [12854b004aa2]
	I0503 15:22:06.256661    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0503 15:22:06.266831    9665 logs.go:276] 2 containers: [c10faa2971c4 d411f2d0da0d]
	I0503 15:22:06.266904    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0503 15:22:06.277630    9665 logs.go:276] 1 containers: [ca00cca503fe]
	I0503 15:22:06.277698    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0503 15:22:06.289202    9665 logs.go:276] 1 containers: [bbac10efff1c]
	I0503 15:22:06.289268    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0503 15:22:06.300211    9665 logs.go:276] 1 containers: [6e6e24f8e828]
	I0503 15:22:06.300270    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0503 15:22:06.310966    9665 logs.go:276] 0 containers: []
	W0503 15:22:06.310976    9665 logs.go:278] No container was found matching "kindnet"
	I0503 15:22:06.311028    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0503 15:22:06.324637    9665 logs.go:276] 1 containers: [45a535211359]
	I0503 15:22:06.324654    9665 logs.go:123] Gathering logs for describe nodes ...
	I0503 15:22:06.324659    9665 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0503 15:22:06.362073    9665 logs.go:123] Gathering logs for kube-apiserver [97b2e83dc539] ...
	I0503 15:22:06.362087    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97b2e83dc539"
	I0503 15:22:06.377591    9665 logs.go:123] Gathering logs for etcd [12854b004aa2] ...
	I0503 15:22:06.377603    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 12854b004aa2"
	I0503 15:22:06.391165    9665 logs.go:123] Gathering logs for coredns [c10faa2971c4] ...
	I0503 15:22:06.391178    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c10faa2971c4"
	I0503 15:22:06.407110    9665 logs.go:123] Gathering logs for coredns [d411f2d0da0d] ...
	I0503 15:22:06.407125    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d411f2d0da0d"
	I0503 15:22:06.420629    9665 logs.go:123] Gathering logs for kube-scheduler [ca00cca503fe] ...
	I0503 15:22:06.420644    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca00cca503fe"
	I0503 15:22:06.434821    9665 logs.go:123] Gathering logs for kube-proxy [bbac10efff1c] ...
	I0503 15:22:06.434835    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bbac10efff1c"
	I0503 15:22:06.446141    9665 logs.go:123] Gathering logs for dmesg ...
	I0503 15:22:06.446155    9665 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0503 15:22:06.451194    9665 logs.go:123] Gathering logs for storage-provisioner [45a535211359] ...
	I0503 15:22:06.451202    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45a535211359"
	I0503 15:22:06.463096    9665 logs.go:123] Gathering logs for container status ...
	I0503 15:22:06.463109    9665 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0503 15:22:06.475102    9665 logs.go:123] Gathering logs for kube-controller-manager [6e6e24f8e828] ...
	I0503 15:22:06.475113    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e6e24f8e828"
	I0503 15:22:06.493079    9665 logs.go:123] Gathering logs for Docker ...
	I0503 15:22:06.493089    9665 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0503 15:22:06.517997    9665 logs.go:123] Gathering logs for kubelet ...
	I0503 15:22:06.518007    9665 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0503 15:22:09.054926    9665 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0503 15:22:07.074806    9866 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0503 15:22:14.057121    9665 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0503 15:22:14.057238    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0503 15:22:14.072009    9665 logs.go:276] 1 containers: [97b2e83dc539]
	I0503 15:22:14.072075    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0503 15:22:14.082823    9665 logs.go:276] 1 containers: [12854b004aa2]
	I0503 15:22:14.082893    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0503 15:22:14.093434    9665 logs.go:276] 2 containers: [c10faa2971c4 d411f2d0da0d]
	I0503 15:22:14.093500    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0503 15:22:14.103722    9665 logs.go:276] 1 containers: [ca00cca503fe]
	I0503 15:22:14.103792    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0503 15:22:14.117418    9665 logs.go:276] 1 containers: [bbac10efff1c]
	I0503 15:22:14.117489    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0503 15:22:14.127529    9665 logs.go:276] 1 containers: [6e6e24f8e828]
	I0503 15:22:14.127590    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0503 15:22:14.137423    9665 logs.go:276] 0 containers: []
	W0503 15:22:14.137434    9665 logs.go:278] No container was found matching "kindnet"
	I0503 15:22:14.137489    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0503 15:22:14.147627    9665 logs.go:276] 1 containers: [45a535211359]
	I0503 15:22:14.147642    9665 logs.go:123] Gathering logs for storage-provisioner [45a535211359] ...
	I0503 15:22:14.147647    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45a535211359"
	I0503 15:22:14.158981    9665 logs.go:123] Gathering logs for kubelet ...
	I0503 15:22:14.158994    9665 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0503 15:22:14.195732    9665 logs.go:123] Gathering logs for describe nodes ...
	I0503 15:22:14.195743    9665 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0503 15:22:14.229980    9665 logs.go:123] Gathering logs for coredns [c10faa2971c4] ...
	I0503 15:22:14.229991    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c10faa2971c4"
	I0503 15:22:14.242176    9665 logs.go:123] Gathering logs for coredns [d411f2d0da0d] ...
	I0503 15:22:14.242188    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d411f2d0da0d"
	I0503 15:22:14.253674    9665 logs.go:123] Gathering logs for kube-scheduler [ca00cca503fe] ...
	I0503 15:22:14.253686    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca00cca503fe"
	I0503 15:22:14.268394    9665 logs.go:123] Gathering logs for kube-proxy [bbac10efff1c] ...
	I0503 15:22:14.268409    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bbac10efff1c"
	I0503 15:22:14.280504    9665 logs.go:123] Gathering logs for kube-controller-manager [6e6e24f8e828] ...
	I0503 15:22:14.280519    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e6e24f8e828"
	I0503 15:22:14.302235    9665 logs.go:123] Gathering logs for Docker ...
	I0503 15:22:14.302245    9665 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0503 15:22:14.325812    9665 logs.go:123] Gathering logs for dmesg ...
	I0503 15:22:14.325819    9665 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0503 15:22:14.330430    9665 logs.go:123] Gathering logs for kube-apiserver [97b2e83dc539] ...
	I0503 15:22:14.330436    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97b2e83dc539"
	I0503 15:22:14.344764    9665 logs.go:123] Gathering logs for etcd [12854b004aa2] ...
	I0503 15:22:14.344775    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 12854b004aa2"
	I0503 15:22:14.366565    9665 logs.go:123] Gathering logs for container status ...
	I0503 15:22:14.366574    9665 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0503 15:22:12.077001    9866 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0503 15:22:12.077094    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0503 15:22:12.087913    9866 logs.go:276] 2 containers: [9153de1fe283 5b7eb4ef241b]
	I0503 15:22:12.087986    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0503 15:22:12.098564    9866 logs.go:276] 2 containers: [40c7a6c8aada c5583124a53e]
	I0503 15:22:12.098635    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0503 15:22:12.109133    9866 logs.go:276] 1 containers: [a0cf5fe4185b]
	I0503 15:22:12.109203    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0503 15:22:12.121057    9866 logs.go:276] 2 containers: [b8238d29950d 4475fda52f0c]
	I0503 15:22:12.121126    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0503 15:22:12.131427    9866 logs.go:276] 1 containers: [acb5f92ab28d]
	I0503 15:22:12.131497    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0503 15:22:12.142072    9866 logs.go:276] 2 containers: [ecab85e51144 ed9610f55b0b]
	I0503 15:22:12.142152    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0503 15:22:12.152164    9866 logs.go:276] 0 containers: []
	W0503 15:22:12.152176    9866 logs.go:278] No container was found matching "kindnet"
	I0503 15:22:12.152246    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0503 15:22:12.168385    9866 logs.go:276] 2 containers: [537c7b40f8ef c67b61af436e]
	I0503 15:22:12.168404    9866 logs.go:123] Gathering logs for dmesg ...
	I0503 15:22:12.168410    9866 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0503 15:22:12.173043    9866 logs.go:123] Gathering logs for coredns [a0cf5fe4185b] ...
	I0503 15:22:12.173049    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0cf5fe4185b"
	I0503 15:22:12.184288    9866 logs.go:123] Gathering logs for kube-apiserver [9153de1fe283] ...
	I0503 15:22:12.184300    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9153de1fe283"
	I0503 15:22:12.198351    9866 logs.go:123] Gathering logs for kube-apiserver [5b7eb4ef241b] ...
	I0503 15:22:12.198364    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b7eb4ef241b"
	I0503 15:22:12.236419    9866 logs.go:123] Gathering logs for etcd [40c7a6c8aada] ...
	I0503 15:22:12.236429    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 40c7a6c8aada"
	I0503 15:22:12.250264    9866 logs.go:123] Gathering logs for kube-scheduler [b8238d29950d] ...
	I0503 15:22:12.250277    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8238d29950d"
	I0503 15:22:12.262224    9866 logs.go:123] Gathering logs for kube-scheduler [4475fda52f0c] ...
	I0503 15:22:12.262236    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4475fda52f0c"
	I0503 15:22:12.276976    9866 logs.go:123] Gathering logs for Docker ...
	I0503 15:22:12.276987    9866 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0503 15:22:12.301434    9866 logs.go:123] Gathering logs for kubelet ...
	I0503 15:22:12.301451    9866 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0503 15:22:12.339109    9866 logs.go:123] Gathering logs for describe nodes ...
	I0503 15:22:12.339119    9866 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0503 15:22:12.374062    9866 logs.go:123] Gathering logs for kube-controller-manager [ecab85e51144] ...
	I0503 15:22:12.374074    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ecab85e51144"
	I0503 15:22:12.391909    9866 logs.go:123] Gathering logs for kube-controller-manager [ed9610f55b0b] ...
	I0503 15:22:12.391920    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed9610f55b0b"
	I0503 15:22:12.404942    9866 logs.go:123] Gathering logs for storage-provisioner [537c7b40f8ef] ...
	I0503 15:22:12.404956    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 537c7b40f8ef"
	I0503 15:22:12.416075    9866 logs.go:123] Gathering logs for storage-provisioner [c67b61af436e] ...
	I0503 15:22:12.416086    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c67b61af436e"
	I0503 15:22:12.427926    9866 logs.go:123] Gathering logs for container status ...
	I0503 15:22:12.427937    9866 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0503 15:22:12.439752    9866 logs.go:123] Gathering logs for etcd [c5583124a53e] ...
	I0503 15:22:12.439761    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5583124a53e"
	I0503 15:22:12.455039    9866 logs.go:123] Gathering logs for kube-proxy [acb5f92ab28d] ...
	I0503 15:22:12.455050    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 acb5f92ab28d"
	I0503 15:22:14.970956    9866 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0503 15:22:16.880979    9665 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0503 15:22:19.973163    9866 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0503 15:22:19.973427    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0503 15:22:19.999684    9866 logs.go:276] 2 containers: [9153de1fe283 5b7eb4ef241b]
	I0503 15:22:19.999811    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0503 15:22:20.016879    9866 logs.go:276] 2 containers: [40c7a6c8aada c5583124a53e]
	I0503 15:22:20.016971    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0503 15:22:20.029992    9866 logs.go:276] 1 containers: [a0cf5fe4185b]
	I0503 15:22:20.030069    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0503 15:22:20.041796    9866 logs.go:276] 2 containers: [b8238d29950d 4475fda52f0c]
	I0503 15:22:20.041868    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0503 15:22:20.055873    9866 logs.go:276] 1 containers: [acb5f92ab28d]
	I0503 15:22:20.055940    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0503 15:22:20.066311    9866 logs.go:276] 2 containers: [ecab85e51144 ed9610f55b0b]
	I0503 15:22:20.066370    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0503 15:22:20.077009    9866 logs.go:276] 0 containers: []
	W0503 15:22:20.077023    9866 logs.go:278] No container was found matching "kindnet"
	I0503 15:22:20.077076    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0503 15:22:20.087505    9866 logs.go:276] 2 containers: [537c7b40f8ef c67b61af436e]
	I0503 15:22:20.087522    9866 logs.go:123] Gathering logs for storage-provisioner [537c7b40f8ef] ...
	I0503 15:22:20.087527    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 537c7b40f8ef"
	I0503 15:22:20.099062    9866 logs.go:123] Gathering logs for storage-provisioner [c67b61af436e] ...
	I0503 15:22:20.099077    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c67b61af436e"
	I0503 15:22:20.110425    9866 logs.go:123] Gathering logs for container status ...
	I0503 15:22:20.110436    9866 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0503 15:22:20.121980    9866 logs.go:123] Gathering logs for dmesg ...
	I0503 15:22:20.121991    9866 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0503 15:22:20.126180    9866 logs.go:123] Gathering logs for etcd [40c7a6c8aada] ...
	I0503 15:22:20.126186    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 40c7a6c8aada"
	I0503 15:22:20.140101    9866 logs.go:123] Gathering logs for kube-proxy [acb5f92ab28d] ...
	I0503 15:22:20.140110    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 acb5f92ab28d"
	I0503 15:22:20.151916    9866 logs.go:123] Gathering logs for Docker ...
	I0503 15:22:20.151927    9866 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0503 15:22:20.174505    9866 logs.go:123] Gathering logs for kube-apiserver [9153de1fe283] ...
	I0503 15:22:20.174515    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9153de1fe283"
	I0503 15:22:20.191225    9866 logs.go:123] Gathering logs for coredns [a0cf5fe4185b] ...
	I0503 15:22:20.191240    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0cf5fe4185b"
	I0503 15:22:20.202789    9866 logs.go:123] Gathering logs for kube-controller-manager [ecab85e51144] ...
	I0503 15:22:20.202799    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ecab85e51144"
	I0503 15:22:20.220504    9866 logs.go:123] Gathering logs for kube-controller-manager [ed9610f55b0b] ...
	I0503 15:22:20.220521    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed9610f55b0b"
	I0503 15:22:20.233702    9866 logs.go:123] Gathering logs for kubelet ...
	I0503 15:22:20.233713    9866 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0503 15:22:20.272755    9866 logs.go:123] Gathering logs for kube-apiserver [5b7eb4ef241b] ...
	I0503 15:22:20.272765    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b7eb4ef241b"
	I0503 15:22:20.310002    9866 logs.go:123] Gathering logs for kube-scheduler [b8238d29950d] ...
	I0503 15:22:20.310013    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8238d29950d"
	I0503 15:22:20.321501    9866 logs.go:123] Gathering logs for kube-scheduler [4475fda52f0c] ...
	I0503 15:22:20.321514    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4475fda52f0c"
	I0503 15:22:20.336077    9866 logs.go:123] Gathering logs for describe nodes ...
	I0503 15:22:20.336088    9866 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0503 15:22:20.371556    9866 logs.go:123] Gathering logs for etcd [c5583124a53e] ...
	I0503 15:22:20.371567    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5583124a53e"
	I0503 15:22:21.883241    9665 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0503 15:22:21.883415    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0503 15:22:21.903933    9665 logs.go:276] 1 containers: [97b2e83dc539]
	I0503 15:22:21.904043    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0503 15:22:21.918579    9665 logs.go:276] 1 containers: [12854b004aa2]
	I0503 15:22:21.918656    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0503 15:22:21.935900    9665 logs.go:276] 2 containers: [c10faa2971c4 d411f2d0da0d]
	I0503 15:22:21.935970    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0503 15:22:21.951150    9665 logs.go:276] 1 containers: [ca00cca503fe]
	I0503 15:22:21.951224    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0503 15:22:21.964038    9665 logs.go:276] 1 containers: [bbac10efff1c]
	I0503 15:22:21.964109    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0503 15:22:21.973988    9665 logs.go:276] 1 containers: [6e6e24f8e828]
	I0503 15:22:21.974055    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0503 15:22:21.983810    9665 logs.go:276] 0 containers: []
	W0503 15:22:21.983820    9665 logs.go:278] No container was found matching "kindnet"
	I0503 15:22:21.983876    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0503 15:22:21.995263    9665 logs.go:276] 1 containers: [45a535211359]
	I0503 15:22:21.995278    9665 logs.go:123] Gathering logs for describe nodes ...
	I0503 15:22:21.995283    9665 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0503 15:22:22.035033    9665 logs.go:123] Gathering logs for kube-apiserver [97b2e83dc539] ...
	I0503 15:22:22.035045    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97b2e83dc539"
	I0503 15:22:22.049804    9665 logs.go:123] Gathering logs for etcd [12854b004aa2] ...
	I0503 15:22:22.049814    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 12854b004aa2"
	I0503 15:22:22.063918    9665 logs.go:123] Gathering logs for kube-proxy [bbac10efff1c] ...
	I0503 15:22:22.063934    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bbac10efff1c"
	I0503 15:22:22.075926    9665 logs.go:123] Gathering logs for container status ...
	I0503 15:22:22.075940    9665 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0503 15:22:22.088288    9665 logs.go:123] Gathering logs for kube-controller-manager [6e6e24f8e828] ...
	I0503 15:22:22.088300    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e6e24f8e828"
	I0503 15:22:22.106000    9665 logs.go:123] Gathering logs for storage-provisioner [45a535211359] ...
	I0503 15:22:22.106014    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45a535211359"
	I0503 15:22:22.117550    9665 logs.go:123] Gathering logs for Docker ...
	I0503 15:22:22.117559    9665 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0503 15:22:22.140912    9665 logs.go:123] Gathering logs for kubelet ...
	I0503 15:22:22.140923    9665 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0503 15:22:22.175942    9665 logs.go:123] Gathering logs for dmesg ...
	I0503 15:22:22.175952    9665 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0503 15:22:22.180090    9665 logs.go:123] Gathering logs for coredns [c10faa2971c4] ...
	I0503 15:22:22.180098    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c10faa2971c4"
	I0503 15:22:22.191870    9665 logs.go:123] Gathering logs for coredns [d411f2d0da0d] ...
	I0503 15:22:22.191881    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d411f2d0da0d"
	I0503 15:22:22.203463    9665 logs.go:123] Gathering logs for kube-scheduler [ca00cca503fe] ...
	I0503 15:22:22.203476    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca00cca503fe"
	I0503 15:22:22.885754    9866 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0503 15:22:24.719327    9665 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0503 15:22:27.888179    9866 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0503 15:22:27.888631    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0503 15:22:27.930927    9866 logs.go:276] 2 containers: [9153de1fe283 5b7eb4ef241b]
	I0503 15:22:27.931060    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0503 15:22:27.954051    9866 logs.go:276] 2 containers: [40c7a6c8aada c5583124a53e]
	I0503 15:22:27.954142    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0503 15:22:27.969042    9866 logs.go:276] 1 containers: [a0cf5fe4185b]
	I0503 15:22:27.969121    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0503 15:22:27.983187    9866 logs.go:276] 2 containers: [b8238d29950d 4475fda52f0c]
	I0503 15:22:27.983261    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0503 15:22:27.993584    9866 logs.go:276] 1 containers: [acb5f92ab28d]
	I0503 15:22:27.993646    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0503 15:22:28.004921    9866 logs.go:276] 2 containers: [ecab85e51144 ed9610f55b0b]
	I0503 15:22:28.004991    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0503 15:22:28.015539    9866 logs.go:276] 0 containers: []
	W0503 15:22:28.015551    9866 logs.go:278] No container was found matching "kindnet"
	I0503 15:22:28.015607    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0503 15:22:28.029278    9866 logs.go:276] 2 containers: [537c7b40f8ef c67b61af436e]
	I0503 15:22:28.029297    9866 logs.go:123] Gathering logs for coredns [a0cf5fe4185b] ...
	I0503 15:22:28.029302    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0cf5fe4185b"
	I0503 15:22:28.046917    9866 logs.go:123] Gathering logs for kube-controller-manager [ed9610f55b0b] ...
	I0503 15:22:28.046929    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed9610f55b0b"
	I0503 15:22:28.063220    9866 logs.go:123] Gathering logs for storage-provisioner [c67b61af436e] ...
	I0503 15:22:28.063231    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c67b61af436e"
	I0503 15:22:28.079886    9866 logs.go:123] Gathering logs for kubelet ...
	I0503 15:22:28.079902    9866 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0503 15:22:28.117820    9866 logs.go:123] Gathering logs for describe nodes ...
	I0503 15:22:28.117832    9866 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0503 15:22:28.152920    9866 logs.go:123] Gathering logs for kube-apiserver [9153de1fe283] ...
	I0503 15:22:28.152934    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9153de1fe283"
	I0503 15:22:28.166836    9866 logs.go:123] Gathering logs for etcd [40c7a6c8aada] ...
	I0503 15:22:28.166848    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 40c7a6c8aada"
	I0503 15:22:28.180222    9866 logs.go:123] Gathering logs for etcd [c5583124a53e] ...
	I0503 15:22:28.180234    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5583124a53e"
	I0503 15:22:28.194476    9866 logs.go:123] Gathering logs for dmesg ...
	I0503 15:22:28.194489    9866 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0503 15:22:28.198624    9866 logs.go:123] Gathering logs for kube-scheduler [4475fda52f0c] ...
	I0503 15:22:28.198631    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4475fda52f0c"
	I0503 15:22:28.213268    9866 logs.go:123] Gathering logs for kube-proxy [acb5f92ab28d] ...
	I0503 15:22:28.213278    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 acb5f92ab28d"
	I0503 15:22:28.224603    9866 logs.go:123] Gathering logs for storage-provisioner [537c7b40f8ef] ...
	I0503 15:22:28.224614    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 537c7b40f8ef"
	I0503 15:22:28.235943    9866 logs.go:123] Gathering logs for kube-apiserver [5b7eb4ef241b] ...
	I0503 15:22:28.235953    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b7eb4ef241b"
	I0503 15:22:28.272981    9866 logs.go:123] Gathering logs for kube-scheduler [b8238d29950d] ...
	I0503 15:22:28.272996    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8238d29950d"
	I0503 15:22:28.285121    9866 logs.go:123] Gathering logs for kube-controller-manager [ecab85e51144] ...
	I0503 15:22:28.285131    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ecab85e51144"
	I0503 15:22:28.302618    9866 logs.go:123] Gathering logs for Docker ...
	I0503 15:22:28.302629    9866 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0503 15:22:28.325298    9866 logs.go:123] Gathering logs for container status ...
	I0503 15:22:28.325304    9866 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0503 15:22:30.839441    9866 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0503 15:22:29.721569    9665 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0503 15:22:29.721815    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0503 15:22:29.748830    9665 logs.go:276] 1 containers: [97b2e83dc539]
	I0503 15:22:29.748951    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0503 15:22:29.768964    9665 logs.go:276] 1 containers: [12854b004aa2]
	I0503 15:22:29.769035    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0503 15:22:29.782027    9665 logs.go:276] 4 containers: [6b26fe9dd44c bcfa459d9a7e c10faa2971c4 d411f2d0da0d]
	I0503 15:22:29.782108    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0503 15:22:29.793918    9665 logs.go:276] 1 containers: [ca00cca503fe]
	I0503 15:22:29.793984    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0503 15:22:29.804198    9665 logs.go:276] 1 containers: [bbac10efff1c]
	I0503 15:22:29.804264    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0503 15:22:29.814730    9665 logs.go:276] 1 containers: [6e6e24f8e828]
	I0503 15:22:29.814796    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0503 15:22:29.824434    9665 logs.go:276] 0 containers: []
	W0503 15:22:29.824456    9665 logs.go:278] No container was found matching "kindnet"
	I0503 15:22:29.824505    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0503 15:22:29.834900    9665 logs.go:276] 1 containers: [45a535211359]
	I0503 15:22:29.834915    9665 logs.go:123] Gathering logs for describe nodes ...
	I0503 15:22:29.834921    9665 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0503 15:22:29.870499    9665 logs.go:123] Gathering logs for kube-proxy [bbac10efff1c] ...
	I0503 15:22:29.870509    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bbac10efff1c"
	I0503 15:22:29.882808    9665 logs.go:123] Gathering logs for kube-scheduler [ca00cca503fe] ...
	I0503 15:22:29.882819    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca00cca503fe"
	I0503 15:22:29.897257    9665 logs.go:123] Gathering logs for storage-provisioner [45a535211359] ...
	I0503 15:22:29.897269    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45a535211359"
	I0503 15:22:29.908976    9665 logs.go:123] Gathering logs for etcd [12854b004aa2] ...
	I0503 15:22:29.908988    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 12854b004aa2"
	I0503 15:22:29.923296    9665 logs.go:123] Gathering logs for coredns [6b26fe9dd44c] ...
	I0503 15:22:29.923307    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b26fe9dd44c"
	I0503 15:22:29.934756    9665 logs.go:123] Gathering logs for coredns [c10faa2971c4] ...
	I0503 15:22:29.934769    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c10faa2971c4"
	I0503 15:22:29.947196    9665 logs.go:123] Gathering logs for container status ...
	I0503 15:22:29.947206    9665 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0503 15:22:29.958580    9665 logs.go:123] Gathering logs for kubelet ...
	I0503 15:22:29.958593    9665 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0503 15:22:29.995175    9665 logs.go:123] Gathering logs for kube-apiserver [97b2e83dc539] ...
	I0503 15:22:29.995185    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97b2e83dc539"
	I0503 15:22:30.009609    9665 logs.go:123] Gathering logs for coredns [bcfa459d9a7e] ...
	I0503 15:22:30.009622    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bcfa459d9a7e"
	I0503 15:22:30.022099    9665 logs.go:123] Gathering logs for Docker ...
	I0503 15:22:30.022111    9665 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0503 15:22:30.046815    9665 logs.go:123] Gathering logs for dmesg ...
	I0503 15:22:30.046824    9665 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0503 15:22:30.051402    9665 logs.go:123] Gathering logs for coredns [d411f2d0da0d] ...
	I0503 15:22:30.051409    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d411f2d0da0d"
	I0503 15:22:30.063995    9665 logs.go:123] Gathering logs for kube-controller-manager [6e6e24f8e828] ...
	I0503 15:22:30.064008    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e6e24f8e828"
	I0503 15:22:32.583877    9665 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0503 15:22:35.842052    9866 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0503 15:22:35.842303    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0503 15:22:35.876775    9866 logs.go:276] 2 containers: [9153de1fe283 5b7eb4ef241b]
	I0503 15:22:35.876919    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0503 15:22:35.901656    9866 logs.go:276] 2 containers: [40c7a6c8aada c5583124a53e]
	I0503 15:22:35.901754    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0503 15:22:35.917133    9866 logs.go:276] 1 containers: [a0cf5fe4185b]
	I0503 15:22:35.917213    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0503 15:22:35.933522    9866 logs.go:276] 2 containers: [b8238d29950d 4475fda52f0c]
	I0503 15:22:35.933591    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0503 15:22:35.945757    9866 logs.go:276] 1 containers: [acb5f92ab28d]
	I0503 15:22:35.945828    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0503 15:22:35.956569    9866 logs.go:276] 2 containers: [ecab85e51144 ed9610f55b0b]
	I0503 15:22:35.956636    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0503 15:22:35.967181    9866 logs.go:276] 0 containers: []
	W0503 15:22:35.967199    9866 logs.go:278] No container was found matching "kindnet"
	I0503 15:22:35.967259    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0503 15:22:35.977907    9866 logs.go:276] 2 containers: [537c7b40f8ef c67b61af436e]
	I0503 15:22:35.977925    9866 logs.go:123] Gathering logs for kube-scheduler [b8238d29950d] ...
	I0503 15:22:35.977930    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8238d29950d"
	I0503 15:22:35.989872    9866 logs.go:123] Gathering logs for kube-controller-manager [ed9610f55b0b] ...
	I0503 15:22:35.989884    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed9610f55b0b"
	I0503 15:22:36.003731    9866 logs.go:123] Gathering logs for container status ...
	I0503 15:22:36.003742    9866 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0503 15:22:36.015910    9866 logs.go:123] Gathering logs for kube-apiserver [9153de1fe283] ...
	I0503 15:22:36.015921    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9153de1fe283"
	I0503 15:22:36.030599    9866 logs.go:123] Gathering logs for describe nodes ...
	I0503 15:22:36.030609    9866 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0503 15:22:36.067690    9866 logs.go:123] Gathering logs for etcd [40c7a6c8aada] ...
	I0503 15:22:36.067703    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 40c7a6c8aada"
	I0503 15:22:36.082455    9866 logs.go:123] Gathering logs for kube-controller-manager [ecab85e51144] ...
	I0503 15:22:36.082471    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ecab85e51144"
	I0503 15:22:36.103435    9866 logs.go:123] Gathering logs for dmesg ...
	I0503 15:22:36.103457    9866 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0503 15:22:36.110814    9866 logs.go:123] Gathering logs for kube-proxy [acb5f92ab28d] ...
	I0503 15:22:36.110827    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 acb5f92ab28d"
	I0503 15:22:36.127023    9866 logs.go:123] Gathering logs for storage-provisioner [c67b61af436e] ...
	I0503 15:22:36.127035    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c67b61af436e"
	I0503 15:22:36.139167    9866 logs.go:123] Gathering logs for Docker ...
	I0503 15:22:36.139177    9866 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0503 15:22:36.162183    9866 logs.go:123] Gathering logs for kube-scheduler [4475fda52f0c] ...
	I0503 15:22:36.162206    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4475fda52f0c"
	I0503 15:22:36.181700    9866 logs.go:123] Gathering logs for kube-apiserver [5b7eb4ef241b] ...
	I0503 15:22:36.181710    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b7eb4ef241b"
	I0503 15:22:37.586182    9665 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0503 15:22:37.586390    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0503 15:22:37.608283    9665 logs.go:276] 1 containers: [97b2e83dc539]
	I0503 15:22:37.608370    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0503 15:22:37.622321    9665 logs.go:276] 1 containers: [12854b004aa2]
	I0503 15:22:37.622392    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0503 15:22:37.634168    9665 logs.go:276] 4 containers: [6b26fe9dd44c bcfa459d9a7e c10faa2971c4 d411f2d0da0d]
	I0503 15:22:37.634235    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0503 15:22:37.644818    9665 logs.go:276] 1 containers: [ca00cca503fe]
	I0503 15:22:37.644879    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0503 15:22:37.659229    9665 logs.go:276] 1 containers: [bbac10efff1c]
	I0503 15:22:37.659293    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0503 15:22:37.669419    9665 logs.go:276] 1 containers: [6e6e24f8e828]
	I0503 15:22:37.669485    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0503 15:22:37.680088    9665 logs.go:276] 0 containers: []
	W0503 15:22:37.680099    9665 logs.go:278] No container was found matching "kindnet"
	I0503 15:22:37.680158    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0503 15:22:37.691319    9665 logs.go:276] 1 containers: [45a535211359]
	I0503 15:22:37.691335    9665 logs.go:123] Gathering logs for dmesg ...
	I0503 15:22:37.691340    9665 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0503 15:22:37.695782    9665 logs.go:123] Gathering logs for coredns [bcfa459d9a7e] ...
	I0503 15:22:37.695791    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bcfa459d9a7e"
	I0503 15:22:37.709817    9665 logs.go:123] Gathering logs for coredns [c10faa2971c4] ...
	I0503 15:22:37.709831    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c10faa2971c4"
	I0503 15:22:37.721920    9665 logs.go:123] Gathering logs for storage-provisioner [45a535211359] ...
	I0503 15:22:37.721934    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45a535211359"
	I0503 15:22:37.733356    9665 logs.go:123] Gathering logs for container status ...
	I0503 15:22:37.733366    9665 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0503 15:22:37.745828    9665 logs.go:123] Gathering logs for kubelet ...
	I0503 15:22:37.745839    9665 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0503 15:22:37.783877    9665 logs.go:123] Gathering logs for describe nodes ...
	I0503 15:22:37.783887    9665 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0503 15:22:37.822644    9665 logs.go:123] Gathering logs for kube-apiserver [97b2e83dc539] ...
	I0503 15:22:37.822657    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97b2e83dc539"
	I0503 15:22:37.836988    9665 logs.go:123] Gathering logs for coredns [6b26fe9dd44c] ...
	I0503 15:22:37.836999    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b26fe9dd44c"
	I0503 15:22:37.848463    9665 logs.go:123] Gathering logs for kube-scheduler [ca00cca503fe] ...
	I0503 15:22:37.848476    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca00cca503fe"
	I0503 15:22:37.862876    9665 logs.go:123] Gathering logs for Docker ...
	I0503 15:22:37.862889    9665 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0503 15:22:37.888143    9665 logs.go:123] Gathering logs for etcd [12854b004aa2] ...
	I0503 15:22:37.888153    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 12854b004aa2"
	I0503 15:22:37.902537    9665 logs.go:123] Gathering logs for coredns [d411f2d0da0d] ...
	I0503 15:22:37.902548    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d411f2d0da0d"
	I0503 15:22:37.914245    9665 logs.go:123] Gathering logs for kube-proxy [bbac10efff1c] ...
	I0503 15:22:37.914256    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bbac10efff1c"
	I0503 15:22:37.925650    9665 logs.go:123] Gathering logs for kube-controller-manager [6e6e24f8e828] ...
	I0503 15:22:37.925663    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e6e24f8e828"
	I0503 15:22:36.219970    9866 logs.go:123] Gathering logs for etcd [c5583124a53e] ...
	I0503 15:22:36.219981    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5583124a53e"
	I0503 15:22:36.238835    9866 logs.go:123] Gathering logs for coredns [a0cf5fe4185b] ...
	I0503 15:22:36.238849    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0cf5fe4185b"
	I0503 15:22:36.250003    9866 logs.go:123] Gathering logs for storage-provisioner [537c7b40f8ef] ...
	I0503 15:22:36.250016    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 537c7b40f8ef"
	I0503 15:22:36.261595    9866 logs.go:123] Gathering logs for kubelet ...
	I0503 15:22:36.261611    9866 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0503 15:22:38.801690    9866 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0503 15:22:40.448755    9665 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0503 15:22:43.804248    9866 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0503 15:22:43.804511    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0503 15:22:43.831134    9866 logs.go:276] 2 containers: [9153de1fe283 5b7eb4ef241b]
	I0503 15:22:43.831253    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0503 15:22:43.848715    9866 logs.go:276] 2 containers: [40c7a6c8aada c5583124a53e]
	I0503 15:22:43.848802    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0503 15:22:43.861767    9866 logs.go:276] 1 containers: [a0cf5fe4185b]
	I0503 15:22:43.861836    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0503 15:22:43.876377    9866 logs.go:276] 2 containers: [b8238d29950d 4475fda52f0c]
	I0503 15:22:43.876441    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0503 15:22:43.886309    9866 logs.go:276] 1 containers: [acb5f92ab28d]
	I0503 15:22:43.886369    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0503 15:22:43.896531    9866 logs.go:276] 2 containers: [ecab85e51144 ed9610f55b0b]
	I0503 15:22:43.896598    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0503 15:22:43.906855    9866 logs.go:276] 0 containers: []
	W0503 15:22:43.906867    9866 logs.go:278] No container was found matching "kindnet"
	I0503 15:22:43.906925    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0503 15:22:43.917209    9866 logs.go:276] 2 containers: [537c7b40f8ef c67b61af436e]
	I0503 15:22:43.917228    9866 logs.go:123] Gathering logs for coredns [a0cf5fe4185b] ...
	I0503 15:22:43.917233    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0cf5fe4185b"
	I0503 15:22:43.928246    9866 logs.go:123] Gathering logs for kube-controller-manager [ed9610f55b0b] ...
	I0503 15:22:43.928259    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed9610f55b0b"
	I0503 15:22:43.944533    9866 logs.go:123] Gathering logs for storage-provisioner [537c7b40f8ef] ...
	I0503 15:22:43.944544    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 537c7b40f8ef"
	I0503 15:22:43.955708    9866 logs.go:123] Gathering logs for container status ...
	I0503 15:22:43.955720    9866 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0503 15:22:43.967602    9866 logs.go:123] Gathering logs for etcd [40c7a6c8aada] ...
	I0503 15:22:43.967617    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 40c7a6c8aada"
	I0503 15:22:43.981289    9866 logs.go:123] Gathering logs for etcd [c5583124a53e] ...
	I0503 15:22:43.981300    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5583124a53e"
	I0503 15:22:43.995866    9866 logs.go:123] Gathering logs for kube-scheduler [b8238d29950d] ...
	I0503 15:22:43.995880    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8238d29950d"
	I0503 15:22:44.007066    9866 logs.go:123] Gathering logs for kube-proxy [acb5f92ab28d] ...
	I0503 15:22:44.007079    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 acb5f92ab28d"
	I0503 15:22:44.018690    9866 logs.go:123] Gathering logs for storage-provisioner [c67b61af436e] ...
	I0503 15:22:44.018700    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c67b61af436e"
	I0503 15:22:44.031598    9866 logs.go:123] Gathering logs for kubelet ...
	I0503 15:22:44.031609    9866 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0503 15:22:44.071267    9866 logs.go:123] Gathering logs for describe nodes ...
	I0503 15:22:44.071278    9866 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0503 15:22:44.107760    9866 logs.go:123] Gathering logs for kube-apiserver [9153de1fe283] ...
	I0503 15:22:44.107772    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9153de1fe283"
	I0503 15:22:44.122263    9866 logs.go:123] Gathering logs for kube-controller-manager [ecab85e51144] ...
	I0503 15:22:44.122277    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ecab85e51144"
	I0503 15:22:44.140871    9866 logs.go:123] Gathering logs for Docker ...
	I0503 15:22:44.140881    9866 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0503 15:22:44.168432    9866 logs.go:123] Gathering logs for dmesg ...
	I0503 15:22:44.168441    9866 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0503 15:22:44.172649    9866 logs.go:123] Gathering logs for kube-apiserver [5b7eb4ef241b] ...
	I0503 15:22:44.172656    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b7eb4ef241b"
	I0503 15:22:44.211363    9866 logs.go:123] Gathering logs for kube-scheduler [4475fda52f0c] ...
	I0503 15:22:44.211384    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4475fda52f0c"
	I0503 15:22:45.451415    9665 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0503 15:22:45.451899    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0503 15:22:45.488677    9665 logs.go:276] 1 containers: [97b2e83dc539]
	I0503 15:22:45.488813    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0503 15:22:45.518293    9665 logs.go:276] 1 containers: [12854b004aa2]
	I0503 15:22:45.518392    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0503 15:22:45.531951    9665 logs.go:276] 4 containers: [6b26fe9dd44c bcfa459d9a7e c10faa2971c4 d411f2d0da0d]
	I0503 15:22:45.532031    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0503 15:22:45.543047    9665 logs.go:276] 1 containers: [ca00cca503fe]
	I0503 15:22:45.543116    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0503 15:22:45.553920    9665 logs.go:276] 1 containers: [bbac10efff1c]
	I0503 15:22:45.553985    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0503 15:22:45.565947    9665 logs.go:276] 1 containers: [6e6e24f8e828]
	I0503 15:22:45.566015    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0503 15:22:45.576290    9665 logs.go:276] 0 containers: []
	W0503 15:22:45.576303    9665 logs.go:278] No container was found matching "kindnet"
	I0503 15:22:45.576365    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0503 15:22:45.591261    9665 logs.go:276] 1 containers: [45a535211359]
	I0503 15:22:45.591278    9665 logs.go:123] Gathering logs for coredns [bcfa459d9a7e] ...
	I0503 15:22:45.591283    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bcfa459d9a7e"
	I0503 15:22:45.603245    9665 logs.go:123] Gathering logs for coredns [d411f2d0da0d] ...
	I0503 15:22:45.603256    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d411f2d0da0d"
	I0503 15:22:45.614585    9665 logs.go:123] Gathering logs for container status ...
	I0503 15:22:45.614596    9665 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0503 15:22:45.626020    9665 logs.go:123] Gathering logs for kubelet ...
	I0503 15:22:45.626030    9665 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0503 15:22:45.661369    9665 logs.go:123] Gathering logs for kube-controller-manager [6e6e24f8e828] ...
	I0503 15:22:45.661380    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e6e24f8e828"
	I0503 15:22:45.683022    9665 logs.go:123] Gathering logs for Docker ...
	I0503 15:22:45.683032    9665 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0503 15:22:45.706247    9665 logs.go:123] Gathering logs for coredns [c10faa2971c4] ...
	I0503 15:22:45.706254    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c10faa2971c4"
	I0503 15:22:45.717682    9665 logs.go:123] Gathering logs for kube-scheduler [ca00cca503fe] ...
	I0503 15:22:45.717697    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca00cca503fe"
	I0503 15:22:45.732691    9665 logs.go:123] Gathering logs for describe nodes ...
	I0503 15:22:45.732701    9665 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0503 15:22:45.769363    9665 logs.go:123] Gathering logs for kube-apiserver [97b2e83dc539] ...
	I0503 15:22:45.769375    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97b2e83dc539"
	I0503 15:22:45.784357    9665 logs.go:123] Gathering logs for etcd [12854b004aa2] ...
	I0503 15:22:45.784369    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 12854b004aa2"
	I0503 15:22:45.803640    9665 logs.go:123] Gathering logs for coredns [6b26fe9dd44c] ...
	I0503 15:22:45.803651    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b26fe9dd44c"
	I0503 15:22:45.815077    9665 logs.go:123] Gathering logs for kube-proxy [bbac10efff1c] ...
	I0503 15:22:45.815090    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bbac10efff1c"
	I0503 15:22:45.826255    9665 logs.go:123] Gathering logs for storage-provisioner [45a535211359] ...
	I0503 15:22:45.826265    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45a535211359"
	I0503 15:22:45.838168    9665 logs.go:123] Gathering logs for dmesg ...
	I0503 15:22:45.838181    9665 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
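The cycle above repeats for the rest of the run: each round gathers component logs, then minikube probes the apiserver's /healthz endpoint and, when the probe times out, starts another gathering round. A minimal Go sketch of that probe loop follows; the endpoint URL is taken from the log lines, while the timeout, retry interval, and overall deadline are illustrative assumptions, not minikube's actual values.

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// pollHealthz mimics the probe logged by api_server.go above: GET /healthz
// with a short client timeout, retried until an overall deadline expires.
// Timeout and interval here are illustrative, not minikube's exact code.
func pollHealthz(url string, interval, deadline time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second, // roughly the gap between "Checking" and "stopped" lines
		Transport: &http.Transport{
			// The apiserver serves a self-signed cert, so verification is skipped here.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	stop := time.Now().Add(deadline)
	for time.Now().Before(stop) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // apiserver answered: healthy
			}
		}
		time.Sleep(interval) // back off before the next probe
	}
	return fmt.Errorf("apiserver never became healthy at %s", url)
}

func main() {
	if err := pollHealthz("https://10.0.2.15:8443/healthz", 3*time.Second, time.Minute); err != nil {
		fmt.Println(err)
	}
}

In this run every probe ends in "context deadline exceeded", which is why the same log-gathering round recurs below for both test processes (pids 9665 and 9866) whose output is interleaved.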
	I0503 15:22:48.343431    9665 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0503 15:22:46.731360    9866 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0503 15:22:53.344745    9665 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0503 15:22:53.345115    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0503 15:22:53.378200    9665 logs.go:276] 1 containers: [97b2e83dc539]
	I0503 15:22:53.378327    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0503 15:22:53.397880    9665 logs.go:276] 1 containers: [12854b004aa2]
	I0503 15:22:53.397968    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0503 15:22:53.412005    9665 logs.go:276] 4 containers: [6b26fe9dd44c bcfa459d9a7e c10faa2971c4 d411f2d0da0d]
	I0503 15:22:53.412082    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0503 15:22:53.423856    9665 logs.go:276] 1 containers: [ca00cca503fe]
	I0503 15:22:53.423917    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0503 15:22:53.434453    9665 logs.go:276] 1 containers: [bbac10efff1c]
	I0503 15:22:53.434524    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0503 15:22:53.448594    9665 logs.go:276] 1 containers: [6e6e24f8e828]
	I0503 15:22:53.448657    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0503 15:22:53.458855    9665 logs.go:276] 0 containers: []
	W0503 15:22:53.458868    9665 logs.go:278] No container was found matching "kindnet"
	I0503 15:22:53.458925    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0503 15:22:53.469156    9665 logs.go:276] 1 containers: [45a535211359]
	I0503 15:22:53.469175    9665 logs.go:123] Gathering logs for kubelet ...
	I0503 15:22:53.469181    9665 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0503 15:22:53.504298    9665 logs.go:123] Gathering logs for kube-apiserver [97b2e83dc539] ...
	I0503 15:22:53.504306    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97b2e83dc539"
	I0503 15:22:53.518206    9665 logs.go:123] Gathering logs for coredns [d411f2d0da0d] ...
	I0503 15:22:53.518217    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d411f2d0da0d"
	I0503 15:22:53.529835    9665 logs.go:123] Gathering logs for dmesg ...
	I0503 15:22:53.529846    9665 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0503 15:22:53.534575    9665 logs.go:123] Gathering logs for coredns [bcfa459d9a7e] ...
	I0503 15:22:53.534583    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bcfa459d9a7e"
	I0503 15:22:53.546267    9665 logs.go:123] Gathering logs for kube-proxy [bbac10efff1c] ...
	I0503 15:22:53.546280    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bbac10efff1c"
	I0503 15:22:53.558757    9665 logs.go:123] Gathering logs for coredns [c10faa2971c4] ...
	I0503 15:22:53.558772    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c10faa2971c4"
	I0503 15:22:53.570615    9665 logs.go:123] Gathering logs for kube-scheduler [ca00cca503fe] ...
	I0503 15:22:53.570627    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca00cca503fe"
	I0503 15:22:53.588178    9665 logs.go:123] Gathering logs for storage-provisioner [45a535211359] ...
	I0503 15:22:53.588190    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45a535211359"
	I0503 15:22:53.602024    9665 logs.go:123] Gathering logs for describe nodes ...
	I0503 15:22:53.602035    9665 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0503 15:22:53.638897    9665 logs.go:123] Gathering logs for etcd [12854b004aa2] ...
	I0503 15:22:53.638911    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 12854b004aa2"
	I0503 15:22:53.653316    9665 logs.go:123] Gathering logs for coredns [6b26fe9dd44c] ...
	I0503 15:22:53.653329    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b26fe9dd44c"
	I0503 15:22:53.664720    9665 logs.go:123] Gathering logs for kube-controller-manager [6e6e24f8e828] ...
	I0503 15:22:53.664731    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e6e24f8e828"
	I0503 15:22:53.685949    9665 logs.go:123] Gathering logs for Docker ...
	I0503 15:22:53.685959    9665 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0503 15:22:53.710276    9665 logs.go:123] Gathering logs for container status ...
	I0503 15:22:53.710283    9665 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
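The "container status" step on the line above uses a shell fallback: run crictl if it is on PATH, otherwise fall back to plain docker ps -a. An equivalent sketch in Go, for illustration only (minikube does this inline in bash, exactly as logged):

package main

import (
	"fmt"
	"os/exec"
)

// containerStatus mirrors the fallback in the "container status" step above:
// prefer crictl when available, otherwise list containers with docker.
// Illustrative sketch, not minikube's implementation.
func containerStatus() ([]byte, error) {
	if path, err := exec.LookPath("crictl"); err == nil {
		return exec.Command("sudo", path, "ps", "-a").CombinedOutput()
	}
	return exec.Command("sudo", "docker", "ps", "-a").CombinedOutput()
}

func main() {
	out, err := containerStatus()
	if err != nil {
		fmt.Println("container status failed:", err)
	}
	fmt.Printf("%s", out)
}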
	I0503 15:22:51.733710    9866 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0503 15:22:51.734088    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0503 15:22:51.771942    9866 logs.go:276] 2 containers: [9153de1fe283 5b7eb4ef241b]
	I0503 15:22:51.772078    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0503 15:22:51.792632    9866 logs.go:276] 2 containers: [40c7a6c8aada c5583124a53e]
	I0503 15:22:51.792723    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0503 15:22:51.807084    9866 logs.go:276] 1 containers: [a0cf5fe4185b]
	I0503 15:22:51.807160    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0503 15:22:51.823590    9866 logs.go:276] 2 containers: [b8238d29950d 4475fda52f0c]
	I0503 15:22:51.823665    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0503 15:22:51.834018    9866 logs.go:276] 1 containers: [acb5f92ab28d]
	I0503 15:22:51.834092    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0503 15:22:51.844880    9866 logs.go:276] 2 containers: [ecab85e51144 ed9610f55b0b]
	I0503 15:22:51.844952    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0503 15:22:51.854879    9866 logs.go:276] 0 containers: []
	W0503 15:22:51.854892    9866 logs.go:278] No container was found matching "kindnet"
	I0503 15:22:51.854951    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0503 15:22:51.865111    9866 logs.go:276] 2 containers: [537c7b40f8ef c67b61af436e]
	I0503 15:22:51.865129    9866 logs.go:123] Gathering logs for kube-apiserver [9153de1fe283] ...
	I0503 15:22:51.865134    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9153de1fe283"
	I0503 15:22:51.879807    9866 logs.go:123] Gathering logs for coredns [a0cf5fe4185b] ...
	I0503 15:22:51.879820    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0cf5fe4185b"
	I0503 15:22:51.891143    9866 logs.go:123] Gathering logs for kube-scheduler [4475fda52f0c] ...
	I0503 15:22:51.891155    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4475fda52f0c"
	I0503 15:22:51.914807    9866 logs.go:123] Gathering logs for kube-controller-manager [ecab85e51144] ...
	I0503 15:22:51.914821    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ecab85e51144"
	I0503 15:22:51.932070    9866 logs.go:123] Gathering logs for kube-controller-manager [ed9610f55b0b] ...
	I0503 15:22:51.932079    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed9610f55b0b"
	I0503 15:22:51.949647    9866 logs.go:123] Gathering logs for Docker ...
	I0503 15:22:51.949657    9866 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0503 15:22:51.973150    9866 logs.go:123] Gathering logs for dmesg ...
	I0503 15:22:51.973161    9866 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0503 15:22:51.977227    9866 logs.go:123] Gathering logs for etcd [c5583124a53e] ...
	I0503 15:22:51.977237    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5583124a53e"
	I0503 15:22:51.991436    9866 logs.go:123] Gathering logs for container status ...
	I0503 15:22:51.991449    9866 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0503 15:22:52.003222    9866 logs.go:123] Gathering logs for kube-apiserver [5b7eb4ef241b] ...
	I0503 15:22:52.003233    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b7eb4ef241b"
	I0503 15:22:52.041870    9866 logs.go:123] Gathering logs for etcd [40c7a6c8aada] ...
	I0503 15:22:52.041880    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 40c7a6c8aada"
	I0503 15:22:52.058920    9866 logs.go:123] Gathering logs for kube-scheduler [b8238d29950d] ...
	I0503 15:22:52.058931    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8238d29950d"
	I0503 15:22:52.070244    9866 logs.go:123] Gathering logs for storage-provisioner [537c7b40f8ef] ...
	I0503 15:22:52.070257    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 537c7b40f8ef"
	I0503 15:22:52.084961    9866 logs.go:123] Gathering logs for storage-provisioner [c67b61af436e] ...
	I0503 15:22:52.084971    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c67b61af436e"
	I0503 15:22:52.096721    9866 logs.go:123] Gathering logs for kubelet ...
	I0503 15:22:52.096732    9866 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0503 15:22:52.133732    9866 logs.go:123] Gathering logs for kube-proxy [acb5f92ab28d] ...
	I0503 15:22:52.133740    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 acb5f92ab28d"
	I0503 15:22:52.151451    9866 logs.go:123] Gathering logs for describe nodes ...
	I0503 15:22:52.151461    9866 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
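Every gathering round in this report follows the same two-step shape: docker ps -a --filter=name=k8s_<component> --format={{.ID}} to discover a component's container IDs, then docker logs --tail 400 <id> for each ID found. A hypothetical stand-alone sketch of that loop is below; the component names follow the k8s_ prefix convention seen in the log, and the sketch assumes a host with Docker rather than reproducing minikube's ssh_runner plumbing.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerIDs lists all containers (running or exited) whose name matches
// the k8s_<component> prefix, mirroring the docker ps invocations above.
func containerIDs(component string) ([]string, error) {
	out, err := exec.Command("docker", "ps", "-a",
		"--filter", "name=k8s_"+component,
		"--format", "{{.ID}}").Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	for _, c := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler"} {
		ids, err := containerIDs(c)
		if err != nil {
			fmt.Println("docker ps failed:", err)
			continue
		}
		for _, id := range ids {
			// Mirror "docker logs --tail 400 <id>" from the gathering steps;
			// errors are folded into the output for brevity.
			logs, _ := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
			fmt.Printf("=== %s [%s] ===\n%s\n", c, id, logs)
		}
	}
}

Note that the -a flag keeps exited containers in scope, which is why restarted components (e.g. the two kube-apiserver IDs for pid 9866) show up with two container IDs per round.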
	I0503 15:22:54.690886    9866 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0503 15:22:56.223877    9665 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0503 15:22:59.693026    9866 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0503 15:22:59.693209    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0503 15:22:59.711206    9866 logs.go:276] 2 containers: [9153de1fe283 5b7eb4ef241b]
	I0503 15:22:59.711295    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0503 15:22:59.725873    9866 logs.go:276] 2 containers: [40c7a6c8aada c5583124a53e]
	I0503 15:22:59.725947    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0503 15:22:59.737225    9866 logs.go:276] 1 containers: [a0cf5fe4185b]
	I0503 15:22:59.737291    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0503 15:22:59.747548    9866 logs.go:276] 2 containers: [b8238d29950d 4475fda52f0c]
	I0503 15:22:59.747614    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0503 15:22:59.758154    9866 logs.go:276] 1 containers: [acb5f92ab28d]
	I0503 15:22:59.758221    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0503 15:22:59.768739    9866 logs.go:276] 2 containers: [ecab85e51144 ed9610f55b0b]
	I0503 15:22:59.768803    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0503 15:22:59.779023    9866 logs.go:276] 0 containers: []
	W0503 15:22:59.779034    9866 logs.go:278] No container was found matching "kindnet"
	I0503 15:22:59.779088    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0503 15:22:59.789544    9866 logs.go:276] 2 containers: [537c7b40f8ef c67b61af436e]
	I0503 15:22:59.789572    9866 logs.go:123] Gathering logs for describe nodes ...
	I0503 15:22:59.789580    9866 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0503 15:22:59.824498    9866 logs.go:123] Gathering logs for kube-apiserver [9153de1fe283] ...
	I0503 15:22:59.824511    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9153de1fe283"
	I0503 15:22:59.838352    9866 logs.go:123] Gathering logs for kube-proxy [acb5f92ab28d] ...
	I0503 15:22:59.838364    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 acb5f92ab28d"
	I0503 15:22:59.853534    9866 logs.go:123] Gathering logs for kube-controller-manager [ed9610f55b0b] ...
	I0503 15:22:59.853543    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed9610f55b0b"
	I0503 15:22:59.867186    9866 logs.go:123] Gathering logs for storage-provisioner [c67b61af436e] ...
	I0503 15:22:59.867197    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c67b61af436e"
	I0503 15:22:59.878372    9866 logs.go:123] Gathering logs for etcd [40c7a6c8aada] ...
	I0503 15:22:59.878385    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 40c7a6c8aada"
	I0503 15:22:59.896657    9866 logs.go:123] Gathering logs for storage-provisioner [537c7b40f8ef] ...
	I0503 15:22:59.896668    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 537c7b40f8ef"
	I0503 15:22:59.908207    9866 logs.go:123] Gathering logs for kubelet ...
	I0503 15:22:59.908217    9866 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0503 15:22:59.944547    9866 logs.go:123] Gathering logs for dmesg ...
	I0503 15:22:59.944559    9866 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0503 15:22:59.948802    9866 logs.go:123] Gathering logs for kube-scheduler [4475fda52f0c] ...
	I0503 15:22:59.948810    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4475fda52f0c"
	I0503 15:22:59.963513    9866 logs.go:123] Gathering logs for Docker ...
	I0503 15:22:59.963525    9866 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0503 15:22:59.985268    9866 logs.go:123] Gathering logs for container status ...
	I0503 15:22:59.985278    9866 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0503 15:22:59.997048    9866 logs.go:123] Gathering logs for kube-apiserver [5b7eb4ef241b] ...
	I0503 15:22:59.997060    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b7eb4ef241b"
	I0503 15:23:00.039917    9866 logs.go:123] Gathering logs for etcd [c5583124a53e] ...
	I0503 15:23:00.039932    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5583124a53e"
	I0503 15:23:00.054084    9866 logs.go:123] Gathering logs for coredns [a0cf5fe4185b] ...
	I0503 15:23:00.054097    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0cf5fe4185b"
	I0503 15:23:00.065277    9866 logs.go:123] Gathering logs for kube-scheduler [b8238d29950d] ...
	I0503 15:23:00.065288    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8238d29950d"
	I0503 15:23:00.076640    9866 logs.go:123] Gathering logs for kube-controller-manager [ecab85e51144] ...
	I0503 15:23:00.076652    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ecab85e51144"
	I0503 15:23:01.226029    9665 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0503 15:23:01.226143    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0503 15:23:01.244730    9665 logs.go:276] 1 containers: [97b2e83dc539]
	I0503 15:23:01.244805    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0503 15:23:01.257225    9665 logs.go:276] 1 containers: [12854b004aa2]
	I0503 15:23:01.257293    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0503 15:23:01.268608    9665 logs.go:276] 4 containers: [6b26fe9dd44c bcfa459d9a7e c10faa2971c4 d411f2d0da0d]
	I0503 15:23:01.268681    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0503 15:23:01.280089    9665 logs.go:276] 1 containers: [ca00cca503fe]
	I0503 15:23:01.280151    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0503 15:23:01.291239    9665 logs.go:276] 1 containers: [bbac10efff1c]
	I0503 15:23:01.291309    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0503 15:23:01.301956    9665 logs.go:276] 1 containers: [6e6e24f8e828]
	I0503 15:23:01.302035    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0503 15:23:01.312690    9665 logs.go:276] 0 containers: []
	W0503 15:23:01.312702    9665 logs.go:278] No container was found matching "kindnet"
	I0503 15:23:01.312750    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0503 15:23:01.324611    9665 logs.go:276] 1 containers: [45a535211359]
	I0503 15:23:01.324630    9665 logs.go:123] Gathering logs for kube-scheduler [ca00cca503fe] ...
	I0503 15:23:01.324635    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca00cca503fe"
	I0503 15:23:01.339406    9665 logs.go:123] Gathering logs for container status ...
	I0503 15:23:01.339419    9665 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0503 15:23:01.351460    9665 logs.go:123] Gathering logs for kube-apiserver [97b2e83dc539] ...
	I0503 15:23:01.351474    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97b2e83dc539"
	I0503 15:23:01.370759    9665 logs.go:123] Gathering logs for coredns [c10faa2971c4] ...
	I0503 15:23:01.370772    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c10faa2971c4"
	I0503 15:23:01.382435    9665 logs.go:123] Gathering logs for kube-proxy [bbac10efff1c] ...
	I0503 15:23:01.382448    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bbac10efff1c"
	I0503 15:23:01.397700    9665 logs.go:123] Gathering logs for kube-controller-manager [6e6e24f8e828] ...
	I0503 15:23:01.397712    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e6e24f8e828"
	I0503 15:23:01.416046    9665 logs.go:123] Gathering logs for Docker ...
	I0503 15:23:01.416056    9665 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0503 15:23:01.439535    9665 logs.go:123] Gathering logs for describe nodes ...
	I0503 15:23:01.439546    9665 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0503 15:23:01.473052    9665 logs.go:123] Gathering logs for coredns [d411f2d0da0d] ...
	I0503 15:23:01.473066    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d411f2d0da0d"
	I0503 15:23:01.484990    9665 logs.go:123] Gathering logs for storage-provisioner [45a535211359] ...
	I0503 15:23:01.485001    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45a535211359"
	I0503 15:23:01.496612    9665 logs.go:123] Gathering logs for kubelet ...
	I0503 15:23:01.496622    9665 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0503 15:23:01.531660    9665 logs.go:123] Gathering logs for dmesg ...
	I0503 15:23:01.531671    9665 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0503 15:23:01.535742    9665 logs.go:123] Gathering logs for etcd [12854b004aa2] ...
	I0503 15:23:01.535751    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 12854b004aa2"
	I0503 15:23:01.549977    9665 logs.go:123] Gathering logs for coredns [6b26fe9dd44c] ...
	I0503 15:23:01.549990    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b26fe9dd44c"
	I0503 15:23:01.562525    9665 logs.go:123] Gathering logs for coredns [bcfa459d9a7e] ...
	I0503 15:23:01.562538    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bcfa459d9a7e"
	I0503 15:23:04.075459    9665 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0503 15:23:02.595834    9866 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0503 15:23:09.078013    9665 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0503 15:23:09.078226    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0503 15:23:09.097264    9665 logs.go:276] 1 containers: [97b2e83dc539]
	I0503 15:23:09.097359    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0503 15:23:09.112064    9665 logs.go:276] 1 containers: [12854b004aa2]
	I0503 15:23:09.112139    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0503 15:23:09.124249    9665 logs.go:276] 4 containers: [6b26fe9dd44c bcfa459d9a7e c10faa2971c4 d411f2d0da0d]
	I0503 15:23:09.124316    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0503 15:23:09.134533    9665 logs.go:276] 1 containers: [ca00cca503fe]
	I0503 15:23:09.134612    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0503 15:23:09.145000    9665 logs.go:276] 1 containers: [bbac10efff1c]
	I0503 15:23:09.145071    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0503 15:23:09.160073    9665 logs.go:276] 1 containers: [6e6e24f8e828]
	I0503 15:23:09.160141    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0503 15:23:09.170780    9665 logs.go:276] 0 containers: []
	W0503 15:23:09.170790    9665 logs.go:278] No container was found matching "kindnet"
	I0503 15:23:09.170845    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0503 15:23:09.181089    9665 logs.go:276] 1 containers: [45a535211359]
	I0503 15:23:09.181104    9665 logs.go:123] Gathering logs for coredns [6b26fe9dd44c] ...
	I0503 15:23:09.181109    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b26fe9dd44c"
	I0503 15:23:09.193869    9665 logs.go:123] Gathering logs for coredns [d411f2d0da0d] ...
	I0503 15:23:09.193882    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d411f2d0da0d"
	I0503 15:23:09.205519    9665 logs.go:123] Gathering logs for kube-scheduler [ca00cca503fe] ...
	I0503 15:23:09.205530    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca00cca503fe"
	I0503 15:23:09.220439    9665 logs.go:123] Gathering logs for kube-controller-manager [6e6e24f8e828] ...
	I0503 15:23:09.220451    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e6e24f8e828"
	I0503 15:23:09.238477    9665 logs.go:123] Gathering logs for Docker ...
	I0503 15:23:09.238488    9665 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0503 15:23:09.261631    9665 logs.go:123] Gathering logs for container status ...
	I0503 15:23:09.261642    9665 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0503 15:23:09.295849    9665 logs.go:123] Gathering logs for kubelet ...
	I0503 15:23:09.295861    9665 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0503 15:23:09.332501    9665 logs.go:123] Gathering logs for kube-apiserver [97b2e83dc539] ...
	I0503 15:23:09.332510    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97b2e83dc539"
	I0503 15:23:09.348730    9665 logs.go:123] Gathering logs for etcd [12854b004aa2] ...
	I0503 15:23:09.348744    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 12854b004aa2"
	I0503 15:23:09.363292    9665 logs.go:123] Gathering logs for storage-provisioner [45a535211359] ...
	I0503 15:23:09.363305    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45a535211359"
	I0503 15:23:09.374548    9665 logs.go:123] Gathering logs for describe nodes ...
	I0503 15:23:09.374558    9665 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0503 15:23:07.598098    9866 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0503 15:23:07.598337    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0503 15:23:07.621595    9866 logs.go:276] 2 containers: [9153de1fe283 5b7eb4ef241b]
	I0503 15:23:07.621696    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0503 15:23:07.637019    9866 logs.go:276] 2 containers: [40c7a6c8aada c5583124a53e]
	I0503 15:23:07.637093    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0503 15:23:07.649667    9866 logs.go:276] 1 containers: [a0cf5fe4185b]
	I0503 15:23:07.649729    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0503 15:23:07.660938    9866 logs.go:276] 2 containers: [b8238d29950d 4475fda52f0c]
	I0503 15:23:07.661004    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0503 15:23:07.672826    9866 logs.go:276] 1 containers: [acb5f92ab28d]
	I0503 15:23:07.672887    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0503 15:23:07.683360    9866 logs.go:276] 2 containers: [ecab85e51144 ed9610f55b0b]
	I0503 15:23:07.683415    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0503 15:23:07.696036    9866 logs.go:276] 0 containers: []
	W0503 15:23:07.696049    9866 logs.go:278] No container was found matching "kindnet"
	I0503 15:23:07.696102    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0503 15:23:07.706406    9866 logs.go:276] 2 containers: [537c7b40f8ef c67b61af436e]
	I0503 15:23:07.706426    9866 logs.go:123] Gathering logs for kube-apiserver [5b7eb4ef241b] ...
	I0503 15:23:07.706431    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b7eb4ef241b"
	I0503 15:23:07.744311    9866 logs.go:123] Gathering logs for kube-controller-manager [ed9610f55b0b] ...
	I0503 15:23:07.744321    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed9610f55b0b"
	I0503 15:23:07.757722    9866 logs.go:123] Gathering logs for container status ...
	I0503 15:23:07.757732    9866 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0503 15:23:07.771091    9866 logs.go:123] Gathering logs for describe nodes ...
	I0503 15:23:07.771103    9866 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0503 15:23:07.806777    9866 logs.go:123] Gathering logs for kube-apiserver [9153de1fe283] ...
	I0503 15:23:07.806789    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9153de1fe283"
	I0503 15:23:07.820551    9866 logs.go:123] Gathering logs for coredns [a0cf5fe4185b] ...
	I0503 15:23:07.820562    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0cf5fe4185b"
	I0503 15:23:07.832889    9866 logs.go:123] Gathering logs for kube-controller-manager [ecab85e51144] ...
	I0503 15:23:07.832904    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ecab85e51144"
	I0503 15:23:07.850239    9866 logs.go:123] Gathering logs for storage-provisioner [c67b61af436e] ...
	I0503 15:23:07.850249    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c67b61af436e"
	I0503 15:23:07.861825    9866 logs.go:123] Gathering logs for kubelet ...
	I0503 15:23:07.861835    9866 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0503 15:23:07.900546    9866 logs.go:123] Gathering logs for etcd [c5583124a53e] ...
	I0503 15:23:07.900555    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5583124a53e"
	I0503 15:23:07.916227    9866 logs.go:123] Gathering logs for kube-proxy [acb5f92ab28d] ...
	I0503 15:23:07.916236    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 acb5f92ab28d"
	I0503 15:23:07.927551    9866 logs.go:123] Gathering logs for storage-provisioner [537c7b40f8ef] ...
	I0503 15:23:07.927563    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 537c7b40f8ef"
	I0503 15:23:07.943375    9866 logs.go:123] Gathering logs for Docker ...
	I0503 15:23:07.943384    9866 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0503 15:23:07.965076    9866 logs.go:123] Gathering logs for dmesg ...
	I0503 15:23:07.965083    9866 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0503 15:23:07.969458    9866 logs.go:123] Gathering logs for etcd [40c7a6c8aada] ...
	I0503 15:23:07.969467    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 40c7a6c8aada"
	I0503 15:23:07.983462    9866 logs.go:123] Gathering logs for kube-scheduler [b8238d29950d] ...
	I0503 15:23:07.983473    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8238d29950d"
	I0503 15:23:07.999893    9866 logs.go:123] Gathering logs for kube-scheduler [4475fda52f0c] ...
	I0503 15:23:07.999903    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4475fda52f0c"
	I0503 15:23:10.516274    9866 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0503 15:23:09.411717    9665 logs.go:123] Gathering logs for coredns [bcfa459d9a7e] ...
	I0503 15:23:09.411730    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bcfa459d9a7e"
	I0503 15:23:09.431006    9665 logs.go:123] Gathering logs for kube-proxy [bbac10efff1c] ...
	I0503 15:23:09.431020    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bbac10efff1c"
	I0503 15:23:09.445988    9665 logs.go:123] Gathering logs for dmesg ...
	I0503 15:23:09.445997    9665 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0503 15:23:09.450697    9665 logs.go:123] Gathering logs for coredns [c10faa2971c4] ...
	I0503 15:23:09.450707    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c10faa2971c4"
	I0503 15:23:11.964354    9665 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0503 15:23:15.518809    9866 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0503 15:23:15.519211    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0503 15:23:15.554497    9866 logs.go:276] 2 containers: [9153de1fe283 5b7eb4ef241b]
	I0503 15:23:15.554628    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0503 15:23:15.578563    9866 logs.go:276] 2 containers: [40c7a6c8aada c5583124a53e]
	I0503 15:23:15.578662    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0503 15:23:15.595108    9866 logs.go:276] 1 containers: [a0cf5fe4185b]
	I0503 15:23:15.595189    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0503 15:23:15.608685    9866 logs.go:276] 2 containers: [b8238d29950d 4475fda52f0c]
	I0503 15:23:15.608764    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0503 15:23:15.619105    9866 logs.go:276] 1 containers: [acb5f92ab28d]
	I0503 15:23:15.619178    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0503 15:23:15.630386    9866 logs.go:276] 2 containers: [ecab85e51144 ed9610f55b0b]
	I0503 15:23:15.630456    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0503 15:23:15.640841    9866 logs.go:276] 0 containers: []
	W0503 15:23:15.640856    9866 logs.go:278] No container was found matching "kindnet"
	I0503 15:23:15.640921    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0503 15:23:15.652081    9866 logs.go:276] 2 containers: [537c7b40f8ef c67b61af436e]
	I0503 15:23:15.652100    9866 logs.go:123] Gathering logs for describe nodes ...
	I0503 15:23:15.652106    9866 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0503 15:23:15.688801    9866 logs.go:123] Gathering logs for etcd [40c7a6c8aada] ...
	I0503 15:23:15.688816    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 40c7a6c8aada"
	I0503 15:23:15.703167    9866 logs.go:123] Gathering logs for etcd [c5583124a53e] ...
	I0503 15:23:15.703178    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5583124a53e"
	I0503 15:23:15.717984    9866 logs.go:123] Gathering logs for dmesg ...
	I0503 15:23:15.717998    9866 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0503 15:23:15.722597    9866 logs.go:123] Gathering logs for kube-apiserver [9153de1fe283] ...
	I0503 15:23:15.722604    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9153de1fe283"
	I0503 15:23:15.736994    9866 logs.go:123] Gathering logs for kube-apiserver [5b7eb4ef241b] ...
	I0503 15:23:15.737007    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b7eb4ef241b"
	I0503 15:23:15.775656    9866 logs.go:123] Gathering logs for Docker ...
	I0503 15:23:15.775668    9866 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0503 15:23:15.798400    9866 logs.go:123] Gathering logs for coredns [a0cf5fe4185b] ...
	I0503 15:23:15.798407    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0cf5fe4185b"
	I0503 15:23:15.809555    9866 logs.go:123] Gathering logs for kube-scheduler [b8238d29950d] ...
	I0503 15:23:15.809567    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8238d29950d"
	I0503 15:23:15.821551    9866 logs.go:123] Gathering logs for kube-proxy [acb5f92ab28d] ...
	I0503 15:23:15.821563    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 acb5f92ab28d"
	I0503 15:23:15.833240    9866 logs.go:123] Gathering logs for kube-controller-manager [ed9610f55b0b] ...
	I0503 15:23:15.833251    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed9610f55b0b"
	I0503 15:23:15.846451    9866 logs.go:123] Gathering logs for kubelet ...
	I0503 15:23:15.846461    9866 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0503 15:23:15.885730    9866 logs.go:123] Gathering logs for kube-scheduler [4475fda52f0c] ...
	I0503 15:23:15.885741    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4475fda52f0c"
	I0503 15:23:15.900621    9866 logs.go:123] Gathering logs for kube-controller-manager [ecab85e51144] ...
	I0503 15:23:15.900632    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ecab85e51144"
	I0503 15:23:15.918011    9866 logs.go:123] Gathering logs for storage-provisioner [537c7b40f8ef] ...
	I0503 15:23:15.918021    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 537c7b40f8ef"
	I0503 15:23:15.929492    9866 logs.go:123] Gathering logs for storage-provisioner [c67b61af436e] ...
	I0503 15:23:15.929502    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c67b61af436e"
	I0503 15:23:15.941982    9866 logs.go:123] Gathering logs for container status ...
	I0503 15:23:15.941995    9866 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0503 15:23:16.966861    9665 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0503 15:23:16.967098    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0503 15:23:16.990946    9665 logs.go:276] 1 containers: [97b2e83dc539]
	I0503 15:23:16.991048    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0503 15:23:17.006262    9665 logs.go:276] 1 containers: [12854b004aa2]
	I0503 15:23:17.006342    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0503 15:23:17.019096    9665 logs.go:276] 4 containers: [6b26fe9dd44c bcfa459d9a7e c10faa2971c4 d411f2d0da0d]
	I0503 15:23:17.019165    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0503 15:23:17.032684    9665 logs.go:276] 1 containers: [ca00cca503fe]
	I0503 15:23:17.032754    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0503 15:23:17.043494    9665 logs.go:276] 1 containers: [bbac10efff1c]
	I0503 15:23:17.043559    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0503 15:23:17.054407    9665 logs.go:276] 1 containers: [6e6e24f8e828]
	I0503 15:23:17.054473    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0503 15:23:17.065253    9665 logs.go:276] 0 containers: []
	W0503 15:23:17.065266    9665 logs.go:278] No container was found matching "kindnet"
	I0503 15:23:17.065325    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0503 15:23:17.076081    9665 logs.go:276] 1 containers: [45a535211359]
	I0503 15:23:17.076098    9665 logs.go:123] Gathering logs for kube-apiserver [97b2e83dc539] ...
	I0503 15:23:17.076104    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97b2e83dc539"
	I0503 15:23:17.098439    9665 logs.go:123] Gathering logs for etcd [12854b004aa2] ...
	I0503 15:23:17.098449    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 12854b004aa2"
	I0503 15:23:17.112051    9665 logs.go:123] Gathering logs for coredns [6b26fe9dd44c] ...
	I0503 15:23:17.112064    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b26fe9dd44c"
	I0503 15:23:17.124076    9665 logs.go:123] Gathering logs for coredns [d411f2d0da0d] ...
	I0503 15:23:17.124089    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d411f2d0da0d"
	I0503 15:23:17.135723    9665 logs.go:123] Gathering logs for kube-proxy [bbac10efff1c] ...
	I0503 15:23:17.135736    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bbac10efff1c"
	I0503 15:23:17.147742    9665 logs.go:123] Gathering logs for storage-provisioner [45a535211359] ...
	I0503 15:23:17.147755    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45a535211359"
	I0503 15:23:17.159301    9665 logs.go:123] Gathering logs for dmesg ...
	I0503 15:23:17.159315    9665 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0503 15:23:17.164129    9665 logs.go:123] Gathering logs for describe nodes ...
	I0503 15:23:17.164139    9665 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0503 15:23:17.199727    9665 logs.go:123] Gathering logs for Docker ...
	I0503 15:23:17.199742    9665 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0503 15:23:17.222895    9665 logs.go:123] Gathering logs for container status ...
	I0503 15:23:17.222904    9665 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0503 15:23:17.234106    9665 logs.go:123] Gathering logs for coredns [c10faa2971c4] ...
	I0503 15:23:17.234115    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c10faa2971c4"
	I0503 15:23:17.247783    9665 logs.go:123] Gathering logs for kube-scheduler [ca00cca503fe] ...
	I0503 15:23:17.247796    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca00cca503fe"
	I0503 15:23:17.262014    9665 logs.go:123] Gathering logs for kubelet ...
	I0503 15:23:17.262027    9665 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0503 15:23:17.296812    9665 logs.go:123] Gathering logs for kube-controller-manager [6e6e24f8e828] ...
	I0503 15:23:17.296824    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e6e24f8e828"
	I0503 15:23:17.314461    9665 logs.go:123] Gathering logs for coredns [bcfa459d9a7e] ...
	I0503 15:23:17.314471    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bcfa459d9a7e"
	I0503 15:23:18.456008    9866 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0503 15:23:19.827949    9665 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0503 15:23:23.458410    9866 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0503 15:23:23.458524    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0503 15:23:23.469394    9866 logs.go:276] 2 containers: [9153de1fe283 5b7eb4ef241b]
	I0503 15:23:23.469462    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0503 15:23:23.479969    9866 logs.go:276] 2 containers: [40c7a6c8aada c5583124a53e]
	I0503 15:23:23.480044    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0503 15:23:23.492272    9866 logs.go:276] 1 containers: [a0cf5fe4185b]
	I0503 15:23:23.492345    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0503 15:23:23.502739    9866 logs.go:276] 2 containers: [b8238d29950d 4475fda52f0c]
	I0503 15:23:23.502811    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0503 15:23:23.513451    9866 logs.go:276] 1 containers: [acb5f92ab28d]
	I0503 15:23:23.513510    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0503 15:23:23.523642    9866 logs.go:276] 2 containers: [ecab85e51144 ed9610f55b0b]
	I0503 15:23:23.523712    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0503 15:23:23.533910    9866 logs.go:276] 0 containers: []
	W0503 15:23:23.533921    9866 logs.go:278] No container was found matching "kindnet"
	I0503 15:23:23.533978    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0503 15:23:23.544202    9866 logs.go:276] 2 containers: [537c7b40f8ef c67b61af436e]
	I0503 15:23:23.544221    9866 logs.go:123] Gathering logs for kube-scheduler [b8238d29950d] ...
	I0503 15:23:23.544226    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8238d29950d"
	I0503 15:23:23.555705    9866 logs.go:123] Gathering logs for kube-controller-manager [ed9610f55b0b] ...
	I0503 15:23:23.555715    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed9610f55b0b"
	I0503 15:23:23.568970    9866 logs.go:123] Gathering logs for storage-provisioner [537c7b40f8ef] ...
	I0503 15:23:23.568981    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 537c7b40f8ef"
	I0503 15:23:23.580199    9866 logs.go:123] Gathering logs for container status ...
	I0503 15:23:23.580210    9866 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0503 15:23:23.593836    9866 logs.go:123] Gathering logs for etcd [40c7a6c8aada] ...
	I0503 15:23:23.593848    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 40c7a6c8aada"
	I0503 15:23:23.608781    9866 logs.go:123] Gathering logs for coredns [a0cf5fe4185b] ...
	I0503 15:23:23.608792    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0cf5fe4185b"
	I0503 15:23:23.619762    9866 logs.go:123] Gathering logs for kube-scheduler [4475fda52f0c] ...
	I0503 15:23:23.619773    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4475fda52f0c"
	I0503 15:23:23.637345    9866 logs.go:123] Gathering logs for kube-controller-manager [ecab85e51144] ...
	I0503 15:23:23.637358    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ecab85e51144"
	I0503 15:23:23.654746    9866 logs.go:123] Gathering logs for kubelet ...
	I0503 15:23:23.654761    9866 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0503 15:23:23.692486    9866 logs.go:123] Gathering logs for describe nodes ...
	I0503 15:23:23.692496    9866 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0503 15:23:23.726493    9866 logs.go:123] Gathering logs for kube-apiserver [5b7eb4ef241b] ...
	I0503 15:23:23.726504    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b7eb4ef241b"
	I0503 15:23:23.764397    9866 logs.go:123] Gathering logs for etcd [c5583124a53e] ...
	I0503 15:23:23.764411    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5583124a53e"
	I0503 15:23:23.779117    9866 logs.go:123] Gathering logs for kube-proxy [acb5f92ab28d] ...
	I0503 15:23:23.779128    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 acb5f92ab28d"
	I0503 15:23:23.790530    9866 logs.go:123] Gathering logs for dmesg ...
	I0503 15:23:23.790541    9866 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0503 15:23:23.794631    9866 logs.go:123] Gathering logs for kube-apiserver [9153de1fe283] ...
	I0503 15:23:23.794638    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9153de1fe283"
	I0503 15:23:23.808281    9866 logs.go:123] Gathering logs for storage-provisioner [c67b61af436e] ...
	I0503 15:23:23.808291    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c67b61af436e"
	I0503 15:23:23.819079    9866 logs.go:123] Gathering logs for Docker ...
	I0503 15:23:23.819090    9866 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0503 15:23:24.830179    9665 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0503 15:23:24.830326    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0503 15:23:24.842909    9665 logs.go:276] 1 containers: [97b2e83dc539]
	I0503 15:23:24.842981    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0503 15:23:24.853351    9665 logs.go:276] 1 containers: [12854b004aa2]
	I0503 15:23:24.853423    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0503 15:23:24.863602    9665 logs.go:276] 4 containers: [6b26fe9dd44c bcfa459d9a7e c10faa2971c4 d411f2d0da0d]
	I0503 15:23:24.863672    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0503 15:23:24.877884    9665 logs.go:276] 1 containers: [ca00cca503fe]
	I0503 15:23:24.877949    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0503 15:23:24.888192    9665 logs.go:276] 1 containers: [bbac10efff1c]
	I0503 15:23:24.888260    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0503 15:23:24.898911    9665 logs.go:276] 1 containers: [6e6e24f8e828]
	I0503 15:23:24.898979    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0503 15:23:24.909162    9665 logs.go:276] 0 containers: []
	W0503 15:23:24.909173    9665 logs.go:278] No container was found matching "kindnet"
	I0503 15:23:24.909227    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0503 15:23:24.919342    9665 logs.go:276] 1 containers: [45a535211359]
	I0503 15:23:24.919358    9665 logs.go:123] Gathering logs for coredns [6b26fe9dd44c] ...
	I0503 15:23:24.919363    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b26fe9dd44c"
	I0503 15:23:24.931236    9665 logs.go:123] Gathering logs for coredns [bcfa459d9a7e] ...
	I0503 15:23:24.931247    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bcfa459d9a7e"
	I0503 15:23:24.942851    9665 logs.go:123] Gathering logs for kube-proxy [bbac10efff1c] ...
	I0503 15:23:24.942861    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bbac10efff1c"
	I0503 15:23:24.958889    9665 logs.go:123] Gathering logs for describe nodes ...
	I0503 15:23:24.958900    9665 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0503 15:23:24.995716    9665 logs.go:123] Gathering logs for etcd [12854b004aa2] ...
	I0503 15:23:24.995727    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 12854b004aa2"
	I0503 15:23:25.013394    9665 logs.go:123] Gathering logs for kubelet ...
	I0503 15:23:25.013405    9665 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0503 15:23:25.049049    9665 logs.go:123] Gathering logs for storage-provisioner [45a535211359] ...
	I0503 15:23:25.049070    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45a535211359"
	I0503 15:23:25.061118    9665 logs.go:123] Gathering logs for coredns [d411f2d0da0d] ...
	I0503 15:23:25.061130    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d411f2d0da0d"
	I0503 15:23:25.079538    9665 logs.go:123] Gathering logs for kube-scheduler [ca00cca503fe] ...
	I0503 15:23:25.079551    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca00cca503fe"
	I0503 15:23:25.101899    9665 logs.go:123] Gathering logs for kube-controller-manager [6e6e24f8e828] ...
	I0503 15:23:25.101912    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e6e24f8e828"
	I0503 15:23:25.119488    9665 logs.go:123] Gathering logs for container status ...
	I0503 15:23:25.119498    9665 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0503 15:23:25.131690    9665 logs.go:123] Gathering logs for kube-apiserver [97b2e83dc539] ...
	I0503 15:23:25.131704    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97b2e83dc539"
	I0503 15:23:25.145962    9665 logs.go:123] Gathering logs for coredns [c10faa2971c4] ...
	I0503 15:23:25.145974    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c10faa2971c4"
	I0503 15:23:25.158410    9665 logs.go:123] Gathering logs for dmesg ...
	I0503 15:23:25.158423    9665 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0503 15:23:25.162909    9665 logs.go:123] Gathering logs for Docker ...
	I0503 15:23:25.162917    9665 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0503 15:23:27.687584    9665 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0503 15:23:26.344339    9866 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0503 15:23:32.690121    9665 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0503 15:23:32.690302    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0503 15:23:32.706315    9665 logs.go:276] 1 containers: [97b2e83dc539]
	I0503 15:23:32.706401    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0503 15:23:32.718991    9665 logs.go:276] 1 containers: [12854b004aa2]
	I0503 15:23:32.719061    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0503 15:23:32.729899    9665 logs.go:276] 4 containers: [6b26fe9dd44c bcfa459d9a7e c10faa2971c4 d411f2d0da0d]
	I0503 15:23:32.729982    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0503 15:23:32.740745    9665 logs.go:276] 1 containers: [ca00cca503fe]
	I0503 15:23:32.740821    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0503 15:23:32.753795    9665 logs.go:276] 1 containers: [bbac10efff1c]
	I0503 15:23:32.753874    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0503 15:23:32.764898    9665 logs.go:276] 1 containers: [6e6e24f8e828]
	I0503 15:23:32.764968    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0503 15:23:32.774946    9665 logs.go:276] 0 containers: []
	W0503 15:23:32.774960    9665 logs.go:278] No container was found matching "kindnet"
	I0503 15:23:32.775017    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0503 15:23:32.784745    9665 logs.go:276] 1 containers: [45a535211359]
	I0503 15:23:32.784763    9665 logs.go:123] Gathering logs for dmesg ...
	I0503 15:23:32.784768    9665 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0503 15:23:32.789372    9665 logs.go:123] Gathering logs for coredns [d411f2d0da0d] ...
	I0503 15:23:32.789381    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d411f2d0da0d"
	I0503 15:23:32.801316    9665 logs.go:123] Gathering logs for kube-proxy [bbac10efff1c] ...
	I0503 15:23:32.801326    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bbac10efff1c"
	I0503 15:23:32.812834    9665 logs.go:123] Gathering logs for kube-controller-manager [6e6e24f8e828] ...
	I0503 15:23:32.812847    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e6e24f8e828"
	I0503 15:23:32.830272    9665 logs.go:123] Gathering logs for Docker ...
	I0503 15:23:32.830284    9665 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0503 15:23:32.854681    9665 logs.go:123] Gathering logs for coredns [6b26fe9dd44c] ...
	I0503 15:23:32.854689    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b26fe9dd44c"
	I0503 15:23:32.866046    9665 logs.go:123] Gathering logs for coredns [bcfa459d9a7e] ...
	I0503 15:23:32.866059    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bcfa459d9a7e"
	I0503 15:23:32.877242    9665 logs.go:123] Gathering logs for coredns [c10faa2971c4] ...
	I0503 15:23:32.877255    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c10faa2971c4"
	I0503 15:23:32.891262    9665 logs.go:123] Gathering logs for kube-scheduler [ca00cca503fe] ...
	I0503 15:23:32.891274    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca00cca503fe"
	I0503 15:23:32.906042    9665 logs.go:123] Gathering logs for storage-provisioner [45a535211359] ...
	I0503 15:23:32.906050    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45a535211359"
	I0503 15:23:32.917885    9665 logs.go:123] Gathering logs for container status ...
	I0503 15:23:32.917896    9665 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0503 15:23:32.930496    9665 logs.go:123] Gathering logs for kubelet ...
	I0503 15:23:32.930507    9665 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0503 15:23:32.966760    9665 logs.go:123] Gathering logs for describe nodes ...
	I0503 15:23:32.966768    9665 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0503 15:23:33.001825    9665 logs.go:123] Gathering logs for kube-apiserver [97b2e83dc539] ...
	I0503 15:23:33.001836    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97b2e83dc539"
	I0503 15:23:33.020466    9665 logs.go:123] Gathering logs for etcd [12854b004aa2] ...
	I0503 15:23:33.020478    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 12854b004aa2"
	I0503 15:23:31.346593    9866 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0503 15:23:31.346738    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0503 15:23:31.361526    9866 logs.go:276] 2 containers: [9153de1fe283 5b7eb4ef241b]
	I0503 15:23:31.361590    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0503 15:23:31.383161    9866 logs.go:276] 2 containers: [40c7a6c8aada c5583124a53e]
	I0503 15:23:31.383233    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0503 15:23:31.412840    9866 logs.go:276] 1 containers: [a0cf5fe4185b]
	I0503 15:23:31.412906    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0503 15:23:31.423413    9866 logs.go:276] 2 containers: [b8238d29950d 4475fda52f0c]
	I0503 15:23:31.423481    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0503 15:23:31.433904    9866 logs.go:276] 1 containers: [acb5f92ab28d]
	I0503 15:23:31.433966    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0503 15:23:31.444317    9866 logs.go:276] 2 containers: [ecab85e51144 ed9610f55b0b]
	I0503 15:23:31.444386    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0503 15:23:31.453793    9866 logs.go:276] 0 containers: []
	W0503 15:23:31.453803    9866 logs.go:278] No container was found matching "kindnet"
	I0503 15:23:31.453856    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0503 15:23:31.463792    9866 logs.go:276] 2 containers: [537c7b40f8ef c67b61af436e]
	I0503 15:23:31.463814    9866 logs.go:123] Gathering logs for kube-scheduler [4475fda52f0c] ...
	I0503 15:23:31.463820    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4475fda52f0c"
	I0503 15:23:31.479042    9866 logs.go:123] Gathering logs for kube-proxy [acb5f92ab28d] ...
	I0503 15:23:31.479053    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 acb5f92ab28d"
	I0503 15:23:31.491341    9866 logs.go:123] Gathering logs for storage-provisioner [537c7b40f8ef] ...
	I0503 15:23:31.491352    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 537c7b40f8ef"
	I0503 15:23:31.503535    9866 logs.go:123] Gathering logs for storage-provisioner [c67b61af436e] ...
	I0503 15:23:31.503549    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c67b61af436e"
	I0503 15:23:31.514818    9866 logs.go:123] Gathering logs for container status ...
	I0503 15:23:31.514834    9866 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0503 15:23:31.526496    9866 logs.go:123] Gathering logs for dmesg ...
	I0503 15:23:31.526511    9866 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0503 15:23:31.530535    9866 logs.go:123] Gathering logs for etcd [c5583124a53e] ...
	I0503 15:23:31.530541    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5583124a53e"
	I0503 15:23:31.545060    9866 logs.go:123] Gathering logs for kube-scheduler [b8238d29950d] ...
	I0503 15:23:31.545070    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8238d29950d"
	I0503 15:23:31.556870    9866 logs.go:123] Gathering logs for kube-controller-manager [ecab85e51144] ...
	I0503 15:23:31.556880    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ecab85e51144"
	I0503 15:23:31.573743    9866 logs.go:123] Gathering logs for kube-apiserver [9153de1fe283] ...
	I0503 15:23:31.573753    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9153de1fe283"
	I0503 15:23:31.587811    9866 logs.go:123] Gathering logs for kube-apiserver [5b7eb4ef241b] ...
	I0503 15:23:31.587822    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b7eb4ef241b"
	I0503 15:23:31.626584    9866 logs.go:123] Gathering logs for etcd [40c7a6c8aada] ...
	I0503 15:23:31.626596    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 40c7a6c8aada"
	I0503 15:23:31.640270    9866 logs.go:123] Gathering logs for Docker ...
	I0503 15:23:31.640280    9866 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0503 15:23:31.662598    9866 logs.go:123] Gathering logs for kubelet ...
	I0503 15:23:31.662605    9866 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0503 15:23:31.700455    9866 logs.go:123] Gathering logs for describe nodes ...
	I0503 15:23:31.700463    9866 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0503 15:23:31.734423    9866 logs.go:123] Gathering logs for coredns [a0cf5fe4185b] ...
	I0503 15:23:31.734435    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0cf5fe4185b"
	I0503 15:23:31.746332    9866 logs.go:123] Gathering logs for kube-controller-manager [ed9610f55b0b] ...
	I0503 15:23:31.746344    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed9610f55b0b"
	I0503 15:23:34.262392    9866 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0503 15:23:35.540707    9665 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0503 15:23:39.264750    9866 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0503 15:23:39.264883    9866 kubeadm.go:591] duration metric: took 4m4.508080917s to restartPrimaryControlPlane
	W0503 15:23:39.265020    9866 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0503 15:23:39.265082    9866 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0503 15:23:40.341019    9866 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force": (1.075946834s)
	I0503 15:23:40.341105    9866 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0503 15:23:40.346104    9866 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0503 15:23:40.349012    9866 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0503 15:23:40.351632    9866 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0503 15:23:40.351639    9866 kubeadm.go:156] found existing configuration files:
	
	I0503 15:23:40.351660    9866 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51403 /etc/kubernetes/admin.conf
	I0503 15:23:40.353959    9866 kubeadm.go:162] "https://control-plane.minikube.internal:51403" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:51403 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0503 15:23:40.353981    9866 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0503 15:23:40.356590    9866 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51403 /etc/kubernetes/kubelet.conf
	I0503 15:23:40.359389    9866 kubeadm.go:162] "https://control-plane.minikube.internal:51403" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:51403 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0503 15:23:40.359409    9866 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0503 15:23:40.361912    9866 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51403 /etc/kubernetes/controller-manager.conf
	I0503 15:23:40.364973    9866 kubeadm.go:162] "https://control-plane.minikube.internal:51403" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:51403 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0503 15:23:40.364995    9866 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0503 15:23:40.368091    9866 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51403 /etc/kubernetes/scheduler.conf
	I0503 15:23:40.370656    9866 kubeadm.go:162] "https://control-plane.minikube.internal:51403" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:51403 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0503 15:23:40.370679    9866 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0503 15:23:40.373505    9866 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0503 15:23:40.391867    9866 kubeadm.go:309] [init] Using Kubernetes version: v1.24.1
	I0503 15:23:40.391903    9866 kubeadm.go:309] [preflight] Running pre-flight checks
	I0503 15:23:40.443997    9866 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0503 15:23:40.444056    9866 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0503 15:23:40.444112    9866 kubeadm.go:309] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0503 15:23:40.493153    9866 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0503 15:23:40.497283    9866 out.go:204]   - Generating certificates and keys ...
	I0503 15:23:40.497378    9866 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0503 15:23:40.497450    9866 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0503 15:23:40.497541    9866 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0503 15:23:40.497581    9866 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0503 15:23:40.497655    9866 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0503 15:23:40.497768    9866 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0503 15:23:40.497881    9866 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0503 15:23:40.497998    9866 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0503 15:23:40.498062    9866 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0503 15:23:40.498123    9866 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0503 15:23:40.498207    9866 kubeadm.go:309] [certs] Using the existing "sa" key
	I0503 15:23:40.498299    9866 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0503 15:23:40.824807    9866 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0503 15:23:40.925458    9866 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0503 15:23:41.036175    9866 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0503 15:23:41.112353    9866 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0503 15:23:41.139434    9866 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0503 15:23:41.140020    9866 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0503 15:23:41.140039    9866 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0503 15:23:41.230797    9866 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0503 15:23:40.542793    9665 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0503 15:23:40.542882    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0503 15:23:40.557745    9665 logs.go:276] 1 containers: [97b2e83dc539]
	I0503 15:23:40.557817    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0503 15:23:40.568495    9665 logs.go:276] 1 containers: [12854b004aa2]
	I0503 15:23:40.568566    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0503 15:23:40.579212    9665 logs.go:276] 4 containers: [6b26fe9dd44c bcfa459d9a7e c10faa2971c4 d411f2d0da0d]
	I0503 15:23:40.579274    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0503 15:23:40.597205    9665 logs.go:276] 1 containers: [ca00cca503fe]
	I0503 15:23:40.597275    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0503 15:23:40.608077    9665 logs.go:276] 1 containers: [bbac10efff1c]
	I0503 15:23:40.608146    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0503 15:23:40.619017    9665 logs.go:276] 1 containers: [6e6e24f8e828]
	I0503 15:23:40.619083    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0503 15:23:40.629516    9665 logs.go:276] 0 containers: []
	W0503 15:23:40.629529    9665 logs.go:278] No container was found matching "kindnet"
	I0503 15:23:40.629585    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0503 15:23:40.644929    9665 logs.go:276] 1 containers: [45a535211359]
	I0503 15:23:40.644944    9665 logs.go:123] Gathering logs for kube-apiserver [97b2e83dc539] ...
	I0503 15:23:40.644949    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97b2e83dc539"
	I0503 15:23:40.659545    9665 logs.go:123] Gathering logs for etcd [12854b004aa2] ...
	I0503 15:23:40.659559    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 12854b004aa2"
	I0503 15:23:40.673544    9665 logs.go:123] Gathering logs for storage-provisioner [45a535211359] ...
	I0503 15:23:40.673554    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45a535211359"
	I0503 15:23:40.685443    9665 logs.go:123] Gathering logs for container status ...
	I0503 15:23:40.685459    9665 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0503 15:23:40.697145    9665 logs.go:123] Gathering logs for kubelet ...
	I0503 15:23:40.697156    9665 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0503 15:23:40.733817    9665 logs.go:123] Gathering logs for dmesg ...
	I0503 15:23:40.733827    9665 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0503 15:23:40.738313    9665 logs.go:123] Gathering logs for kube-controller-manager [6e6e24f8e828] ...
	I0503 15:23:40.738321    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e6e24f8e828"
	I0503 15:23:40.759406    9665 logs.go:123] Gathering logs for Docker ...
	I0503 15:23:40.759417    9665 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0503 15:23:40.784498    9665 logs.go:123] Gathering logs for describe nodes ...
	I0503 15:23:40.784507    9665 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0503 15:23:40.820373    9665 logs.go:123] Gathering logs for kube-scheduler [ca00cca503fe] ...
	I0503 15:23:40.820385    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca00cca503fe"
	I0503 15:23:40.841253    9665 logs.go:123] Gathering logs for coredns [d411f2d0da0d] ...
	I0503 15:23:40.841264    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d411f2d0da0d"
	I0503 15:23:40.853140    9665 logs.go:123] Gathering logs for kube-proxy [bbac10efff1c] ...
	I0503 15:23:40.853152    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bbac10efff1c"
	I0503 15:23:40.871510    9665 logs.go:123] Gathering logs for coredns [bcfa459d9a7e] ...
	I0503 15:23:40.871524    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bcfa459d9a7e"
	I0503 15:23:40.885862    9665 logs.go:123] Gathering logs for coredns [c10faa2971c4] ...
	I0503 15:23:40.885873    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c10faa2971c4"
	I0503 15:23:40.897915    9665 logs.go:123] Gathering logs for coredns [6b26fe9dd44c] ...
	I0503 15:23:40.897929    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b26fe9dd44c"
	I0503 15:23:43.411462    9665 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0503 15:23:41.234569    9866 out.go:204]   - Booting up control plane ...
	I0503 15:23:41.234616    9866 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0503 15:23:41.234659    9866 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0503 15:23:41.234703    9866 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0503 15:23:41.234752    9866 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0503 15:23:41.234826    9866 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0503 15:23:46.239775    9866 kubeadm.go:309] [apiclient] All control plane components are healthy after 5.005833 seconds
	I0503 15:23:46.239897    9866 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0503 15:23:46.248056    9866 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0503 15:23:46.761890    9866 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0503 15:23:46.762047    9866 kubeadm.go:309] [mark-control-plane] Marking the node stopped-upgrade-139000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0503 15:23:47.267721    9866 kubeadm.go:309] [bootstrap-token] Using token: rykde1.sku9qwhqyxujsdfz
	I0503 15:23:47.271637    9866 out.go:204]   - Configuring RBAC rules ...
	I0503 15:23:47.271713    9866 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0503 15:23:47.271770    9866 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0503 15:23:47.278084    9866 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0503 15:23:47.279396    9866 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0503 15:23:47.280355    9866 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0503 15:23:47.281857    9866 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0503 15:23:47.286303    9866 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0503 15:23:47.448730    9866 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0503 15:23:47.673517    9866 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0503 15:23:47.673970    9866 kubeadm.go:309] 
	I0503 15:23:47.673999    9866 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0503 15:23:47.674007    9866 kubeadm.go:309] 
	I0503 15:23:47.674045    9866 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0503 15:23:47.674051    9866 kubeadm.go:309] 
	I0503 15:23:47.674068    9866 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0503 15:23:47.674097    9866 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0503 15:23:47.674129    9866 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0503 15:23:47.674134    9866 kubeadm.go:309] 
	I0503 15:23:47.674157    9866 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0503 15:23:47.674164    9866 kubeadm.go:309] 
	I0503 15:23:47.674183    9866 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0503 15:23:47.674185    9866 kubeadm.go:309] 
	I0503 15:23:47.674209    9866 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0503 15:23:47.674245    9866 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0503 15:23:47.674291    9866 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0503 15:23:47.674297    9866 kubeadm.go:309] 
	I0503 15:23:47.674335    9866 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0503 15:23:47.674373    9866 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0503 15:23:47.674377    9866 kubeadm.go:309] 
	I0503 15:23:47.674424    9866 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token rykde1.sku9qwhqyxujsdfz \
	I0503 15:23:47.674470    9866 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:33737b87ad0e0d503b26dd571c4ff24ab2c323775c7952fd1688c095e7432c54 \
	I0503 15:23:47.674482    9866 kubeadm.go:309] 	--control-plane 
	I0503 15:23:47.674485    9866 kubeadm.go:309] 
	I0503 15:23:47.674525    9866 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0503 15:23:47.674527    9866 kubeadm.go:309] 
	I0503 15:23:47.674576    9866 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token rykde1.sku9qwhqyxujsdfz \
	I0503 15:23:47.674620    9866 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:33737b87ad0e0d503b26dd571c4ff24ab2c323775c7952fd1688c095e7432c54 
	I0503 15:23:47.674766    9866 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0503 15:23:47.674846    9866 cni.go:84] Creating CNI manager for ""
	I0503 15:23:47.674856    9866 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0503 15:23:47.677538    9866 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0503 15:23:47.680532    9866 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0503 15:23:47.685257    9866 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
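	(The 496-byte conflist copied to /etc/cni/net.d/1-k8s.conflist above is not reproduced in the log. For reference, a minimal bridge-plus-portmap conflist of the kind minikube generates for this driver/runtime combination might look like the sketch below; the field values are illustrative assumptions rather than the exact file from this run, though the 10.244.0.0/16 pod subnet matches the coredns pod IPs seen later in this log.)

    {
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": {
            "type": "host-local",
            "subnet": "10.244.0.0/16"
          }
        },
        {
          "type": "portmap",
          "capabilities": { "portMappings": true }
        }
      ]
    }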
	I0503 15:23:47.690369    9866 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0503 15:23:47.690422    9866 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes stopped-upgrade-139000 minikube.k8s.io/updated_at=2024_05_03T15_23_47_0700 minikube.k8s.io/version=v1.33.0 minikube.k8s.io/commit=cc00050a34cebd4ea4e95f76540d25d17abab09a minikube.k8s.io/name=stopped-upgrade-139000 minikube.k8s.io/primary=true
	I0503 15:23:47.690423    9866 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0503 15:23:47.723070    9866 kubeadm.go:1107] duration metric: took 32.691625ms to wait for elevateKubeSystemPrivileges
	I0503 15:23:47.732188    9866 ops.go:34] apiserver oom_adj: -16
	W0503 15:23:47.732213    9866 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0503 15:23:47.732219    9866 kubeadm.go:393] duration metric: took 4m12.989554s to StartCluster
	I0503 15:23:47.732229    9866 settings.go:142] acquiring lock: {Name:mkee9fdcf0e1a69d3ca7e09bf6e6cf0362ae7cc4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0503 15:23:47.732320    9866 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/18793-7269/kubeconfig
	I0503 15:23:47.732758    9866 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18793-7269/kubeconfig: {Name:mke212dafcd3f736eb33656fd60033aeff2dfcbc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0503 15:23:47.732972    9866 start.go:234] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0503 15:23:47.737594    9866 out.go:177] * Verifying Kubernetes components...
	I0503 15:23:47.732981    9866 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0503 15:23:47.733057    9866 config.go:182] Loaded profile config "stopped-upgrade-139000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0503 15:23:47.745474    9866 addons.go:69] Setting storage-provisioner=true in profile "stopped-upgrade-139000"
	I0503 15:23:47.745489    9866 addons.go:234] Setting addon storage-provisioner=true in "stopped-upgrade-139000"
	W0503 15:23:47.745495    9866 addons.go:243] addon storage-provisioner should already be in state true
	I0503 15:23:47.745512    9866 host.go:66] Checking if "stopped-upgrade-139000" exists ...
	I0503 15:23:47.745518    9866 addons.go:69] Setting default-storageclass=true in profile "stopped-upgrade-139000"
	I0503 15:23:47.745540    9866 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0503 15:23:47.745577    9866 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "stopped-upgrade-139000"
	I0503 15:23:47.747002    9866 kapi.go:59] client config for stopped-upgrade-139000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/18793-7269/.minikube/profiles/stopped-upgrade-139000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/18793-7269/.minikube/profiles/stopped-upgrade-139000/client.key", CAFile:"/Users/jenkins/minikube-integration/18793-7269/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x101f8fcb0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0503 15:23:47.747188    9866 addons.go:234] Setting addon default-storageclass=true in "stopped-upgrade-139000"
	W0503 15:23:47.747194    9866 addons.go:243] addon default-storageclass should already be in state true
	I0503 15:23:47.747203    9866 host.go:66] Checking if "stopped-upgrade-139000" exists ...
	I0503 15:23:47.749392    9866 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0503 15:23:48.413636    9665 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0503 15:23:48.413771    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0503 15:23:48.426014    9665 logs.go:276] 1 containers: [97b2e83dc539]
	I0503 15:23:48.426085    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0503 15:23:48.437453    9665 logs.go:276] 1 containers: [12854b004aa2]
	I0503 15:23:48.437528    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0503 15:23:48.447700    9665 logs.go:276] 4 containers: [6b26fe9dd44c bcfa459d9a7e c10faa2971c4 d411f2d0da0d]
	I0503 15:23:48.447766    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0503 15:23:48.458673    9665 logs.go:276] 1 containers: [ca00cca503fe]
	I0503 15:23:48.458745    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0503 15:23:48.469586    9665 logs.go:276] 1 containers: [bbac10efff1c]
	I0503 15:23:48.469650    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0503 15:23:48.480643    9665 logs.go:276] 1 containers: [6e6e24f8e828]
	I0503 15:23:48.480714    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0503 15:23:48.490799    9665 logs.go:276] 0 containers: []
	W0503 15:23:48.490810    9665 logs.go:278] No container was found matching "kindnet"
	I0503 15:23:48.490862    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0503 15:23:48.500810    9665 logs.go:276] 1 containers: [45a535211359]
	I0503 15:23:48.500828    9665 logs.go:123] Gathering logs for etcd [12854b004aa2] ...
	I0503 15:23:48.500834    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 12854b004aa2"
	I0503 15:23:48.518781    9665 logs.go:123] Gathering logs for coredns [bcfa459d9a7e] ...
	I0503 15:23:48.518791    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bcfa459d9a7e"
	I0503 15:23:48.530374    9665 logs.go:123] Gathering logs for kube-proxy [bbac10efff1c] ...
	I0503 15:23:48.530385    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bbac10efff1c"
	I0503 15:23:48.541564    9665 logs.go:123] Gathering logs for Docker ...
	I0503 15:23:48.541575    9665 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0503 15:23:48.565287    9665 logs.go:123] Gathering logs for kubelet ...
	I0503 15:23:48.565295    9665 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0503 15:23:48.603152    9665 logs.go:123] Gathering logs for describe nodes ...
	I0503 15:23:48.603170    9665 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0503 15:23:48.638453    9665 logs.go:123] Gathering logs for coredns [6b26fe9dd44c] ...
	I0503 15:23:48.638468    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b26fe9dd44c"
	I0503 15:23:48.650823    9665 logs.go:123] Gathering logs for kube-scheduler [ca00cca503fe] ...
	I0503 15:23:48.650834    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca00cca503fe"
	I0503 15:23:48.664890    9665 logs.go:123] Gathering logs for kube-controller-manager [6e6e24f8e828] ...
	I0503 15:23:48.664901    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e6e24f8e828"
	I0503 15:23:48.683054    9665 logs.go:123] Gathering logs for coredns [c10faa2971c4] ...
	I0503 15:23:48.683068    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c10faa2971c4"
	I0503 15:23:48.696497    9665 logs.go:123] Gathering logs for coredns [d411f2d0da0d] ...
	I0503 15:23:48.696508    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d411f2d0da0d"
	I0503 15:23:48.708318    9665 logs.go:123] Gathering logs for storage-provisioner [45a535211359] ...
	I0503 15:23:48.708330    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45a535211359"
	I0503 15:23:48.719772    9665 logs.go:123] Gathering logs for container status ...
	I0503 15:23:48.719784    9665 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0503 15:23:48.732773    9665 logs.go:123] Gathering logs for dmesg ...
	I0503 15:23:48.732785    9665 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0503 15:23:48.737604    9665 logs.go:123] Gathering logs for kube-apiserver [97b2e83dc539] ...
	I0503 15:23:48.737613    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97b2e83dc539"
	I0503 15:23:47.753483    9866 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0503 15:23:47.753492    9866 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0503 15:23:47.753501    9866 sshutil.go:53] new ssh client: &{IP:localhost Port:51368 SSHKeyPath:/Users/jenkins/minikube-integration/18793-7269/.minikube/machines/stopped-upgrade-139000/id_rsa Username:docker}
	I0503 15:23:47.754202    9866 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0503 15:23:47.754208    9866 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0503 15:23:47.754212    9866 sshutil.go:53] new ssh client: &{IP:localhost Port:51368 SSHKeyPath:/Users/jenkins/minikube-integration/18793-7269/.minikube/machines/stopped-upgrade-139000/id_rsa Username:docker}
	I0503 15:23:47.840141    9866 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0503 15:23:47.844858    9866 api_server.go:52] waiting for apiserver process to appear ...
	I0503 15:23:47.844897    9866 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0503 15:23:47.849526    9866 api_server.go:72] duration metric: took 116.5455ms to wait for apiserver process to appear ...
	I0503 15:23:47.849538    9866 api_server.go:88] waiting for apiserver healthz status ...
	I0503 15:23:47.849548    9866 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0503 15:23:47.865556    9866 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0503 15:23:47.867165    9866 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0503 15:23:51.255858    9665 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0503 15:23:52.851605    9866 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0503 15:23:52.851658    9866 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0503 15:23:56.258065    9665 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0503 15:23:56.258240    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0503 15:23:56.294224    9665 logs.go:276] 1 containers: [97b2e83dc539]
	I0503 15:23:56.294310    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0503 15:23:56.306545    9665 logs.go:276] 1 containers: [12854b004aa2]
	I0503 15:23:56.306614    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0503 15:23:56.316830    9665 logs.go:276] 4 containers: [6b26fe9dd44c bcfa459d9a7e c10faa2971c4 d411f2d0da0d]
	I0503 15:23:56.316895    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0503 15:23:56.327344    9665 logs.go:276] 1 containers: [ca00cca503fe]
	I0503 15:23:56.327409    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0503 15:23:56.337892    9665 logs.go:276] 1 containers: [bbac10efff1c]
	I0503 15:23:56.337959    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0503 15:23:56.348021    9665 logs.go:276] 1 containers: [6e6e24f8e828]
	I0503 15:23:56.348087    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0503 15:23:56.358082    9665 logs.go:276] 0 containers: []
	W0503 15:23:56.358092    9665 logs.go:278] No container was found matching "kindnet"
	I0503 15:23:56.358146    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0503 15:23:56.368425    9665 logs.go:276] 1 containers: [45a535211359]
	I0503 15:23:56.368442    9665 logs.go:123] Gathering logs for describe nodes ...
	I0503 15:23:56.368447    9665 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0503 15:23:56.404003    9665 logs.go:123] Gathering logs for storage-provisioner [45a535211359] ...
	I0503 15:23:56.404013    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45a535211359"
	I0503 15:23:56.415838    9665 logs.go:123] Gathering logs for kubelet ...
	I0503 15:23:56.415850    9665 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0503 15:23:56.450915    9665 logs.go:123] Gathering logs for coredns [bcfa459d9a7e] ...
	I0503 15:23:56.450922    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bcfa459d9a7e"
	I0503 15:23:56.461765    9665 logs.go:123] Gathering logs for kube-scheduler [ca00cca503fe] ...
	I0503 15:23:56.461777    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca00cca503fe"
	I0503 15:23:56.476368    9665 logs.go:123] Gathering logs for etcd [12854b004aa2] ...
	I0503 15:23:56.476377    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 12854b004aa2"
	I0503 15:23:56.490633    9665 logs.go:123] Gathering logs for kube-proxy [bbac10efff1c] ...
	I0503 15:23:56.490643    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bbac10efff1c"
	I0503 15:23:56.502272    9665 logs.go:123] Gathering logs for kube-controller-manager [6e6e24f8e828] ...
	I0503 15:23:56.502280    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e6e24f8e828"
	I0503 15:23:56.528204    9665 logs.go:123] Gathering logs for dmesg ...
	I0503 15:23:56.528214    9665 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0503 15:23:56.532796    9665 logs.go:123] Gathering logs for kube-apiserver [97b2e83dc539] ...
	I0503 15:23:56.532805    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97b2e83dc539"
	I0503 15:23:56.547040    9665 logs.go:123] Gathering logs for coredns [6b26fe9dd44c] ...
	I0503 15:23:56.547049    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b26fe9dd44c"
	I0503 15:23:56.558498    9665 logs.go:123] Gathering logs for coredns [c10faa2971c4] ...
	I0503 15:23:56.558508    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c10faa2971c4"
	I0503 15:23:56.570056    9665 logs.go:123] Gathering logs for coredns [d411f2d0da0d] ...
	I0503 15:23:56.570065    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d411f2d0da0d"
	I0503 15:23:56.582059    9665 logs.go:123] Gathering logs for Docker ...
	I0503 15:23:56.582068    9665 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0503 15:23:56.606021    9665 logs.go:123] Gathering logs for container status ...
	I0503 15:23:56.606027    9665 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0503 15:23:59.119649    9665 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0503 15:23:57.851918    9866 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0503 15:23:57.851949    9866 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0503 15:24:04.121755    9665 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0503 15:24:04.121959    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0503 15:24:04.140375    9665 logs.go:276] 1 containers: [97b2e83dc539]
	I0503 15:24:04.140473    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0503 15:24:04.154368    9665 logs.go:276] 1 containers: [12854b004aa2]
	I0503 15:24:04.154445    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0503 15:24:04.166785    9665 logs.go:276] 4 containers: [6b26fe9dd44c bcfa459d9a7e c10faa2971c4 d411f2d0da0d]
	I0503 15:24:04.166865    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0503 15:24:04.177317    9665 logs.go:276] 1 containers: [ca00cca503fe]
	I0503 15:24:04.177385    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0503 15:24:04.188179    9665 logs.go:276] 1 containers: [bbac10efff1c]
	I0503 15:24:04.188241    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0503 15:24:04.199117    9665 logs.go:276] 1 containers: [6e6e24f8e828]
	I0503 15:24:04.199184    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0503 15:24:04.213431    9665 logs.go:276] 0 containers: []
	W0503 15:24:04.213441    9665 logs.go:278] No container was found matching "kindnet"
	I0503 15:24:04.213497    9665 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0503 15:24:04.223916    9665 logs.go:276] 1 containers: [45a535211359]
	I0503 15:24:04.223932    9665 logs.go:123] Gathering logs for kube-proxy [bbac10efff1c] ...
	I0503 15:24:04.223938    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bbac10efff1c"
	I0503 15:24:04.235737    9665 logs.go:123] Gathering logs for kube-controller-manager [6e6e24f8e828] ...
	I0503 15:24:04.235748    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e6e24f8e828"
	I0503 15:24:04.253470    9665 logs.go:123] Gathering logs for Docker ...
	I0503 15:24:04.253481    9665 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0503 15:24:04.277647    9665 logs.go:123] Gathering logs for storage-provisioner [45a535211359] ...
	I0503 15:24:04.277660    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45a535211359"
	I0503 15:24:04.289847    9665 logs.go:123] Gathering logs for dmesg ...
	I0503 15:24:04.289859    9665 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0503 15:24:04.294181    9665 logs.go:123] Gathering logs for describe nodes ...
	I0503 15:24:04.294189    9665 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0503 15:24:04.328501    9665 logs.go:123] Gathering logs for kube-apiserver [97b2e83dc539] ...
	I0503 15:24:04.328515    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97b2e83dc539"
	I0503 15:24:04.342816    9665 logs.go:123] Gathering logs for etcd [12854b004aa2] ...
	I0503 15:24:04.342829    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 12854b004aa2"
	I0503 15:24:04.357025    9665 logs.go:123] Gathering logs for coredns [6b26fe9dd44c] ...
	I0503 15:24:04.357038    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b26fe9dd44c"
	I0503 15:24:04.369013    9665 logs.go:123] Gathering logs for coredns [bcfa459d9a7e] ...
	I0503 15:24:04.369024    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bcfa459d9a7e"
	I0503 15:24:04.380576    9665 logs.go:123] Gathering logs for kube-scheduler [ca00cca503fe] ...
	I0503 15:24:04.380587    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca00cca503fe"
	I0503 15:24:02.852195    9866 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0503 15:24:02.852217    9866 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0503 15:24:04.395675    9665 logs.go:123] Gathering logs for container status ...
	I0503 15:24:04.395686    9665 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0503 15:24:04.411854    9665 logs.go:123] Gathering logs for coredns [d411f2d0da0d] ...
	I0503 15:24:04.411866    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d411f2d0da0d"
	I0503 15:24:04.424171    9665 logs.go:123] Gathering logs for kubelet ...
	I0503 15:24:04.424182    9665 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0503 15:24:04.461682    9665 logs.go:123] Gathering logs for coredns [c10faa2971c4] ...
	I0503 15:24:04.461693    9665 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c10faa2971c4"
	I0503 15:24:06.975735    9665 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0503 15:24:07.852533    9866 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0503 15:24:07.852555    9866 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0503 15:24:11.977822    9665 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0503 15:24:11.982156    9665 out.go:177] 
	W0503 15:24:11.986102    9665 out.go:239] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	W0503 15:24:11.986111    9665 out.go:239] * 
	W0503 15:24:11.986655    9665 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0503 15:24:11.998118    9665 out.go:177] 
	I0503 15:24:12.853014    9866 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0503 15:24:12.853033    9866 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0503 15:24:17.853613    9866 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0503 15:24:17.853636    9866 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W0503 15:24:18.214936    9866 out.go:239] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I0503 15:24:18.219525    9866 out.go:177] * Enabled addons: storage-provisioner
	I0503 15:24:18.225241    9866 addons.go:505] duration metric: took 30.492949916s for enable addons: enabled=[storage-provisioner]
	I0503 15:24:22.854835    9866 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0503 15:24:22.854861    9866 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
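	(The repeated api_server.go:253/269 pairs above are minikube polling the apiserver's /healthz endpoint with a short per-request timeout until an overall deadline expires, which is why each probe ends in "context deadline exceeded (Client.Timeout exceeded while awaiting headers)". The sketch below is a minimal, illustrative Go reproduction of that pattern, not minikube's actual implementation; the URL, the roughly 5-second request timeout, and the 6m0s overall wait are taken from the log, and everything else is assumed.)

    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{
            Timeout: 5 * time.Second, // per-request timeout, matching the ~5s gaps between probes above
            Transport: &http.Transport{
                // The test cluster uses a self-signed CA; verification is skipped in this sketch only.
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
            },
        }
        deadline := time.Now().Add(6 * time.Minute) // mirrors "Will wait 6m0s for node"
        for time.Now().Before(deadline) {
            resp, err := client.Get("https://10.0.2.15:8443/healthz")
            if err != nil {
                fmt.Println("stopped:", err) // corresponds to the api_server.go:269 lines
                time.Sleep(time.Second)      // back off briefly before re-checking
                continue
            }
            resp.Body.Close()
            if resp.StatusCode == http.StatusOK {
                fmt.Println("apiserver healthy")
                return
            }
        }
        fmt.Println("apiserver healthz never reported healthy: context deadline exceeded")
    }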
	
	
	==> Docker <==
	-- Journal begins at Fri 2024-05-03 22:15:22 UTC, ends at Fri 2024-05-03 22:24:28 UTC. --
	May 03 22:24:13 running-upgrade-916000 dockerd[3216]: time="2024-05-03T22:24:13.616486318Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	May 03 22:24:13 running-upgrade-916000 dockerd[3216]: time="2024-05-03T22:24:13.616516649Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	May 03 22:24:13 running-upgrade-916000 dockerd[3216]: time="2024-05-03T22:24:13.616531982Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 03 22:24:13 running-upgrade-916000 dockerd[3216]: time="2024-05-03T22:24:13.616578353Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/d7e79ea240ac6ee41b997ca9e3f6a2d71e14fff1724061a350fc9a3adf538c9c pid=18166 runtime=io.containerd.runc.v2
	May 03 22:24:13 running-upgrade-916000 cri-dockerd[3061]: time="2024-05-03T22:24:13Z" level=error msg="ContainerStats resp: {0x400009d880 linux}"
	May 03 22:24:14 running-upgrade-916000 cri-dockerd[3061]: time="2024-05-03T22:24:14Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	May 03 22:24:14 running-upgrade-916000 cri-dockerd[3061]: time="2024-05-03T22:24:14Z" level=error msg="ContainerStats resp: {0x400066d440 linux}"
	May 03 22:24:14 running-upgrade-916000 cri-dockerd[3061]: time="2024-05-03T22:24:14Z" level=error msg="ContainerStats resp: {0x400066d9c0 linux}"
	May 03 22:24:14 running-upgrade-916000 cri-dockerd[3061]: time="2024-05-03T22:24:14Z" level=error msg="ContainerStats resp: {0x400066db00 linux}"
	May 03 22:24:14 running-upgrade-916000 cri-dockerd[3061]: time="2024-05-03T22:24:14Z" level=error msg="ContainerStats resp: {0x4000828900 linux}"
	May 03 22:24:14 running-upgrade-916000 cri-dockerd[3061]: time="2024-05-03T22:24:14Z" level=error msg="ContainerStats resp: {0x4000828d00 linux}"
	May 03 22:24:14 running-upgrade-916000 cri-dockerd[3061]: time="2024-05-03T22:24:14Z" level=error msg="ContainerStats resp: {0x4000829140 linux}"
	May 03 22:24:14 running-upgrade-916000 cri-dockerd[3061]: time="2024-05-03T22:24:14Z" level=error msg="ContainerStats resp: {0x4000829780 linux}"
	May 03 22:24:19 running-upgrade-916000 cri-dockerd[3061]: time="2024-05-03T22:24:19Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	May 03 22:24:24 running-upgrade-916000 cri-dockerd[3061]: time="2024-05-03T22:24:24Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	May 03 22:24:24 running-upgrade-916000 cri-dockerd[3061]: time="2024-05-03T22:24:24Z" level=error msg="ContainerStats resp: {0x40008bc040 linux}"
	May 03 22:24:24 running-upgrade-916000 cri-dockerd[3061]: time="2024-05-03T22:24:24Z" level=error msg="ContainerStats resp: {0x40007c1f00 linux}"
	May 03 22:24:25 running-upgrade-916000 cri-dockerd[3061]: time="2024-05-03T22:24:25Z" level=error msg="ContainerStats resp: {0x40008bd680 linux}"
	May 03 22:24:26 running-upgrade-916000 cri-dockerd[3061]: time="2024-05-03T22:24:26Z" level=error msg="ContainerStats resp: {0x400095d580 linux}"
	May 03 22:24:26 running-upgrade-916000 cri-dockerd[3061]: time="2024-05-03T22:24:26Z" level=error msg="ContainerStats resp: {0x400095d740 linux}"
	May 03 22:24:26 running-upgrade-916000 cri-dockerd[3061]: time="2024-05-03T22:24:26Z" level=error msg="ContainerStats resp: {0x400095dc80 linux}"
	May 03 22:24:26 running-upgrade-916000 cri-dockerd[3061]: time="2024-05-03T22:24:26Z" level=error msg="ContainerStats resp: {0x400066d380 linux}"
	May 03 22:24:26 running-upgrade-916000 cri-dockerd[3061]: time="2024-05-03T22:24:26Z" level=error msg="ContainerStats resp: {0x4000996540 linux}"
	May 03 22:24:26 running-upgrade-916000 cri-dockerd[3061]: time="2024-05-03T22:24:26Z" level=error msg="ContainerStats resp: {0x4000996d00 linux}"
	May 03 22:24:26 running-upgrade-916000 cri-dockerd[3061]: time="2024-05-03T22:24:26Z" level=error msg="ContainerStats resp: {0x40009975c0 linux}"
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID
	d7e79ea240ac6       edaa71f2aee88       15 seconds ago      Running             coredns                   2                   5e1327b8250d0
	9a1792a06160b       edaa71f2aee88       15 seconds ago      Running             coredns                   2                   b9c1588a8e00a
	6b26fe9dd44c3       edaa71f2aee88       2 minutes ago       Exited              coredns                   1                   5e1327b8250d0
	bcfa459d9a7e8       edaa71f2aee88       2 minutes ago       Exited              coredns                   1                   b9c1588a8e00a
	bbac10efff1ca       fcbd620bbac08       4 minutes ago       Running             kube-proxy                0                   05060184fab6e
	45a535211359d       66749159455b3       4 minutes ago       Running             storage-provisioner       0                   42e272bfc0132
	ca00cca503fec       000c19baf6bba       4 minutes ago       Running             kube-scheduler            0                   f2528893041f0
	12854b004aa29       a9a710bb96df0       4 minutes ago       Running             etcd                      0                   7859fcffb1157
	97b2e83dc5396       7c5896a75862a       4 minutes ago       Running             kube-apiserver            0                   379dd60f7c202
	6e6e24f8e8288       f61bbe9259d7c       4 minutes ago       Running             kube-controller-manager   0                   d2e80acf1ce92
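
	Note that only the coredns containers sit at ATTEMPT 2 while every control-plane container is still at ATTEMPT 0, so the restarts are confined to DNS. A cross-check sketch against the runtime directly, assuming docker is reachable over ssh in this profile:

	    out/minikube-darwin-arm64 -p running-upgrade-916000 ssh "sudo docker ps -a --format 'table {{.ID}}\t{{.Names}}\t{{.Status}}'"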
	
	
	==> coredns [6b26fe9dd44c] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 3502383132869949011.8586139576570523534. HINFO: read udp 10.244.0.3:38244->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 3502383132869949011.8586139576570523534. HINFO: read udp 10.244.0.3:51665->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 3502383132869949011.8586139576570523534. HINFO: read udp 10.244.0.3:50507->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 3502383132869949011.8586139576570523534. HINFO: read udp 10.244.0.3:45845->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 3502383132869949011.8586139576570523534. HINFO: read udp 10.244.0.3:48743->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 3502383132869949011.8586139576570523534. HINFO: read udp 10.244.0.3:55587->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 3502383132869949011.8586139576570523534. HINFO: read udp 10.244.0.3:52288->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 3502383132869949011.8586139576570523534. HINFO: read udp 10.244.0.3:44445->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 3502383132869949011.8586139576570523534. HINFO: read udp 10.244.0.3:60632->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 3502383132869949011.8586139576570523534. HINFO: read udp 10.244.0.3:42292->10.0.2.3:53: i/o timeout
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [9a1792a06160] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 4130212952642563770.8630826854856624645. HINFO: read udp 10.244.0.2:40828->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 4130212952642563770.8630826854856624645. HINFO: read udp 10.244.0.2:52696->10.0.2.3:53: i/o timeout
	
	
	==> coredns [bcfa459d9a7e] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 2833941806212306363.7120330084193846450. HINFO: read udp 10.244.0.2:47916->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 2833941806212306363.7120330084193846450. HINFO: read udp 10.244.0.2:43842->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 2833941806212306363.7120330084193846450. HINFO: read udp 10.244.0.2:34321->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 2833941806212306363.7120330084193846450. HINFO: read udp 10.244.0.2:48497->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 2833941806212306363.7120330084193846450. HINFO: read udp 10.244.0.2:57728->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 2833941806212306363.7120330084193846450. HINFO: read udp 10.244.0.2:51170->10.0.2.3:53: i/o timeout
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [d7e79ea240ac] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 8551355505729050003.4359640516830002674. HINFO: read udp 10.244.0.3:44116->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 8551355505729050003.4359640516830002674. HINFO: read udp 10.244.0.3:52068->10.0.2.3:53: i/o timeout
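
	All four CoreDNS instances fail the same way: the startup HINFO self-check times out against 10.0.2.3, the default resolver address QEMU hands to the guest, so upstream DNS from the pod network appears dead even though CoreDNS itself starts. A spot-check sketch of that path, assuming the node is still reachable and a busybox-style nslookup is present in the guest:

	    out/minikube-darwin-arm64 -p running-upgrade-916000 ssh "nslookup kubernetes.io 10.0.2.3"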
	
	
	==> describe nodes <==
	Name:               running-upgrade-916000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=running-upgrade-916000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=cc00050a34cebd4ea4e95f76540d25d17abab09a
	                    minikube.k8s.io/name=running-upgrade-916000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_05_03T15_20_11_0700
	                    minikube.k8s.io/version=v1.33.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 03 May 2024 22:20:08 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  running-upgrade-916000
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 03 May 2024 22:24:25 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 03 May 2024 22:20:11 +0000   Fri, 03 May 2024 22:20:07 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 03 May 2024 22:20:11 +0000   Fri, 03 May 2024 22:20:07 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 03 May 2024 22:20:11 +0000   Fri, 03 May 2024 22:20:07 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 03 May 2024 22:20:11 +0000   Fri, 03 May 2024 22:20:11 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  10.0.2.15
	  Hostname:    running-upgrade-916000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             2148820Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             2148820Ki
	  pods:               110
	System Info:
	  Machine ID:                 0db878804ce64c9ab31c6dd7ef2a0013
	  System UUID:                0db878804ce64c9ab31c6dd7ef2a0013
	  Boot ID:                    97ae5bb4-0e17-4177-97e6-5ec72f995176
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://20.10.16
	  Kubelet Version:            v1.24.1
	  Kube-Proxy Version:         v1.24.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6d4b75cb6d-pv48s                          100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     4m3s
	  kube-system                 coredns-6d4b75cb6d-qfd8q                          100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     4m3s
	  kube-system                 etcd-running-upgrade-916000                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m17s
	  kube-system                 kube-apiserver-running-upgrade-916000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m17s
	  kube-system                 kube-controller-manager-running-upgrade-916000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m17s
	  kube-system                 kube-proxy-ccgqj                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m3s
	  kube-system                 kube-scheduler-running-upgrade-916000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m19s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m17s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             240Mi (11%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	  hugepages-32Mi     0 (0%)       0 (0%)
	  hugepages-64Ki     0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m2s                   kube-proxy       
	  Normal  NodeHasSufficientMemory  4m23s (x4 over 4m23s)  kubelet          Node running-upgrade-916000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m23s (x4 over 4m23s)  kubelet          Node running-upgrade-916000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m23s (x4 over 4m23s)  kubelet          Node running-upgrade-916000 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  4m17s                  kubelet          Node running-upgrade-916000 status is now: NodeHasSufficientMemory
	  Normal  NodeAllocatableEnforced  4m17s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    4m17s                  kubelet          Node running-upgrade-916000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m17s                  kubelet          Node running-upgrade-916000 status is now: NodeHasSufficientPID
	  Normal  NodeReady                4m17s                  kubelet          Node running-upgrade-916000 status is now: NodeReady
	  Normal  Starting                 4m17s                  kubelet          Starting kubelet.
	  Normal  RegisteredNode           4m4s                   node-controller  Node running-upgrade-916000 event: Registered Node running-upgrade-916000 in Controller
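
	The node itself reports Ready with no pressure conditions, which points the failure above the node layer. A sketch that pulls just the condition summary this table is rendered from, assuming the profile's kubeconfig is the active context:

	    kubectl get node running-upgrade-916000 -o jsonpath='{range .status.conditions[*]}{.type}={.status}{"\n"}{end}'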
	
	
	==> dmesg <==
	[  +1.427361] systemd-fstab-generator[872]: Ignoring "noauto" for root device
	[  +0.078487] systemd-fstab-generator[883]: Ignoring "noauto" for root device
	[  +0.071613] systemd-fstab-generator[894]: Ignoring "noauto" for root device
	[  +1.139034] kauditd_printk_skb: 61 callbacks suppressed
	[  +0.084364] systemd-fstab-generator[1044]: Ignoring "noauto" for root device
	[  +0.080432] systemd-fstab-generator[1055]: Ignoring "noauto" for root device
	[  +2.533270] systemd-fstab-generator[1286]: Ignoring "noauto" for root device
	[  +9.152815] systemd-fstab-generator[1935]: Ignoring "noauto" for root device
	[  +2.365064] systemd-fstab-generator[2198]: Ignoring "noauto" for root device
	[  +0.317514] systemd-fstab-generator[2239]: Ignoring "noauto" for root device
	[  +0.095887] systemd-fstab-generator[2250]: Ignoring "noauto" for root device
	[  +0.094758] systemd-fstab-generator[2263]: Ignoring "noauto" for root device
	[  +2.621805] kauditd_printk_skb: 39 callbacks suppressed
	[  +0.222437] systemd-fstab-generator[3017]: Ignoring "noauto" for root device
	[  +0.078301] systemd-fstab-generator[3029]: Ignoring "noauto" for root device
	[  +0.077473] systemd-fstab-generator[3040]: Ignoring "noauto" for root device
	[  +0.099357] systemd-fstab-generator[3054]: Ignoring "noauto" for root device
	[  +2.058489] systemd-fstab-generator[3203]: Ignoring "noauto" for root device
	[  +3.745646] systemd-fstab-generator[3582]: Ignoring "noauto" for root device
	[  +1.159841] systemd-fstab-generator[3840]: Ignoring "noauto" for root device
	[May 3 22:16] kauditd_printk_skb: 68 callbacks suppressed
	[May 3 22:20] kauditd_printk_skb: 23 callbacks suppressed
	[  +1.361144] systemd-fstab-generator[11155]: Ignoring "noauto" for root device
	[  +5.665848] systemd-fstab-generator[11767]: Ignoring "noauto" for root device
	[  +0.439540] systemd-fstab-generator[11900]: Ignoring "noauto" for root device
	
	
	==> etcd [12854b004aa2] <==
	{"level":"info","ts":"2024-05-03T22:20:06.562Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"f074a195de705325","initial-advertise-peer-urls":["https://10.0.2.15:2380"],"listen-peer-urls":["https://10.0.2.15:2380"],"advertise-client-urls":["https://10.0.2.15:2379"],"listen-client-urls":["https://10.0.2.15:2379","https://127.0.0.1:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-05-03T22:20:06.562Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-05-03T22:20:06.562Z","caller":"etcdserver/server.go:736","msg":"started as single-node; fast-forwarding election ticks","local-member-id":"f074a195de705325","forward-ticks":9,"forward-duration":"900ms","election-ticks":10,"election-timeout":"1s"}
	{"level":"info","ts":"2024-05-03T22:20:06.562Z","caller":"embed/etcd.go:581","msg":"serving peer traffic","address":"10.0.2.15:2380"}
	{"level":"info","ts":"2024-05-03T22:20:06.563Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"10.0.2.15:2380"}
	{"level":"info","ts":"2024-05-03T22:20:06.563Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 switched to configuration voters=(17326651331455243045)"}
	{"level":"info","ts":"2024-05-03T22:20:06.563Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"ef296cf39f5d9d66","local-member-id":"f074a195de705325","added-peer-id":"f074a195de705325","added-peer-peer-urls":["https://10.0.2.15:2380"]}
	{"level":"info","ts":"2024-05-03T22:20:07.044Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 is starting a new election at term 1"}
	{"level":"info","ts":"2024-05-03T22:20:07.044Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-05-03T22:20:07.044Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 received MsgPreVoteResp from f074a195de705325 at term 1"}
	{"level":"info","ts":"2024-05-03T22:20:07.044Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 became candidate at term 2"}
	{"level":"info","ts":"2024-05-03T22:20:07.044Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 received MsgVoteResp from f074a195de705325 at term 2"}
	{"level":"info","ts":"2024-05-03T22:20:07.044Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 became leader at term 2"}
	{"level":"info","ts":"2024-05-03T22:20:07.044Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: f074a195de705325 elected leader f074a195de705325 at term 2"}
	{"level":"info","ts":"2024-05-03T22:20:07.044Z","caller":"etcdserver/server.go:2507","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-05-03T22:20:07.044Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"ef296cf39f5d9d66","local-member-id":"f074a195de705325","cluster-version":"3.5"}
	{"level":"info","ts":"2024-05-03T22:20:07.045Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-05-03T22:20:07.045Z","caller":"etcdserver/server.go:2531","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-05-03T22:20:07.045Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"f074a195de705325","local-member-attributes":"{Name:running-upgrade-916000 ClientURLs:[https://10.0.2.15:2379]}","request-path":"/0/members/f074a195de705325/attributes","cluster-id":"ef296cf39f5d9d66","publish-timeout":"7s"}
	{"level":"info","ts":"2024-05-03T22:20:07.045Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-05-03T22:20:07.045Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-05-03T22:20:07.045Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-05-03T22:20:07.046Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"10.0.2.15:2379"}
	{"level":"info","ts":"2024-05-03T22:20:07.049Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-05-03T22:20:07.050Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
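
	The etcd log shows a clean single-member bootstrap: pre-vote, election at term 2, leadership, and client serving on 10.0.2.15:2379. A health-probe sketch through the pod, assuming etcdctl ships in this etcd image and that the certificate paths follow minikube's default layout:

	    kubectl -n kube-system exec etcd-running-upgrade-916000 -- etcdctl --endpoints=https://127.0.0.1:2379 --cacert=/var/lib/minikube/certs/etcd/ca.crt --cert=/var/lib/minikube/certs/etcd/server.crt --key=/var/lib/minikube/certs/etcd/server.key endpoint health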
	
	
	==> kernel <==
	 22:24:28 up 9 min,  0 users,  load average: 0.10, 0.22, 0.14
	Linux running-upgrade-916000 5.10.57 #1 SMP PREEMPT Thu Jun 16 21:01:29 UTC 2022 aarch64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	
	==> kube-apiserver [97b2e83dc539] <==
	I0503 22:20:08.368797       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0503 22:20:08.368808       1 cache.go:39] Caches are synced for autoregister controller
	I0503 22:20:08.369389       1 shared_informer.go:262] Caches are synced for cluster_authentication_trust_controller
	I0503 22:20:08.374280       1 apf_controller.go:322] Running API Priority and Fairness config worker
	I0503 22:20:08.374310       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0503 22:20:08.377294       1 shared_informer.go:262] Caches are synced for node_authorizer
	I0503 22:20:08.383802       1 shared_informer.go:262] Caches are synced for crd-autoregister
	I0503 22:20:09.106269       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0503 22:20:09.269580       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0503 22:20:09.271367       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0503 22:20:09.271381       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0503 22:20:09.394746       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0503 22:20:09.404981       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0503 22:20:09.438715       1 alloc.go:327] "allocated clusterIPs" service="default/kubernetes" clusterIPs=map[IPv4:10.96.0.1]
	W0503 22:20:09.441806       1 lease.go:234] Resetting endpoints for master service "kubernetes" to [10.0.2.15]
	I0503 22:20:09.442117       1 controller.go:611] quota admission added evaluator for: endpoints
	I0503 22:20:09.443265       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0503 22:20:10.402936       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0503 22:20:11.029543       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0503 22:20:11.032536       1 alloc.go:327] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs=map[IPv4:10.96.0.10]
	I0503 22:20:11.039138       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0503 22:20:11.085779       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I0503 22:20:25.210417       1 controller.go:611] quota admission added evaluator for: replicasets.apps
	I0503 22:20:25.360001       1 controller.go:611] quota admission added evaluator for: controllerrevisions.apps
	I0503 22:20:26.047137       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	
	
	==> kube-controller-manager [6e6e24f8e828] <==
	I0503 22:20:24.461463       1 shared_informer.go:262] Caches are synced for deployment
	I0503 22:20:24.463153       1 shared_informer.go:262] Caches are synced for namespace
	I0503 22:20:24.559274       1 shared_informer.go:262] Caches are synced for endpoint_slice
	I0503 22:20:24.559280       1 shared_informer.go:262] Caches are synced for endpoint
	I0503 22:20:24.604177       1 shared_informer.go:262] Caches are synced for endpoint_slice_mirroring
	I0503 22:20:24.609981       1 shared_informer.go:262] Caches are synced for PV protection
	I0503 22:20:24.611463       1 shared_informer.go:262] Caches are synced for persistent volume
	I0503 22:20:24.614206       1 shared_informer.go:262] Caches are synced for resource quota
	I0503 22:20:24.633122       1 shared_informer.go:262] Caches are synced for taint
	I0503 22:20:24.633231       1 node_lifecycle_controller.go:1399] Initializing eviction metric for zone: 
	I0503 22:20:24.633277       1 taint_manager.go:187] "Starting NoExecuteTaintManager"
	W0503 22:20:24.633283       1 node_lifecycle_controller.go:1014] Missing timestamp for Node running-upgrade-916000. Assuming now as a timestamp.
	I0503 22:20:24.633365       1 node_lifecycle_controller.go:1215] Controller detected that zone  is now in state Normal.
	I0503 22:20:24.633378       1 event.go:294] "Event occurred" object="running-upgrade-916000" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node running-upgrade-916000 event: Registered Node running-upgrade-916000 in Controller"
	I0503 22:20:24.647068       1 shared_informer.go:262] Caches are synced for attach detach
	I0503 22:20:24.657297       1 shared_informer.go:262] Caches are synced for expand
	I0503 22:20:24.671266       1 shared_informer.go:262] Caches are synced for resource quota
	I0503 22:20:24.709253       1 shared_informer.go:262] Caches are synced for daemon sets
	I0503 22:20:25.073045       1 shared_informer.go:262] Caches are synced for garbage collector
	I0503 22:20:25.141393       1 shared_informer.go:262] Caches are synced for garbage collector
	I0503 22:20:25.141404       1 garbagecollector.go:158] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0503 22:20:25.211529       1 event.go:294] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-6d4b75cb6d to 2"
	I0503 22:20:25.362767       1 event.go:294] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-ccgqj"
	I0503 22:20:25.462186       1 event.go:294] "Event occurred" object="kube-system/coredns-6d4b75cb6d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-6d4b75cb6d-qfd8q"
	I0503 22:20:25.465575       1 event.go:294] "Event occurred" object="kube-system/coredns-6d4b75cb6d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-6d4b75cb6d-pv48s"
	
	
	==> kube-proxy [bbac10efff1c] <==
	I0503 22:20:25.941805       1 node.go:163] Successfully retrieved node IP: 10.0.2.15
	I0503 22:20:25.942124       1 server_others.go:138] "Detected node IP" address="10.0.2.15"
	I0503 22:20:25.942143       1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0503 22:20:26.040354       1 server_others.go:199] "kube-proxy running in single-stack mode, this ipFamily is not supported" ipFamily=IPv6
	I0503 22:20:26.040433       1 server_others.go:206] "Using iptables Proxier"
	I0503 22:20:26.042353       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0503 22:20:26.042498       1 server.go:661] "Version info" version="v1.24.1"
	I0503 22:20:26.042503       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0503 22:20:26.043373       1 config.go:317] "Starting service config controller"
	I0503 22:20:26.043731       1 shared_informer.go:255] Waiting for caches to sync for service config
	I0503 22:20:26.043747       1 config.go:444] "Starting node config controller"
	I0503 22:20:26.043750       1 shared_informer.go:255] Waiting for caches to sync for node config
	I0503 22:20:26.045440       1 config.go:226] "Starting endpoint slice config controller"
	I0503 22:20:26.045445       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
	I0503 22:20:26.146058       1 shared_informer.go:262] Caches are synced for node config
	I0503 22:20:26.146074       1 shared_informer.go:262] Caches are synced for service config
	I0503 22:20:26.147186       1 shared_informer.go:262] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [ca00cca503fe] <==
	W0503 22:20:08.341413       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0503 22:20:08.341422       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0503 22:20:08.341455       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0503 22:20:08.341463       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0503 22:20:08.341483       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0503 22:20:08.341490       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0503 22:20:08.341511       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0503 22:20:08.341515       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0503 22:20:08.341545       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0503 22:20:08.341557       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0503 22:20:08.341577       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0503 22:20:08.341583       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0503 22:20:08.341621       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0503 22:20:08.341628       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0503 22:20:08.341861       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0503 22:20:08.341870       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0503 22:20:08.341889       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0503 22:20:08.341895       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0503 22:20:09.171294       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0503 22:20:09.171331       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0503 22:20:09.263419       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0503 22:20:09.263469       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0503 22:20:09.349461       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0503 22:20:09.349479       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0503 22:20:09.839573       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
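
	The burst of "forbidden" list/watch failures above is the usual RBAC bootstrap race at apiserver startup; the final "Caches are synced" line indicates the scheduler recovered once permissions propagated. A sketch to confirm the errors stopped, assuming the pod is still running (pod name from the container table above):

	    kubectl -n kube-system logs kube-scheduler-running-upgrade-916000 --since=2m | grep -c forbidden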
	
	
	==> kubelet <==
	-- Journal begins at Fri 2024-05-03 22:15:22 UTC, ends at Fri 2024-05-03 22:24:28 UTC. --
	May 03 22:20:13 running-upgrade-916000 kubelet[11773]: I0503 22:20:13.110265   11773 kubelet_getters.go:300] "Path does not exist" path="/var/lib/kubelet/pods/9a2aff39-6718-4e30-90d8-38a2faaeaa06/volumes"
	May 03 22:20:13 running-upgrade-916000 kubelet[11773]: I0503 22:20:13.110318   11773 kubelet_getters.go:300] "Path does not exist" path="/var/lib/kubelet/pods/9e9e992f-775d-4e80-9173-081759f0aceb/volumes"
	May 03 22:20:13 running-upgrade-916000 kubelet[11773]: I0503 22:20:13.258848   11773 request.go:601] Waited for 1.127050465s due to client-side throttling, not priority and fairness, request: POST:https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods
	May 03 22:20:13 running-upgrade-916000 kubelet[11773]: E0503 22:20:13.265603   11773 kubelet.go:1690] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-running-upgrade-916000\" already exists" pod="kube-system/kube-controller-manager-running-upgrade-916000"
	May 03 22:20:24 running-upgrade-916000 kubelet[11773]: I0503 22:20:24.508646   11773 kuberuntime_manager.go:1095] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	May 03 22:20:24 running-upgrade-916000 kubelet[11773]: I0503 22:20:24.509191   11773 kubelet_network.go:60] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	May 03 22:20:24 running-upgrade-916000 kubelet[11773]: I0503 22:20:24.637811   11773 topology_manager.go:200] "Topology Admit Handler"
	May 03 22:20:24 running-upgrade-916000 kubelet[11773]: I0503 22:20:24.710285   11773 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fnvpm\" (UniqueName: \"kubernetes.io/projected/9ee789ab-3031-4d1e-8ffd-6e9c8aab4241-kube-api-access-fnvpm\") pod \"storage-provisioner\" (UID: \"9ee789ab-3031-4d1e-8ffd-6e9c8aab4241\") " pod="kube-system/storage-provisioner"
	May 03 22:20:24 running-upgrade-916000 kubelet[11773]: I0503 22:20:24.710333   11773 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/9ee789ab-3031-4d1e-8ffd-6e9c8aab4241-tmp\") pod \"storage-provisioner\" (UID: \"9ee789ab-3031-4d1e-8ffd-6e9c8aab4241\") " pod="kube-system/storage-provisioner"
	May 03 22:20:24 running-upgrade-916000 kubelet[11773]: E0503 22:20:24.815375   11773 projected.go:286] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	May 03 22:20:24 running-upgrade-916000 kubelet[11773]: E0503 22:20:24.815403   11773 projected.go:192] Error preparing data for projected volume kube-api-access-fnvpm for pod kube-system/storage-provisioner: configmap "kube-root-ca.crt" not found
	May 03 22:20:24 running-upgrade-916000 kubelet[11773]: E0503 22:20:24.815447   11773 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/projected/9ee789ab-3031-4d1e-8ffd-6e9c8aab4241-kube-api-access-fnvpm podName:9ee789ab-3031-4d1e-8ffd-6e9c8aab4241 nodeName:}" failed. No retries permitted until 2024-05-03 22:20:25.3154336 +0000 UTC m=+14.294983915 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-fnvpm" (UniqueName: "kubernetes.io/projected/9ee789ab-3031-4d1e-8ffd-6e9c8aab4241-kube-api-access-fnvpm") pod "storage-provisioner" (UID: "9ee789ab-3031-4d1e-8ffd-6e9c8aab4241") : configmap "kube-root-ca.crt" not found
	May 03 22:20:25 running-upgrade-916000 kubelet[11773]: I0503 22:20:25.366500   11773 topology_manager.go:200] "Topology Admit Handler"
	May 03 22:20:25 running-upgrade-916000 kubelet[11773]: I0503 22:20:25.465291   11773 topology_manager.go:200] "Topology Admit Handler"
	May 03 22:20:25 running-upgrade-916000 kubelet[11773]: I0503 22:20:25.468086   11773 topology_manager.go:200] "Topology Admit Handler"
	May 03 22:20:25 running-upgrade-916000 kubelet[11773]: I0503 22:20:25.515710   11773 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c9d8ba23-e256-4aa7-973c-2f51ebe6605c-lib-modules\") pod \"kube-proxy-ccgqj\" (UID: \"c9d8ba23-e256-4aa7-973c-2f51ebe6605c\") " pod="kube-system/kube-proxy-ccgqj"
	May 03 22:20:25 running-upgrade-916000 kubelet[11773]: I0503 22:20:25.515747   11773 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c9d8ba23-e256-4aa7-973c-2f51ebe6605c-xtables-lock\") pod \"kube-proxy-ccgqj\" (UID: \"c9d8ba23-e256-4aa7-973c-2f51ebe6605c\") " pod="kube-system/kube-proxy-ccgqj"
	May 03 22:20:25 running-upgrade-916000 kubelet[11773]: I0503 22:20:25.515758   11773 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8pc6f\" (UniqueName: \"kubernetes.io/projected/c9d8ba23-e256-4aa7-973c-2f51ebe6605c-kube-api-access-8pc6f\") pod \"kube-proxy-ccgqj\" (UID: \"c9d8ba23-e256-4aa7-973c-2f51ebe6605c\") " pod="kube-system/kube-proxy-ccgqj"
	May 03 22:20:25 running-upgrade-916000 kubelet[11773]: I0503 22:20:25.515769   11773 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/c9d8ba23-e256-4aa7-973c-2f51ebe6605c-kube-proxy\") pod \"kube-proxy-ccgqj\" (UID: \"c9d8ba23-e256-4aa7-973c-2f51ebe6605c\") " pod="kube-system/kube-proxy-ccgqj"
	May 03 22:20:25 running-upgrade-916000 kubelet[11773]: I0503 22:20:25.616017   11773 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b7627b5f-9b5e-45ce-b4a3-234bd07f6fe7-config-volume\") pod \"coredns-6d4b75cb6d-qfd8q\" (UID: \"b7627b5f-9b5e-45ce-b4a3-234bd07f6fe7\") " pod="kube-system/coredns-6d4b75cb6d-qfd8q"
	May 03 22:20:25 running-upgrade-916000 kubelet[11773]: I0503 22:20:25.616048   11773 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pwprh\" (UniqueName: \"kubernetes.io/projected/566267a7-c32f-4597-9103-4d3a2fd4a5b2-kube-api-access-pwprh\") pod \"coredns-6d4b75cb6d-pv48s\" (UID: \"566267a7-c32f-4597-9103-4d3a2fd4a5b2\") " pod="kube-system/coredns-6d4b75cb6d-pv48s"
	May 03 22:20:25 running-upgrade-916000 kubelet[11773]: I0503 22:20:25.616064   11773 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/566267a7-c32f-4597-9103-4d3a2fd4a5b2-config-volume\") pod \"coredns-6d4b75cb6d-pv48s\" (UID: \"566267a7-c32f-4597-9103-4d3a2fd4a5b2\") " pod="kube-system/coredns-6d4b75cb6d-pv48s"
	May 03 22:20:25 running-upgrade-916000 kubelet[11773]: I0503 22:20:25.616080   11773 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xng7q\" (UniqueName: \"kubernetes.io/projected/b7627b5f-9b5e-45ce-b4a3-234bd07f6fe7-kube-api-access-xng7q\") pod \"coredns-6d4b75cb6d-qfd8q\" (UID: \"b7627b5f-9b5e-45ce-b4a3-234bd07f6fe7\") " pod="kube-system/coredns-6d4b75cb6d-qfd8q"
	May 03 22:24:13 running-upgrade-916000 kubelet[11773]: I0503 22:24:13.797780   11773 scope.go:110] "RemoveContainer" containerID="c10faa2971c4a1ee46ba7fa8f6e91fd87d684d9ccec5ad923a752df495387d64"
	May 03 22:24:13 running-upgrade-916000 kubelet[11773]: I0503 22:24:13.811667   11773 scope.go:110] "RemoveContainer" containerID="d411f2d0da0d2bdb3e0b6b6d7859eec62cb5b7f8b72baea48e4c49cfc69ec54c"
	
	
	==> storage-provisioner [45a535211359] <==
	I0503 22:20:25.807141       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0503 22:20:25.814737       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0503 22:20:25.814752       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0503 22:20:25.818945       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0503 22:20:25.819556       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"b0ae0917-ac1b-44ae-aae6-f461ec218e8d", APIVersion:"v1", ResourceVersion:"369", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' running-upgrade-916000_d7888413-d76a-448e-bff1-c78c62595073 became leader
	I0503 22:20:25.819578       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_running-upgrade-916000_d7888413-d76a-448e-bff1-c78c62595073!
	I0503 22:20:25.919997       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_running-upgrade-916000_d7888413-d76a-448e-bff1-c78c62595073!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.APIServer}} -p running-upgrade-916000 -n running-upgrade-916000
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.APIServer}} -p running-upgrade-916000 -n running-upgrade-916000: exit status 2 (15.646770792s)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "running-upgrade-916000" apiserver is not running, skipping kubectl commands (state="Stopped")
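
	minikube status deliberately exits non-zero when a component is down, which is why the harness tags exit status 2 as "(may be ok)". A fuller status sketch using the same go-template mechanism as the command above (field names assumed from minikube's status output):

	    out/minikube-darwin-arm64 status -p running-upgrade-916000 --format='host:{{.Host}} kubelet:{{.Kubelet}} apiserver:{{.APIServer}}'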
helpers_test.go:175: Cleaning up "running-upgrade-916000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p running-upgrade-916000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-arm64 delete -p running-upgrade-916000: (1.131737792s)
--- FAIL: TestRunningBinaryUpgrade (592.79s)

TestKubernetesUpgrade (18.8s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-darwin-arm64 start -p kubernetes-upgrade-999000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:222: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubernetes-upgrade-999000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (9.814385291s)

-- stdout --
	* [kubernetes-upgrade-999000] minikube v1.33.0 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18793
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18793-7269/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18793-7269/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "kubernetes-upgrade-999000" primary control-plane node in "kubernetes-upgrade-999000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kubernetes-upgrade-999000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0503 15:17:52.580263    9744 out.go:291] Setting OutFile to fd 1 ...
	I0503 15:17:52.580386    9744 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0503 15:17:52.580389    9744 out.go:304] Setting ErrFile to fd 2...
	I0503 15:17:52.580391    9744 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0503 15:17:52.580510    9744 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18793-7269/.minikube/bin
	I0503 15:17:52.581599    9744 out.go:298] Setting JSON to false
	I0503 15:17:52.597852    9744 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4643,"bootTime":1714770029,"procs":499,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0503 15:17:52.597925    9744 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0503 15:17:52.603887    9744 out.go:177] * [kubernetes-upgrade-999000] minikube v1.33.0 on Darwin 14.4.1 (arm64)
	I0503 15:17:52.611924    9744 out.go:177]   - MINIKUBE_LOCATION=18793
	I0503 15:17:52.611985    9744 notify.go:220] Checking for updates...
	I0503 15:17:52.617877    9744 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18793-7269/kubeconfig
	I0503 15:17:52.620907    9744 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0503 15:17:52.623942    9744 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0503 15:17:52.626844    9744 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18793-7269/.minikube
	I0503 15:17:52.629901    9744 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0503 15:17:52.633109    9744 config.go:182] Loaded profile config "multinode-952000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0503 15:17:52.633180    9744 config.go:182] Loaded profile config "running-upgrade-916000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0503 15:17:52.633220    9744 driver.go:392] Setting default libvirt URI to qemu:///system
	I0503 15:17:52.636861    9744 out.go:177] * Using the qemu2 driver based on user configuration
	I0503 15:17:52.643857    9744 start.go:297] selected driver: qemu2
	I0503 15:17:52.643863    9744 start.go:901] validating driver "qemu2" against <nil>
	I0503 15:17:52.643869    9744 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0503 15:17:52.646025    9744 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0503 15:17:52.648889    9744 out.go:177] * Automatically selected the socket_vmnet network
	I0503 15:17:52.651986    9744 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0503 15:17:52.652035    9744 cni.go:84] Creating CNI manager for ""
	I0503 15:17:52.652042    9744 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0503 15:17:52.652085    9744 start.go:340] cluster config:
	{Name:kubernetes-upgrade-999000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-999000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0503 15:17:52.656249    9744 iso.go:125] acquiring lock: {Name:mk0ad6b95f1bb51c6112b561369aaae11c537b54 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0503 15:17:52.662888    9744 out.go:177] * Starting "kubernetes-upgrade-999000" primary control-plane node in "kubernetes-upgrade-999000" cluster
	I0503 15:17:52.666606    9744 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0503 15:17:52.666618    9744 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18793-7269/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0503 15:17:52.666625    9744 cache.go:56] Caching tarball of preloaded images
	I0503 15:17:52.666673    9744 preload.go:173] Found /Users/jenkins/minikube-integration/18793-7269/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0503 15:17:52.666679    9744 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0503 15:17:52.666728    9744 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18793-7269/.minikube/profiles/kubernetes-upgrade-999000/config.json ...
	I0503 15:17:52.666737    9744 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18793-7269/.minikube/profiles/kubernetes-upgrade-999000/config.json: {Name:mk3698da6ef42379a8d97b835f19d0479e78b640 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0503 15:17:52.666969    9744 start.go:360] acquireMachinesLock for kubernetes-upgrade-999000: {Name:mk75a6a65f97ac1ce21c567594b284ec6b0a9ff6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0503 15:17:52.667007    9744 start.go:364] duration metric: took 30.375µs to acquireMachinesLock for "kubernetes-upgrade-999000"
	I0503 15:17:52.667019    9744 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-999000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kubernetes
Config:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-999000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: D
isableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0503 15:17:52.667049    9744 start.go:125] createHost starting for "" (driver="qemu2")
	I0503 15:17:52.674674    9744 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0503 15:17:52.699998    9744 start.go:159] libmachine.API.Create for "kubernetes-upgrade-999000" (driver="qemu2")
	I0503 15:17:52.700023    9744 client.go:168] LocalClient.Create starting
	I0503 15:17:52.700091    9744 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18793-7269/.minikube/certs/ca.pem
	I0503 15:17:52.700122    9744 main.go:141] libmachine: Decoding PEM data...
	I0503 15:17:52.700132    9744 main.go:141] libmachine: Parsing certificate...
	I0503 15:17:52.700170    9744 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18793-7269/.minikube/certs/cert.pem
	I0503 15:17:52.700195    9744 main.go:141] libmachine: Decoding PEM data...
	I0503 15:17:52.700201    9744 main.go:141] libmachine: Parsing certificate...
	I0503 15:17:52.700538    9744 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18793-7269/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18793-7269/.minikube/cache/iso/arm64/minikube-v1.33.0-1714498396-18779-arm64.iso...
	I0503 15:17:52.873045    9744 main.go:141] libmachine: Creating SSH key...
	I0503 15:17:52.934697    9744 main.go:141] libmachine: Creating Disk image...
	I0503 15:17:52.934703    9744 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0503 15:17:52.934860    9744 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/kubernetes-upgrade-999000/disk.qcow2.raw /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/kubernetes-upgrade-999000/disk.qcow2
	I0503 15:17:52.948189    9744 main.go:141] libmachine: STDOUT: 
	I0503 15:17:52.948208    9744 main.go:141] libmachine: STDERR: 
	I0503 15:17:52.948265    9744 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/kubernetes-upgrade-999000/disk.qcow2 +20000M
	I0503 15:17:52.959613    9744 main.go:141] libmachine: STDOUT: Image resized.
	
	I0503 15:17:52.959635    9744 main.go:141] libmachine: STDERR: 
	I0503 15:17:52.959652    9744 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/kubernetes-upgrade-999000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/kubernetes-upgrade-999000/disk.qcow2
	I0503 15:17:52.959659    9744 main.go:141] libmachine: Starting QEMU VM...
	I0503 15:17:52.959692    9744 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/kubernetes-upgrade-999000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18793-7269/.minikube/machines/kubernetes-upgrade-999000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/kubernetes-upgrade-999000/qemu.pid -device virtio-net-pci,netdev=net0,mac=12:d4:c8:9d:4c:65 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/kubernetes-upgrade-999000/disk.qcow2
	I0503 15:17:52.961445    9744 main.go:141] libmachine: STDOUT: 
	I0503 15:17:52.961462    9744 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0503 15:17:52.961481    9744 client.go:171] duration metric: took 261.458541ms to LocalClient.Create
	I0503 15:17:54.963653    9744 start.go:128] duration metric: took 2.296626083s to createHost
	I0503 15:17:54.963738    9744 start.go:83] releasing machines lock for "kubernetes-upgrade-999000", held for 2.296773958s
	W0503 15:17:54.963819    9744 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0503 15:17:54.971246    9744 out.go:177] * Deleting "kubernetes-upgrade-999000" in qemu2 ...
	W0503 15:17:54.997970    9744 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0503 15:17:54.997998    9744 start.go:728] Will try again in 5 seconds ...
	I0503 15:17:59.998092    9744 start.go:360] acquireMachinesLock for kubernetes-upgrade-999000: {Name:mk75a6a65f97ac1ce21c567594b284ec6b0a9ff6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0503 15:17:59.998608    9744 start.go:364] duration metric: took 399.042µs to acquireMachinesLock for "kubernetes-upgrade-999000"
	I0503 15:17:59.998705    9744 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-999000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-999000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0503 15:17:59.998997    9744 start.go:125] createHost starting for "" (driver="qemu2")
	I0503 15:18:00.008668    9744 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0503 15:18:00.051075    9744 start.go:159] libmachine.API.Create for "kubernetes-upgrade-999000" (driver="qemu2")
	I0503 15:18:00.051165    9744 client.go:168] LocalClient.Create starting
	I0503 15:18:00.051360    9744 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18793-7269/.minikube/certs/ca.pem
	I0503 15:18:00.051449    9744 main.go:141] libmachine: Decoding PEM data...
	I0503 15:18:00.051467    9744 main.go:141] libmachine: Parsing certificate...
	I0503 15:18:00.051556    9744 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18793-7269/.minikube/certs/cert.pem
	I0503 15:18:00.051613    9744 main.go:141] libmachine: Decoding PEM data...
	I0503 15:18:00.051624    9744 main.go:141] libmachine: Parsing certificate...
	I0503 15:18:00.052212    9744 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18793-7269/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18793-7269/.minikube/cache/iso/arm64/minikube-v1.33.0-1714498396-18779-arm64.iso...
	I0503 15:18:00.209425    9744 main.go:141] libmachine: Creating SSH key...
	I0503 15:18:00.294493    9744 main.go:141] libmachine: Creating Disk image...
	I0503 15:18:00.294501    9744 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0503 15:18:00.294692    9744 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/kubernetes-upgrade-999000/disk.qcow2.raw /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/kubernetes-upgrade-999000/disk.qcow2
	I0503 15:18:00.307584    9744 main.go:141] libmachine: STDOUT: 
	I0503 15:18:00.307607    9744 main.go:141] libmachine: STDERR: 
	I0503 15:18:00.307661    9744 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/kubernetes-upgrade-999000/disk.qcow2 +20000M
	I0503 15:18:00.318529    9744 main.go:141] libmachine: STDOUT: Image resized.
	
	I0503 15:18:00.318545    9744 main.go:141] libmachine: STDERR: 
	I0503 15:18:00.318561    9744 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/kubernetes-upgrade-999000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/kubernetes-upgrade-999000/disk.qcow2
	I0503 15:18:00.318565    9744 main.go:141] libmachine: Starting QEMU VM...
	I0503 15:18:00.318593    9744 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/kubernetes-upgrade-999000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18793-7269/.minikube/machines/kubernetes-upgrade-999000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/kubernetes-upgrade-999000/qemu.pid -device virtio-net-pci,netdev=net0,mac=3a:ed:0f:7a:f8:df -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/kubernetes-upgrade-999000/disk.qcow2
	I0503 15:18:00.320272    9744 main.go:141] libmachine: STDOUT: 
	I0503 15:18:00.320290    9744 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0503 15:18:00.320304    9744 client.go:171] duration metric: took 269.139167ms to LocalClient.Create
	I0503 15:18:02.322435    9744 start.go:128] duration metric: took 2.3234565s to createHost
	I0503 15:18:02.322508    9744 start.go:83] releasing machines lock for "kubernetes-upgrade-999000", held for 2.323911417s
	W0503 15:18:02.322850    9744 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-999000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-999000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0503 15:18:02.330798    9744 out.go:177] 
	W0503 15:18:02.337026    9744 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0503 15:18:02.337047    9744 out.go:239] * 
	* 
	W0503 15:18:02.339249    9744 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0503 15:18:02.349031    9744 out.go:177] 

                                                
                                                
** /stderr **
version_upgrade_test.go:224: failed to start minikube HEAD with oldest k8s version: out/minikube-darwin-arm64 start -p kubernetes-upgrade-999000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=qemu2 : exit status 80
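
Every qemu2 start in this test dies at the same step: the driver shells out to /opt/socket_vmnet/bin/socket_vmnet_client, which cannot reach the unix socket at /var/run/socket_vmnet, so no VM is ever created; both the fresh create and the retry five seconds later hit the identical "Connection refused". The failing step can be reproduced outside minikube with a minimal Go probe. The socket path below is the SocketVMnetPath value from the machine config logged above; the probe itself is a sketch, not part of the test suite:

	// probe_socket_vmnet.go - dial the unix socket that the qemu2 driver
	// hands to socket_vmnet_client.
	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
		if err != nil {
			// Same "connection refused" surfaced as GUEST_PROVISION in the
			// log: the daemon is not running, or the socket file is stale.
			fmt.Println("socket_vmnet unreachable:", err)
			return
		}
		conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}

If the probe fails, the socket_vmnet daemon on the agent (which minikube's qemu2 networking docs suggest running as a root service) is down, and restoring it should clear the whole family of GUEST_PROVISION failures in this run.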
version_upgrade_test.go:227: (dbg) Run:  out/minikube-darwin-arm64 stop -p kubernetes-upgrade-999000
version_upgrade_test.go:227: (dbg) Done: out/minikube-darwin-arm64 stop -p kubernetes-upgrade-999000: (3.551397667s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-darwin-arm64 -p kubernetes-upgrade-999000 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p kubernetes-upgrade-999000 status --format={{.Host}}: exit status 7 (64.65975ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
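
Exit status 7 here lines up with the "Stopped" host state printed on stdout, which is why the harness notes it "may be ok" instead of failing outright. The --format={{.Host}} argument is a Go text/template evaluated against minikube's status structure; the sketch below shows the same mechanism with a stand-in struct (only the Host field, named in the template above, is taken from the log):

	package main

	import (
		"os"
		"text/template"
	)

	// Status stands in for minikube's status type; only Host is modeled here.
	type Status struct{ Host string }

	func main() {
		tmpl := template.Must(template.New("status").Parse("{{.Host}}"))
		tmpl.Execute(os.Stdout, Status{Host: "Stopped"}) // prints: Stopped
	}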
version_upgrade_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 start -p kubernetes-upgrade-999000 --memory=2200 --kubernetes-version=v1.30.0 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubernetes-upgrade-999000 --memory=2200 --kubernetes-version=v1.30.0 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (5.18264725s)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-999000] minikube v1.33.0 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18793
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18793-7269/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18793-7269/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "kubernetes-upgrade-999000" primary control-plane node in "kubernetes-upgrade-999000" cluster
	* Restarting existing qemu2 VM for "kubernetes-upgrade-999000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "kubernetes-upgrade-999000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0503 15:18:06.014902    9787 out.go:291] Setting OutFile to fd 1 ...
	I0503 15:18:06.015069    9787 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0503 15:18:06.015072    9787 out.go:304] Setting ErrFile to fd 2...
	I0503 15:18:06.015075    9787 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0503 15:18:06.015214    9787 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18793-7269/.minikube/bin
	I0503 15:18:06.016234    9787 out.go:298] Setting JSON to false
	I0503 15:18:06.032693    9787 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4657,"bootTime":1714770029,"procs":504,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0503 15:18:06.032754    9787 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0503 15:18:06.038258    9787 out.go:177] * [kubernetes-upgrade-999000] minikube v1.33.0 on Darwin 14.4.1 (arm64)
	I0503 15:18:06.046243    9787 out.go:177]   - MINIKUBE_LOCATION=18793
	I0503 15:18:06.050225    9787 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18793-7269/kubeconfig
	I0503 15:18:06.046309    9787 notify.go:220] Checking for updates...
	I0503 15:18:06.053181    9787 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0503 15:18:06.056270    9787 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0503 15:18:06.057581    9787 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18793-7269/.minikube
	I0503 15:18:06.060206    9787 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0503 15:18:06.063514    9787 config.go:182] Loaded profile config "kubernetes-upgrade-999000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I0503 15:18:06.063766    9787 driver.go:392] Setting default libvirt URI to qemu:///system
	I0503 15:18:06.068075    9787 out.go:177] * Using the qemu2 driver based on existing profile
	I0503 15:18:06.075278    9787 start.go:297] selected driver: qemu2
	I0503 15:18:06.075284    9787 start.go:901] validating driver "qemu2" against &{Name:kubernetes-upgrade-999000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-999000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0503 15:18:06.075354    9787 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0503 15:18:06.077543    9787 cni.go:84] Creating CNI manager for ""
	I0503 15:18:06.077561    9787 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0503 15:18:06.077581    9787 start.go:340] cluster config:
	{Name:kubernetes-upgrade-999000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:kubernetes-upgrade-999000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0503 15:18:06.081948    9787 iso.go:125] acquiring lock: {Name:mk0ad6b95f1bb51c6112b561369aaae11c537b54 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0503 15:18:06.089180    9787 out.go:177] * Starting "kubernetes-upgrade-999000" primary control-plane node in "kubernetes-upgrade-999000" cluster
	I0503 15:18:06.093261    9787 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0503 15:18:06.093280    9787 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18793-7269/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-arm64.tar.lz4
	I0503 15:18:06.093293    9787 cache.go:56] Caching tarball of preloaded images
	I0503 15:18:06.093367    9787 preload.go:173] Found /Users/jenkins/minikube-integration/18793-7269/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0503 15:18:06.093373    9787 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
	I0503 15:18:06.093447    9787 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18793-7269/.minikube/profiles/kubernetes-upgrade-999000/config.json ...
	I0503 15:18:06.093914    9787 start.go:360] acquireMachinesLock for kubernetes-upgrade-999000: {Name:mk75a6a65f97ac1ce21c567594b284ec6b0a9ff6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0503 15:18:06.093943    9787 start.go:364] duration metric: took 22.625µs to acquireMachinesLock for "kubernetes-upgrade-999000"
	I0503 15:18:06.093953    9787 start.go:96] Skipping create...Using existing machine configuration
	I0503 15:18:06.093960    9787 fix.go:54] fixHost starting: 
	I0503 15:18:06.094075    9787 fix.go:112] recreateIfNeeded on kubernetes-upgrade-999000: state=Stopped err=<nil>
	W0503 15:18:06.094082    9787 fix.go:138] unexpected machine state, will restart: <nil>
	I0503 15:18:06.102266    9787 out.go:177] * Restarting existing qemu2 VM for "kubernetes-upgrade-999000" ...
	I0503 15:18:06.106234    9787 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/kubernetes-upgrade-999000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18793-7269/.minikube/machines/kubernetes-upgrade-999000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/kubernetes-upgrade-999000/qemu.pid -device virtio-net-pci,netdev=net0,mac=3a:ed:0f:7a:f8:df -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/kubernetes-upgrade-999000/disk.qcow2
	I0503 15:18:06.108473    9787 main.go:141] libmachine: STDOUT: 
	I0503 15:18:06.108493    9787 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0503 15:18:06.108527    9787 fix.go:56] duration metric: took 14.566458ms for fixHost
	I0503 15:18:06.108533    9787 start.go:83] releasing machines lock for "kubernetes-upgrade-999000", held for 14.585875ms
	W0503 15:18:06.108542    9787 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0503 15:18:06.108588    9787 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0503 15:18:06.108592    9787 start.go:728] Will try again in 5 seconds ...
	I0503 15:18:11.110652    9787 start.go:360] acquireMachinesLock for kubernetes-upgrade-999000: {Name:mk75a6a65f97ac1ce21c567594b284ec6b0a9ff6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0503 15:18:11.110904    9787 start.go:364] duration metric: took 194.209µs to acquireMachinesLock for "kubernetes-upgrade-999000"
	I0503 15:18:11.110954    9787 start.go:96] Skipping create...Using existing machine configuration
	I0503 15:18:11.110989    9787 fix.go:54] fixHost starting: 
	I0503 15:18:11.111383    9787 fix.go:112] recreateIfNeeded on kubernetes-upgrade-999000: state=Stopped err=<nil>
	W0503 15:18:11.111403    9787 fix.go:138] unexpected machine state, will restart: <nil>
	I0503 15:18:11.118748    9787 out.go:177] * Restarting existing qemu2 VM for "kubernetes-upgrade-999000" ...
	I0503 15:18:11.122804    9787 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/kubernetes-upgrade-999000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18793-7269/.minikube/machines/kubernetes-upgrade-999000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/kubernetes-upgrade-999000/qemu.pid -device virtio-net-pci,netdev=net0,mac=3a:ed:0f:7a:f8:df -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/kubernetes-upgrade-999000/disk.qcow2
	I0503 15:18:11.127908    9787 main.go:141] libmachine: STDOUT: 
	I0503 15:18:11.127955    9787 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0503 15:18:11.128002    9787 fix.go:56] duration metric: took 17.039417ms for fixHost
	I0503 15:18:11.128015    9787 start.go:83] releasing machines lock for "kubernetes-upgrade-999000", held for 17.095917ms
	W0503 15:18:11.128113    9787 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-999000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-999000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0503 15:18:11.136675    9787 out.go:177] 
	W0503 15:18:11.139797    9787 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0503 15:18:11.139810    9787 out.go:239] * 
	* 
	W0503 15:18:11.140905    9787 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0503 15:18:11.155574    9787 out.go:177] 

                                                
                                                
** /stderr **
version_upgrade_test.go:245: failed to upgrade with newest k8s version. args: out/minikube-darwin-arm64 start -p kubernetes-upgrade-999000 --memory=2200 --kubernetes-version=v1.30.0 --alsologtostderr -v=1 --driver=qemu2  : exit status 80
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-999000 version --output=json
version_upgrade_test.go:248: (dbg) Non-zero exit: kubectl --context kubernetes-upgrade-999000 version --output=json: exit status 1 (41.3985ms)

                                                
                                                
** stderr ** 
	error: context "kubernetes-upgrade-999000" does not exist

                                                
                                                
** /stderr **
version_upgrade_test.go:250: error running kubectl: exit status 1
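
The kubectl failure is a downstream effect rather than a separate bug: provisioning aborted before any cluster existed, so no kubernetes-upgrade-999000 context was ever written to the kubeconfig, and the version check fails with "context ... does not exist" instead of reporting a version skew. A precondition check of that kind could use kubectl's own context listing; the helper below is a sketch (not from the test suite) built on the standard kubectl config get-contexts -o name subcommand:

	package main

	import (
		"bufio"
		"bytes"
		"fmt"
		"os/exec"
	)

	// contextExists reports whether the kubeconfig contains a context with
	// the given name - the condition the kubectl call above relies on.
	func contextExists(name string) (bool, error) {
		out, err := exec.Command("kubectl", "config", "get-contexts", "-o", "name").Output()
		if err != nil {
			return false, err
		}
		sc := bufio.NewScanner(bytes.NewReader(out))
		for sc.Scan() {
			if sc.Text() == name {
				return true, nil
			}
		}
		return false, sc.Err()
	}

	func main() {
		ok, err := contextExists("kubernetes-upgrade-999000")
		fmt.Println(ok, err)
	}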
panic.go:626: *** TestKubernetesUpgrade FAILED at 2024-05-03 15:18:11.208048 -0700 PDT m=+925.455036209
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p kubernetes-upgrade-999000 -n kubernetes-upgrade-999000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p kubernetes-upgrade-999000 -n kubernetes-upgrade-999000: exit status 7 (33.880083ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "kubernetes-upgrade-999000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "kubernetes-upgrade-999000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p kubernetes-upgrade-999000
--- FAIL: TestKubernetesUpgrade (18.80s)

                                                
                                    
TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (1.21s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current
* minikube v1.33.0 on darwin (arm64)
- MINIKUBE_LOCATION=18793
- KUBECONFIG=/Users/jenkins/minikube-integration/18793-7269/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current4132523972/001
* Using the hyperkit driver based on user configuration

                                                
                                                
X Exiting due to DRV_UNSUPPORTED_OS: The driver 'hyperkit' is not supported on darwin/arm64

                                                
                                                
driver_install_or_update_test.go:209: failed to run minikube. got: exit status 56
--- FAIL: TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (1.21s)
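
This subtest's failure is environmental: the hyperkit driver exists only for darwin/amd64, and this job ran on an arm64 (Apple Silicon) agent, so DRV_UNSUPPORTED_OS and exit status 56 are the only possible outcome. A guard of the following shape would skip rather than fail on unsupported platforms; it is an illustrative sketch, not minikube's actual test code:

	// guard_test.go - illustrative only.
	package driver_test

	import (
		"runtime"
		"testing"
	)

	func TestHyperkitUpgrade(t *testing.T) {
		// hyperkit can only run on darwin/amd64; on this arm64 agent the
		// unguarded test can only fail with DRV_UNSUPPORTED_OS.
		if runtime.GOOS != "darwin" || runtime.GOARCH != "amd64" {
			t.Skipf("hyperkit requires darwin/amd64, got %s/%s", runtime.GOOS, runtime.GOARCH)
		}
	}

The same guard would cover the upgrade-v1.2.0-to-current subtest below, which fails identically.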

                                                
                                    
TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (0.97s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current
* minikube v1.33.0 on darwin (arm64)
- MINIKUBE_LOCATION=18793
- KUBECONFIG=/Users/jenkins/minikube-integration/18793-7269/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current1840256060/001
* Using the hyperkit driver based on user configuration

                                                
                                                
X Exiting due to DRV_UNSUPPORTED_OS: The driver 'hyperkit' is not supported on darwin/arm64

                                                
                                                
driver_install_or_update_test.go:209: failed to run minikube. got: exit status 56
--- FAIL: TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (0.97s)

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (576.51s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.26.0.2748176339 start -p stopped-upgrade-139000 --memory=2200 --vm-driver=qemu2 
version_upgrade_test.go:183: (dbg) Done: /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.26.0.2748176339 start -p stopped-upgrade-139000 --memory=2200 --vm-driver=qemu2 : (41.727583708s)
version_upgrade_test.go:192: (dbg) Run:  /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.26.0.2748176339 -p stopped-upgrade-139000 stop
version_upgrade_test.go:192: (dbg) Done: /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.26.0.2748176339 -p stopped-upgrade-139000 stop: (12.09445675s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-darwin-arm64 start -p stopped-upgrade-139000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:198: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p stopped-upgrade-139000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (8m42.60415s)

                                                
                                                
-- stdout --
	* [stopped-upgrade-139000] minikube v1.33.0 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18793
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18793-7269/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18793-7269/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.30.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.0
	* Using the qemu2 driver based on existing profile
	* Starting "stopped-upgrade-139000" primary control-plane node in "stopped-upgrade-139000" cluster
	* Restarting existing qemu2 VM for "stopped-upgrade-139000" ...
	* Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: storage-provisioner
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0503 15:19:06.195579    9866 out.go:291] Setting OutFile to fd 1 ...
	I0503 15:19:06.195708    9866 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0503 15:19:06.195712    9866 out.go:304] Setting ErrFile to fd 2...
	I0503 15:19:06.195714    9866 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0503 15:19:06.195857    9866 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18793-7269/.minikube/bin
	I0503 15:19:06.197091    9866 out.go:298] Setting JSON to false
	I0503 15:19:06.215587    9866 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4717,"bootTime":1714770029,"procs":509,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0503 15:19:06.215656    9866 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0503 15:19:06.228425    9866 out.go:177] * [stopped-upgrade-139000] minikube v1.33.0 on Darwin 14.4.1 (arm64)
	I0503 15:19:06.236859    9866 out.go:177]   - MINIKUBE_LOCATION=18793
	I0503 15:19:06.241863    9866 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18793-7269/kubeconfig
	I0503 15:19:06.236900    9866 notify.go:220] Checking for updates...
	I0503 15:19:06.247719    9866 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0503 15:19:06.250822    9866 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0503 15:19:06.253845    9866 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18793-7269/.minikube
	I0503 15:19:06.256843    9866 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0503 15:19:06.260193    9866 config.go:182] Loaded profile config "stopped-upgrade-139000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0503 15:19:06.263854    9866 out.go:177] * Kubernetes 1.30.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.0
	I0503 15:19:06.266821    9866 driver.go:392] Setting default libvirt URI to qemu:///system
	I0503 15:19:06.270805    9866 out.go:177] * Using the qemu2 driver based on existing profile
	I0503 15:19:06.276871    9866 start.go:297] selected driver: qemu2
	I0503 15:19:06.276880    9866 start.go:901] validating driver "qemu2" against &{Name:stopped-upgrade-139000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:51403 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-139000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0503 15:19:06.276967    9866 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0503 15:19:06.279673    9866 cni.go:84] Creating CNI manager for ""
	I0503 15:19:06.279693    9866 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0503 15:19:06.279724    9866 start.go:340] cluster config:
	{Name:stopped-upgrade-139000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:51403 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-139000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0503 15:19:06.279782    9866 iso.go:125] acquiring lock: {Name:mk0ad6b95f1bb51c6112b561369aaae11c537b54 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0503 15:19:06.285847    9866 out.go:177] * Starting "stopped-upgrade-139000" primary control-plane node in "stopped-upgrade-139000" cluster
	I0503 15:19:06.289760    9866 preload.go:132] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0503 15:19:06.289778    9866 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18793-7269/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4
	I0503 15:19:06.289790    9866 cache.go:56] Caching tarball of preloaded images
	I0503 15:19:06.289849    9866 preload.go:173] Found /Users/jenkins/minikube-integration/18793-7269/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0503 15:19:06.289854    9866 cache.go:59] Finished verifying existence of preloaded tar for v1.24.1 on docker
	I0503 15:19:06.289915    9866 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18793-7269/.minikube/profiles/stopped-upgrade-139000/config.json ...
	I0503 15:19:06.290296    9866 start.go:360] acquireMachinesLock for stopped-upgrade-139000: {Name:mk75a6a65f97ac1ce21c567594b284ec6b0a9ff6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0503 15:19:06.290344    9866 start.go:364] duration metric: took 42.667µs to acquireMachinesLock for "stopped-upgrade-139000"
	I0503 15:19:06.290355    9866 start.go:96] Skipping create...Using existing machine configuration
	I0503 15:19:06.290360    9866 fix.go:54] fixHost starting: 
	I0503 15:19:06.290484    9866 fix.go:112] recreateIfNeeded on stopped-upgrade-139000: state=Stopped err=<nil>
	W0503 15:19:06.290495    9866 fix.go:138] unexpected machine state, will restart: <nil>
	I0503 15:19:06.299231    9866 out.go:177] * Restarting existing qemu2 VM for "stopped-upgrade-139000" ...
	I0503 15:19:06.303958    9866 main.go:141] libmachine: executing: qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/8.2.1/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/stopped-upgrade-139000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18793-7269/.minikube/machines/stopped-upgrade-139000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/stopped-upgrade-139000/qemu.pid -nic user,model=virtio,hostfwd=tcp::51368-:22,hostfwd=tcp::51369-:2376,hostname=stopped-upgrade-139000 -daemonize /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/stopped-upgrade-139000/disk.qcow2
	I0503 15:19:06.350258    9866 main.go:141] libmachine: STDOUT: 
	I0503 15:19:06.350298    9866 main.go:141] libmachine: STDERR: 
	I0503 15:19:06.350303    9866 main.go:141] libmachine: Waiting for VM to start (ssh -p 51368 docker@127.0.0.1)...
	I0503 15:19:26.494467    9866 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18793-7269/.minikube/profiles/stopped-upgrade-139000/config.json ...
	I0503 15:19:26.495136    9866 machine.go:94] provisionDockerMachine start ...
	I0503 15:19:26.495341    9866 main.go:141] libmachine: Using SSH client type: native
	I0503 15:19:26.495788    9866 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100bfdc80] 0x100c004e0 <nil>  [] 0s} localhost 51368 <nil> <nil>}
	I0503 15:19:26.495802    9866 main.go:141] libmachine: About to run SSH command:
	hostname
	I0503 15:19:26.573495    9866 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0503 15:19:26.573533    9866 buildroot.go:166] provisioning hostname "stopped-upgrade-139000"
	I0503 15:19:26.573647    9866 main.go:141] libmachine: Using SSH client type: native
	I0503 15:19:26.573883    9866 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100bfdc80] 0x100c004e0 <nil>  [] 0s} localhost 51368 <nil> <nil>}
	I0503 15:19:26.573897    9866 main.go:141] libmachine: About to run SSH command:
	sudo hostname stopped-upgrade-139000 && echo "stopped-upgrade-139000" | sudo tee /etc/hostname
	I0503 15:19:26.642200    9866 main.go:141] libmachine: SSH cmd err, output: <nil>: stopped-upgrade-139000
	
	I0503 15:19:26.642265    9866 main.go:141] libmachine: Using SSH client type: native
	I0503 15:19:26.642421    9866 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100bfdc80] 0x100c004e0 <nil>  [] 0s} localhost 51368 <nil> <nil>}
	I0503 15:19:26.642435    9866 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sstopped-upgrade-139000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 stopped-upgrade-139000/g' /etc/hosts;
				else 
					echo '127.0.1.1 stopped-upgrade-139000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0503 15:19:26.698439    9866 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0503 15:19:26.698452    9866 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/18793-7269/.minikube CaCertPath:/Users/jenkins/minikube-integration/18793-7269/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/18793-7269/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/18793-7269/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/18793-7269/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/18793-7269/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/18793-7269/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/18793-7269/.minikube}
	I0503 15:19:26.698461    9866 buildroot.go:174] setting up certificates
	I0503 15:19:26.698479    9866 provision.go:84] configureAuth start
	I0503 15:19:26.698484    9866 provision.go:143] copyHostCerts
	I0503 15:19:26.698555    9866 exec_runner.go:144] found /Users/jenkins/minikube-integration/18793-7269/.minikube/key.pem, removing ...
	I0503 15:19:26.698561    9866 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18793-7269/.minikube/key.pem
	I0503 15:19:26.698654    9866 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18793-7269/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/18793-7269/.minikube/key.pem (1675 bytes)
	I0503 15:19:26.698816    9866 exec_runner.go:144] found /Users/jenkins/minikube-integration/18793-7269/.minikube/ca.pem, removing ...
	I0503 15:19:26.698819    9866 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18793-7269/.minikube/ca.pem
	I0503 15:19:26.698862    9866 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18793-7269/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/18793-7269/.minikube/ca.pem (1078 bytes)
	I0503 15:19:26.698953    9866 exec_runner.go:144] found /Users/jenkins/minikube-integration/18793-7269/.minikube/cert.pem, removing ...
	I0503 15:19:26.698956    9866 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18793-7269/.minikube/cert.pem
	I0503 15:19:26.698994    9866 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18793-7269/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/18793-7269/.minikube/cert.pem (1123 bytes)
	I0503 15:19:26.699077    9866 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/18793-7269/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/18793-7269/.minikube/certs/ca-key.pem org=jenkins.stopped-upgrade-139000 san=[127.0.0.1 localhost minikube stopped-upgrade-139000]
	I0503 15:19:26.792225    9866 provision.go:177] copyRemoteCerts
	I0503 15:19:26.792258    9866 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0503 15:19:26.792266    9866 sshutil.go:53] new ssh client: &{IP:localhost Port:51368 SSHKeyPath:/Users/jenkins/minikube-integration/18793-7269/.minikube/machines/stopped-upgrade-139000/id_rsa Username:docker}
	I0503 15:19:26.821044    9866 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18793-7269/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0503 15:19:26.827925    9866 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0503 15:19:26.834381    9866 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0503 15:19:26.841396    9866 provision.go:87] duration metric: took 142.912167ms to configureAuth
	I0503 15:19:26.841408    9866 buildroot.go:189] setting minikube options for container-runtime
	I0503 15:19:26.841502    9866 config.go:182] Loaded profile config "stopped-upgrade-139000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0503 15:19:26.841537    9866 main.go:141] libmachine: Using SSH client type: native
	I0503 15:19:26.841628    9866 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100bfdc80] 0x100c004e0 <nil>  [] 0s} localhost 51368 <nil> <nil>}
	I0503 15:19:26.841636    9866 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0503 15:19:26.895976    9866 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0503 15:19:26.895988    9866 buildroot.go:70] root file system type: tmpfs
	I0503 15:19:26.896043    9866 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0503 15:19:26.896086    9866 main.go:141] libmachine: Using SSH client type: native
	I0503 15:19:26.896193    9866 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100bfdc80] 0x100c004e0 <nil>  [] 0s} localhost 51368 <nil> <nil>}
	I0503 15:19:26.896227    9866 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0503 15:19:26.953336    9866 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0503 15:19:26.953395    9866 main.go:141] libmachine: Using SSH client type: native
	I0503 15:19:26.953494    9866 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100bfdc80] 0x100c004e0 <nil>  [] 0s} localhost 51368 <nil> <nil>}
	I0503 15:19:26.953502    9866 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0503 15:19:27.302340    9866 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0503 15:19:27.302353    9866 machine.go:97] duration metric: took 807.225666ms to provisionDockerMachine
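
The unit file written above relies on a standard systemd idiom: an empty `ExecStart=` first clears any ExecStart inherited elsewhere, so the replacement command that follows is the only one systemd sees (otherwise the unit is rejected with the "more than one ExecStart" error quoted in the file's own comments). A rough sketch of the same override pattern done as a drop-in rather than by replacing the whole unit; the drop-in path and dockerd flags here are illustrative, not minikube's:

  # Clear the inherited ExecStart, then supply a replacement (hypothetical flags).
  sudo mkdir -p /etc/systemd/system/docker.service.d
  printf '[Service]\nExecStart=\nExecStart=/usr/bin/dockerd -H unix:///var/run/docker.sock\n' \
    | sudo tee /etc/systemd/system/docker.service.d/10-override.conf >/dev/null
  sudo systemctl daemon-reload && sudo systemctl restart docker
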
	I0503 15:19:27.302360    9866 start.go:293] postStartSetup for "stopped-upgrade-139000" (driver="qemu2")
	I0503 15:19:27.302366    9866 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0503 15:19:27.302413    9866 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0503 15:19:27.302421    9866 sshutil.go:53] new ssh client: &{IP:localhost Port:51368 SSHKeyPath:/Users/jenkins/minikube-integration/18793-7269/.minikube/machines/stopped-upgrade-139000/id_rsa Username:docker}
	I0503 15:19:27.330568    9866 ssh_runner.go:195] Run: cat /etc/os-release
	I0503 15:19:27.331888    9866 info.go:137] Remote host: Buildroot 2021.02.12
	I0503 15:19:27.331895    9866 filesync.go:126] Scanning /Users/jenkins/minikube-integration/18793-7269/.minikube/addons for local assets ...
	I0503 15:19:27.331970    9866 filesync.go:126] Scanning /Users/jenkins/minikube-integration/18793-7269/.minikube/files for local assets ...
	I0503 15:19:27.332063    9866 filesync.go:149] local asset: /Users/jenkins/minikube-integration/18793-7269/.minikube/files/etc/ssl/certs/77682.pem -> 77682.pem in /etc/ssl/certs
	I0503 15:19:27.332152    9866 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0503 15:19:27.334587    9866 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18793-7269/.minikube/files/etc/ssl/certs/77682.pem --> /etc/ssl/certs/77682.pem (1708 bytes)
	I0503 15:19:27.341393    9866 start.go:296] duration metric: took 39.029291ms for postStartSetup
	I0503 15:19:27.341407    9866 fix.go:56] duration metric: took 21.051530375s for fixHost
	I0503 15:19:27.341448    9866 main.go:141] libmachine: Using SSH client type: native
	I0503 15:19:27.341554    9866 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100bfdc80] 0x100c004e0 <nil>  [] 0s} localhost 51368 <nil> <nil>}
	I0503 15:19:27.341563    9866 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0503 15:19:27.392243    9866 main.go:141] libmachine: SSH cmd err, output: <nil>: 1714774767.012595962
	
	I0503 15:19:27.392251    9866 fix.go:216] guest clock: 1714774767.012595962
	I0503 15:19:27.392255    9866 fix.go:229] Guest: 2024-05-03 15:19:27.012595962 -0700 PDT Remote: 2024-05-03 15:19:27.34141 -0700 PDT m=+21.171203459 (delta=-328.814038ms)
	I0503 15:19:27.392268    9866 fix.go:200] guest clock delta is within tolerance: -328.814038ms
	I0503 15:19:27.392271    9866 start.go:83] releasing machines lock for "stopped-upgrade-139000", held for 21.102405333s
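
The fix.go lines above sanity-check the guest clock: `date +%s.%N` runs inside the VM over SSH, the result is compared with the host clock, and the roughly -329 ms delta is accepted as within tolerance. Approximately the same check by hand (assumes GNU date on both ends and a placeholder `user@guest` address):

  guest=$(ssh user@guest 'date +%s.%N')  # VM clock, fractional seconds
  host=$(date +%s.%N)                    # host clock
  echo "delta: $(echo "$guest - $host" | bc) s"
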
	I0503 15:19:27.392332    9866 ssh_runner.go:195] Run: cat /version.json
	I0503 15:19:27.392334    9866 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0503 15:19:27.392340    9866 sshutil.go:53] new ssh client: &{IP:localhost Port:51368 SSHKeyPath:/Users/jenkins/minikube-integration/18793-7269/.minikube/machines/stopped-upgrade-139000/id_rsa Username:docker}
	I0503 15:19:27.392351    9866 sshutil.go:53] new ssh client: &{IP:localhost Port:51368 SSHKeyPath:/Users/jenkins/minikube-integration/18793-7269/.minikube/machines/stopped-upgrade-139000/id_rsa Username:docker}
	W0503 15:19:27.392928    9866 sshutil.go:64] dial failure (will retry): dial tcp [::1]:51368: connect: connection refused
	I0503 15:19:27.392950    9866 retry.go:31] will retry after 326.036256ms: dial tcp [::1]:51368: connect: connection refused
	W0503 15:19:27.418408    9866 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0503 15:19:27.418458    9866 ssh_runner.go:195] Run: systemctl --version
	I0503 15:19:27.420354    9866 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0503 15:19:27.421988    9866 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0503 15:19:27.422026    9866 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0503 15:19:27.425223    9866 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0503 15:19:27.429521    9866 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
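
The two find/sed passes above normalize any pre-existing CNI configs: IPv6 `dst`/`subnet` entries are dropped and the pod subnet is pinned to 10.244.0.0/16. Applied to just the podman bridge config found here, the rewrite reduces to:

  # Pin subnet and gateway of the podman bridge CNI config to the pod CIDR.
  sudo sed -i -r \
    -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' \
    -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' \
    /etc/cni/net.d/87-podman-bridge.conflist
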
	I0503 15:19:27.429528    9866 start.go:494] detecting cgroup driver to use...
	I0503 15:19:27.429605    9866 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0503 15:19:27.435983    9866 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.7"|' /etc/containerd/config.toml"
	I0503 15:19:27.438782    9866 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0503 15:19:27.441797    9866 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0503 15:19:27.441822    9866 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0503 15:19:27.445202    9866 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0503 15:19:27.448000    9866 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0503 15:19:27.450676    9866 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0503 15:19:27.453877    9866 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0503 15:19:27.457063    9866 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0503 15:19:27.460229    9866 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0503 15:19:27.462807    9866 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0503 15:19:27.465943    9866 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0503 15:19:27.468930    9866 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0503 15:19:27.471557    9866 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0503 15:19:27.556410    9866 ssh_runner.go:195] Run: sudo systemctl restart containerd
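
Before settling on a runtime, minikube points crictl at containerd's socket and rewrites /etc/containerd/config.toml so that containerd, if used, would run with the cgroupfs driver and the v2 runc shim. The essential edits, condensed into a sketch (assumes the config file already exists):

  sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml
  sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml
  sudo systemctl daemon-reload && sudo systemctl restart containerd
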
	I0503 15:19:27.563798    9866 start.go:494] detecting cgroup driver to use...
	I0503 15:19:27.563866    9866 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0503 15:19:27.574981    9866 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0503 15:19:27.585163    9866 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0503 15:19:27.596455    9866 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0503 15:19:27.603797    9866 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0503 15:19:27.608493    9866 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0503 15:19:27.649801    9866 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0503 15:19:27.654882    9866 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0503 15:19:27.660192    9866 ssh_runner.go:195] Run: which cri-dockerd
	I0503 15:19:27.661417    9866 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0503 15:19:27.664459    9866 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0503 15:19:27.669635    9866 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0503 15:19:27.753729    9866 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0503 15:19:27.831728    9866 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0503 15:19:27.831787    9866 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0503 15:19:27.838452    9866 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0503 15:19:27.915250    9866 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0503 15:19:29.074809    9866 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.159569458s)
	I0503 15:19:29.074868    9866 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0503 15:19:29.079819    9866 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0503 15:19:29.085821    9866 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0503 15:19:29.090106    9866 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0503 15:19:29.171625    9866 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0503 15:19:29.257119    9866 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0503 15:19:29.335212    9866 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0503 15:19:29.340992    9866 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0503 15:19:29.345174    9866 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0503 15:19:29.421930    9866 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
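
Because this profile's runtime is docker, the competing runtimes are shut down and the docker + cri-dockerd pair is enabled instead; each stop is guarded by `systemctl is-active --quiet`, so it is a no-op when the service is already down. The same sequence as a small script (service names taken from the log above):

  for svc in containerd crio; do
    if sudo systemctl is-active --quiet "$svc"; then
      sudo systemctl stop -f "$svc"
    fi
  done
  sudo systemctl unmask docker.service
  sudo systemctl enable --now docker.socket cri-docker.socket
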
	I0503 15:19:29.460753    9866 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0503 15:19:29.460846    9866 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0503 15:19:29.463075    9866 start.go:562] Will wait 60s for crictl version
	I0503 15:19:29.463121    9866 ssh_runner.go:195] Run: which crictl
	I0503 15:19:29.464494    9866 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0503 15:19:29.478436    9866 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.16
	RuntimeApiVersion:  1.41.0
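
With cri-dockerd's socket up, the runtime is probed through the CRI; the crictl output above reports docker 20.10.16 behind CRI API v1.41.0. An equivalent manual probe, naming the endpoint explicitly instead of relying on /etc/crictl.yaml:

  sudo crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock version
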
	I0503 15:19:29.478508    9866 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0503 15:19:29.494211    9866 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0503 15:19:29.515988    9866 out.go:204] * Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	I0503 15:19:29.516107    9866 ssh_runner.go:195] Run: grep 10.0.2.2	host.minikube.internal$ /etc/hosts
	I0503 15:19:29.517381    9866 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "10.0.2.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
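
The one-liner above refreshes the host.minikube.internal entry: it filters any old line out of /etc/hosts, appends the new mapping, and copies the temp file back into place. Generalized into a small helper (the function name is illustrative, not from minikube):

  set_host_entry() {  # usage: set_host_entry 10.0.2.2 host.minikube.internal
    { grep -v $'\t'"$2"'$' /etc/hosts; printf '%s\t%s\n' "$1" "$2"; } > "/tmp/hosts.$$"
    sudo cp "/tmp/hosts.$$" /etc/hosts && rm -f "/tmp/hosts.$$"
  }
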
	I0503 15:19:29.522485    9866 kubeadm.go:877] updating cluster {Name:stopped-upgrade-139000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:51403 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-139000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s} ...

	I0503 15:19:29.522531    9866 preload.go:132] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0503 15:19:29.522569    9866 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0503 15:19:29.533095    9866 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0503 15:19:29.533104    9866 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0503 15:19:29.533155    9866 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0503 15:19:29.536649    9866 ssh_runner.go:195] Run: which lz4
	I0503 15:19:29.537924    9866 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0503 15:19:29.539154    9866 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0503 15:19:29.539165    9866 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18793-7269/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (359514331 bytes)
	I0503 15:19:30.249371    9866 docker.go:649] duration metric: took 711.491167ms to copy over tarball
	I0503 15:19:30.249430    9866 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0503 15:19:31.393770    9866 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.144353125s)
	I0503 15:19:31.393783    9866 ssh_runner.go:146] rm: /preloaded.tar.lz4
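
Since the required images were not in docker's store, the preloaded tarball is copied over and unpacked straight into /var, restoring docker's image store and minikube's cached state in one pass; docker is then restarted so it picks up the restored store. The extraction step on its own:

  # Unpack the preload (lz4-compressed tar) into /var, preserving xattrs.
  sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
  sudo rm /preloaded.tar.lz4
  sudo systemctl restart docker
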
	I0503 15:19:31.409704    9866 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0503 15:19:31.413450    9866 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2523 bytes)
	I0503 15:19:31.418656    9866 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0503 15:19:31.496488    9866 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0503 15:19:33.109419    9866 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.612949542s)
	I0503 15:19:33.109519    9866 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0503 15:19:33.122212    9866 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	<none>:<none>
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0503 15:19:33.122223    9866 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0503 15:19:33.122229    9866 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.1 registry.k8s.io/kube-controller-manager:v1.24.1 registry.k8s.io/kube-scheduler:v1.24.1 registry.k8s.io/kube-proxy:v1.24.1 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0503 15:19:33.133818    9866 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0503 15:19:33.133852    9866 image.go:134] retrieving image: registry.k8s.io/pause:3.7
	I0503 15:19:33.133897    9866 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0503 15:19:33.133975    9866 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.24.1
	I0503 15:19:33.134041    9866 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.24.1
	I0503 15:19:33.134106    9866 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.24.1
	I0503 15:19:33.134224    9866 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0503 15:19:33.134252    9866 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0503 15:19:33.143188    9866 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0503 15:19:33.144588    9866 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0503 15:19:33.144675    9866 image.go:177] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0503 15:19:33.148042    9866 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.1
	I0503 15:19:33.148063    9866 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.1
	I0503 15:19:33.148153    9866 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.1
	I0503 15:19:33.148173    9866 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0503 15:19:33.148487    9866 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	W0503 15:19:33.921053    9866 image.go:265] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0503 15:19:33.921632    9866 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0503 15:19:33.958187    9866 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I0503 15:19:33.958236    9866 docker.go:337] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0503 15:19:33.958336    9866 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0503 15:19:33.982706    9866 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18793-7269/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0503 15:19:33.982842    9866 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0503 15:19:33.984706    9866 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I0503 15:19:33.984723    9866 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18793-7269/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I0503 15:19:34.010324    9866 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I0503 15:19:34.011109    9866 docker.go:304] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0503 15:19:34.011116    9866 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load"
	I0503 15:19:34.021306    9866 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550" in container runtime
	I0503 15:19:34.021329    9866 docker.go:337] Removing image: registry.k8s.io/pause:3.7
	I0503 15:19:34.021386    9866 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.7
	I0503 15:19:34.049399    9866 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.1
	I0503 15:19:34.082391    9866 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.1
	I0503 15:19:34.125346    9866 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.1
	I0503 15:19:34.185898    9866 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.1
	I0503 15:19:34.197386    9866 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	W0503 15:19:34.205651    9866 image.go:265] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0503 15:19:34.205739    9866 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0503 15:19:34.282792    9866 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/18793-7269/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0503 15:19:34.282827    9866 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18793-7269/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0503 15:19:34.282853    9866 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.1" does not exist at hash "f61bbe9259d7caa580deb6c8e4bfd1780c7b5887efe3aa3adc7cc74f68a27c1b" in container runtime
	I0503 15:19:34.282870    9866 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0503 15:19:34.282881    9866 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.1" does not exist at hash "7c5896a75862a8bc252122185a929cec1393db2c525f6440137d4fbf46bbf6f9" in container runtime
	I0503 15:19:34.282893    9866 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.24.1
	I0503 15:19:34.282920    9866 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.24.1
	I0503 15:19:34.282924    9866 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.24.1
	I0503 15:19:34.282930    9866 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.1" does not exist at hash "000c19baf6bba51ff7ae5449f4c7a16d9190cef0263f58070dbf62cea9c4982f" in container runtime
	I0503 15:19:34.282943    9866 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.24.1
	I0503 15:19:34.282933    9866 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I0503 15:19:34.282947    9866 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.1" needs transfer: "registry.k8s.io/kube-proxy:v1.24.1" does not exist at hash "fcbd620bbac080b658602c597709b377cb2b3fec134a097a27f94cba9b2ed2fa" in container runtime
	I0503 15:19:34.282958    9866 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.24.1
	I0503 15:19:34.282963    9866 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "a9a710bb96df080e6b9c720eb85dc5b832ff84abf77263548d74fedec6466a5a" in container runtime
	I0503 15:19:34.282970    9866 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.24.1
	I0503 15:19:34.282973    9866 docker.go:337] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0503 15:19:34.282974    9866 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "6af7f860a8197bfa3fdb7dec2061aa33870253e87a1e91c492d55b8a4fd38d14" in container runtime
	I0503 15:19:34.282983    9866 docker.go:337] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0503 15:19:34.282989    9866 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.5.3-0
	I0503 15:19:34.282976    9866 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.24.1
	I0503 15:19:34.283003    9866 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0503 15:19:34.318875    9866 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18793-7269/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0503 15:19:34.318917    9866 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18793-7269/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1
	I0503 15:19:34.318934    9866 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18793-7269/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I0503 15:19:34.318941    9866 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18793-7269/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1
	I0503 15:19:34.318981    9866 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18793-7269/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1
	I0503 15:19:34.318998    9866 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18793-7269/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1
	I0503 15:19:34.319003    9866 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I0503 15:19:34.319061    9866 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.7: stat -c "%s %y" /var/lib/minikube/images/pause_3.7: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.7': No such file or directory
	I0503 15:19:34.319070    9866 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18793-7269/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 --> /var/lib/minikube/images/pause_3.7 (268288 bytes)
	I0503 15:19:34.321344    9866 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.8.6: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.8.6': No such file or directory
	I0503 15:19:34.321362    9866 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18793-7269/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 --> /var/lib/minikube/images/coredns_v1.8.6 (12318720 bytes)
	I0503 15:19:34.334131    9866 docker.go:304] Loading image: /var/lib/minikube/images/pause_3.7
	I0503 15:19:34.334146    9866 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.7 | docker load"
	I0503 15:19:34.379399    9866 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/18793-7269/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 from cache
	I0503 15:19:34.379424    9866 docker.go:304] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0503 15:19:34.379432    9866 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.8.6 | docker load"
	I0503 15:19:34.420318    9866 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/18793-7269/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I0503 15:19:34.420354    9866 cache_images.go:92] duration metric: took 1.298148917s to LoadCachedImages
	W0503 15:19:34.420396    9866 out.go:239] X Unable to load cached images: loading cached images: stat /Users/jenkins/minikube-integration/18793-7269/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1: no such file or directory
	X Unable to load cached images: loading cached images: stat /Users/jenkins/minikube-integration/18793-7269/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1: no such file or directory
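
Each image flagged as "needs transfer" follows the same cycle above: the stale tag is removed with `docker rmi`, the cached tarball is scp'd to /var/lib/minikube/images, and the file is streamed into the daemon. Only storage-provisioner, pause, and coredns complete here; the warning shows the kube-proxy tarball is missing from the host cache, which aborts the whole LoadCachedImages pass. The load step in isolation:

  sudo cat /var/lib/minikube/images/pause_3.7 | docker load
  docker image inspect --format '{{.Id}}' registry.k8s.io/pause:3.7
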
	I0503 15:19:34.420402    9866 kubeadm.go:928] updating node { 10.0.2.15 8443 v1.24.1 docker true true} ...
	I0503 15:19:34.420451    9866 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=stopped-upgrade-139000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.0.2.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-139000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0503 15:19:34.420517    9866 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0503 15:19:34.433755    9866 cni.go:84] Creating CNI manager for ""
	I0503 15:19:34.433767    9866 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0503 15:19:34.433772    9866 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0503 15:19:34.433780    9866 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.0.2.15 APIServerPort:8443 KubernetesVersion:v1.24.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:stopped-upgrade-139000 NodeName:stopped-upgrade-139000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.0.2.15"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.0.2.15 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0503 15:19:34.433839    9866 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.0.2.15
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "stopped-upgrade-139000"
	  kubeletExtraArgs:
	    node-ip: 10.0.2.15
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.0.2.15"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0503 15:19:34.433896    9866 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.1
	I0503 15:19:34.437266    9866 binaries.go:44] Found k8s binaries, skipping transfer
	I0503 15:19:34.437291    9866 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0503 15:19:34.440137    9866 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I0503 15:19:34.445128    9866 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0503 15:19:34.450475    9866 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2096 bytes)
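
The generated kubeadm config pins `cgroupDriver: cgroupfs` to match what `docker info --format {{.CgroupDriver}}` reported just before it was rendered; if the kubelet and the runtime disagree on the cgroup driver, pods fail to start. A quick consistency check corresponding to the files written above:

  docker info --format '{{.CgroupDriver}}'          # the runtime's driver
  grep cgroupDriver /var/tmp/minikube/kubeadm.yaml  # the kubelet's driver
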
	I0503 15:19:34.456003    9866 ssh_runner.go:195] Run: grep 10.0.2.15	control-plane.minikube.internal$ /etc/hosts
	I0503 15:19:34.457216    9866 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "10.0.2.15	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0503 15:19:34.461224    9866 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0503 15:19:34.538913    9866 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0503 15:19:34.544249    9866 certs.go:68] Setting up /Users/jenkins/minikube-integration/18793-7269/.minikube/profiles/stopped-upgrade-139000 for IP: 10.0.2.15
	I0503 15:19:34.544257    9866 certs.go:194] generating shared ca certs ...
	I0503 15:19:34.544266    9866 certs.go:226] acquiring lock for ca certs: {Name:mkd5f7db20634f49dfd68d117c1845d0b32f87c3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0503 15:19:34.544423    9866 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/18793-7269/.minikube/ca.key
	I0503 15:19:34.544463    9866 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/18793-7269/.minikube/proxy-client-ca.key
	I0503 15:19:34.544468    9866 certs.go:256] generating profile certs ...
	I0503 15:19:34.544533    9866 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/18793-7269/.minikube/profiles/stopped-upgrade-139000/client.key
	I0503 15:19:34.544550    9866 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/18793-7269/.minikube/profiles/stopped-upgrade-139000/apiserver.key.608353ee
	I0503 15:19:34.544563    9866 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/18793-7269/.minikube/profiles/stopped-upgrade-139000/apiserver.crt.608353ee with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.0.2.15]
	I0503 15:19:34.620433    9866 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/18793-7269/.minikube/profiles/stopped-upgrade-139000/apiserver.crt.608353ee ...
	I0503 15:19:34.620446    9866 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18793-7269/.minikube/profiles/stopped-upgrade-139000/apiserver.crt.608353ee: {Name:mkfd69199119256217f07b88ee1c6751e2f6621c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0503 15:19:34.620788    9866 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/18793-7269/.minikube/profiles/stopped-upgrade-139000/apiserver.key.608353ee ...
	I0503 15:19:34.620798    9866 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18793-7269/.minikube/profiles/stopped-upgrade-139000/apiserver.key.608353ee: {Name:mkf29d39b02b6b149fcea2faecc622cbf616741c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0503 15:19:34.620932    9866 certs.go:381] copying /Users/jenkins/minikube-integration/18793-7269/.minikube/profiles/stopped-upgrade-139000/apiserver.crt.608353ee -> /Users/jenkins/minikube-integration/18793-7269/.minikube/profiles/stopped-upgrade-139000/apiserver.crt
	I0503 15:19:34.621045    9866 certs.go:385] copying /Users/jenkins/minikube-integration/18793-7269/.minikube/profiles/stopped-upgrade-139000/apiserver.key.608353ee -> /Users/jenkins/minikube-integration/18793-7269/.minikube/profiles/stopped-upgrade-139000/apiserver.key
	I0503 15:19:34.621172    9866 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/18793-7269/.minikube/profiles/stopped-upgrade-139000/proxy-client.key
	I0503 15:19:34.621289    9866 certs.go:484] found cert: /Users/jenkins/minikube-integration/18793-7269/.minikube/certs/7768.pem (1338 bytes)
	W0503 15:19:34.621310    9866 certs.go:480] ignoring /Users/jenkins/minikube-integration/18793-7269/.minikube/certs/7768_empty.pem, impossibly tiny 0 bytes
	I0503 15:19:34.621315    9866 certs.go:484] found cert: /Users/jenkins/minikube-integration/18793-7269/.minikube/certs/ca-key.pem (1675 bytes)
	I0503 15:19:34.621333    9866 certs.go:484] found cert: /Users/jenkins/minikube-integration/18793-7269/.minikube/certs/ca.pem (1078 bytes)
	I0503 15:19:34.621350    9866 certs.go:484] found cert: /Users/jenkins/minikube-integration/18793-7269/.minikube/certs/cert.pem (1123 bytes)
	I0503 15:19:34.621368    9866 certs.go:484] found cert: /Users/jenkins/minikube-integration/18793-7269/.minikube/certs/key.pem (1675 bytes)
	I0503 15:19:34.621407    9866 certs.go:484] found cert: /Users/jenkins/minikube-integration/18793-7269/.minikube/files/etc/ssl/certs/77682.pem (1708 bytes)
	I0503 15:19:34.621717    9866 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18793-7269/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0503 15:19:34.628919    9866 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18793-7269/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0503 15:19:34.635861    9866 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18793-7269/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0503 15:19:34.642807    9866 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18793-7269/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0503 15:19:34.650144    9866 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18793-7269/.minikube/profiles/stopped-upgrade-139000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0503 15:19:34.657092    9866 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18793-7269/.minikube/profiles/stopped-upgrade-139000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0503 15:19:34.664841    9866 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18793-7269/.minikube/profiles/stopped-upgrade-139000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0503 15:19:34.671755    9866 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18793-7269/.minikube/profiles/stopped-upgrade-139000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0503 15:19:34.678683    9866 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18793-7269/.minikube/files/etc/ssl/certs/77682.pem --> /usr/share/ca-certificates/77682.pem (1708 bytes)
	I0503 15:19:34.685717    9866 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18793-7269/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0503 15:19:34.693009    9866 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18793-7269/.minikube/certs/7768.pem --> /usr/share/ca-certificates/7768.pem (1338 bytes)
	I0503 15:19:34.699898    9866 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0503 15:19:34.704799    9866 ssh_runner.go:195] Run: openssl version
	I0503 15:19:34.706714    9866 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/77682.pem && ln -fs /usr/share/ca-certificates/77682.pem /etc/ssl/certs/77682.pem"
	I0503 15:19:34.710017    9866 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/77682.pem
	I0503 15:19:34.711555    9866 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 May  3 22:03 /usr/share/ca-certificates/77682.pem
	I0503 15:19:34.711576    9866 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/77682.pem
	I0503 15:19:34.713277    9866 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/77682.pem /etc/ssl/certs/3ec20f2e.0"
	I0503 15:19:34.716377    9866 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0503 15:19:34.719270    9866 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0503 15:19:34.720578    9866 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 May  3 22:15 /usr/share/ca-certificates/minikubeCA.pem
	I0503 15:19:34.720595    9866 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0503 15:19:34.722354    9866 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0503 15:19:34.725817    9866 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7768.pem && ln -fs /usr/share/ca-certificates/7768.pem /etc/ssl/certs/7768.pem"
	I0503 15:19:34.729237    9866 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7768.pem
	I0503 15:19:34.730733    9866 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 May  3 22:03 /usr/share/ca-certificates/7768.pem
	I0503 15:19:34.730751    9866 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7768.pem
	I0503 15:19:34.732565    9866 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/7768.pem /etc/ssl/certs/51391683.0"
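
Each CA above is copied under /usr/share/ca-certificates and then symlinked into /etc/ssl/certs under its OpenSSL subject hash, which is how OpenSSL resolves trust anchors at verification time. The hash-link idiom for one cert:

  hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
  sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${hash}.0"
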
	I0503 15:19:34.735466    9866 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0503 15:19:34.736762    9866 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0503 15:19:34.739297    9866 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0503 15:19:34.741112    9866 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0503 15:19:34.743266    9866 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0503 15:19:34.744982    9866 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0503 15:19:34.746667    9866 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
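
The `-checkend 86400` probes above ask whether each control-plane cert expires within the next 24 hours (openssl exits non-zero if so), presumably to decide whether the certs need regenerating. Spelled out for one cert:

  if ! openssl x509 -noout -checkend 86400 -in /var/lib/minikube/certs/apiserver.crt; then
    echo "apiserver.crt expires within 24h; regenerate"
  fi
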
	I0503 15:19:34.748470    9866 kubeadm.go:391] StartCluster: {Name:stopped-upgrade-139000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:51403 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-139000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0503 15:19:34.748538    9866 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0503 15:19:34.759073    9866 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0503 15:19:34.762391    9866 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0503 15:19:34.762398    9866 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0503 15:19:34.762401    9866 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0503 15:19:34.762421    9866 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0503 15:19:34.765694    9866 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0503 15:19:34.765978    9866 kubeconfig.go:47] verify endpoint returned: get endpoint: "stopped-upgrade-139000" does not appear in /Users/jenkins/minikube-integration/18793-7269/kubeconfig
	I0503 15:19:34.766078    9866 kubeconfig.go:62] /Users/jenkins/minikube-integration/18793-7269/kubeconfig needs updating (will repair): [kubeconfig missing "stopped-upgrade-139000" cluster setting kubeconfig missing "stopped-upgrade-139000" context setting]
	I0503 15:19:34.766297    9866 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18793-7269/kubeconfig: {Name:mke212dafcd3f736eb33656fd60033aeff2dfcbc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0503 15:19:34.766719    9866 kapi.go:59] client config for stopped-upgrade-139000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/18793-7269/.minikube/profiles/stopped-upgrade-139000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/18793-7269/.minikube/profiles/stopped-upgrade-139000/client.key", CAFile:"/Users/jenkins/minikube-integration/18793-7269/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x101f8fcb0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0503 15:19:34.767033    9866 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0503 15:19:34.770022    9866 kubeadm.go:634] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -11,7 +11,7 @@
	       - signing
	       - authentication
	 nodeRegistration:
	-  criSocket: /var/run/cri-dockerd.sock
	+  criSocket: unix:///var/run/cri-dockerd.sock
	   name: "stopped-upgrade-139000"
	   kubeletExtraArgs:
	     node-ip: 10.0.2.15
	@@ -49,7 +49,9 @@
	 authentication:
	   x509:
	     clientCAFile: /var/lib/minikube/certs/ca.crt
	-cgroupDriver: systemd
	+cgroupDriver: cgroupfs
	+hairpinMode: hairpin-veth
	+runtimeRequestTimeout: 15m
	 clusterDomain: "cluster.local"
	 # disable disk resource management by default
	 imageGCHighThresholdPercent: 100
	
	-- /stdout --
	I0503 15:19:34.770027    9866 kubeadm.go:1154] stopping kube-system containers ...
	I0503 15:19:34.770066    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0503 15:19:34.780892    9866 docker.go:483] Stopping containers: [ed9610f55b0b c5583124a53e 4475fda52f0c a482d8d0479c 5b7eb4ef241b c8917f86a920 85023bbf7f9e 273a3c9f75a6]
	I0503 15:19:34.780965    9866 ssh_runner.go:195] Run: docker stop ed9610f55b0b c5583124a53e 4475fda52f0c a482d8d0479c 5b7eb4ef241b c8917f86a920 85023bbf7f9e 273a3c9f75a6
	I0503 15:19:34.792107    9866 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0503 15:19:34.797595    9866 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0503 15:19:34.800634    9866 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0503 15:19:34.800646    9866 kubeadm.go:156] found existing configuration files:
	
	I0503 15:19:34.800673    9866 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51403 /etc/kubernetes/admin.conf
	I0503 15:19:34.803043    9866 kubeadm.go:162] "https://control-plane.minikube.internal:51403" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:51403 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0503 15:19:34.803061    9866 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0503 15:19:34.805790    9866 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51403 /etc/kubernetes/kubelet.conf
	I0503 15:19:34.808701    9866 kubeadm.go:162] "https://control-plane.minikube.internal:51403" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:51403 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0503 15:19:34.808719    9866 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0503 15:19:34.811112    9866 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51403 /etc/kubernetes/controller-manager.conf
	I0503 15:19:34.813625    9866 kubeadm.go:162] "https://control-plane.minikube.internal:51403" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:51403 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0503 15:19:34.813647    9866 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0503 15:19:34.816593    9866 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51403 /etc/kubernetes/scheduler.conf
	I0503 15:19:34.819037    9866 kubeadm.go:162] "https://control-plane.minikube.internal:51403" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:51403 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0503 15:19:34.819057    9866 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0503 15:19:34.821986    9866 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0503 15:19:34.825277    9866 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0503 15:19:34.849437    9866 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0503 15:19:35.712791    9866 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0503 15:19:35.848988    9866 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0503 15:19:35.869459    9866 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0503 15:19:35.891063    9866 api_server.go:52] waiting for apiserver process to appear ...
	I0503 15:19:35.891142    9866 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0503 15:19:36.391332    9866 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0503 15:19:36.893203    9866 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0503 15:19:36.897557    9866 api_server.go:72] duration metric: took 1.006519083s to wait for apiserver process to appear ...
	I0503 15:19:36.897566    9866 api_server.go:88] waiting for apiserver healthz status ...
	I0503 15:19:36.897574    9866 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0503 15:19:41.899626    9866 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0503 15:19:41.899670    9866 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0503 15:19:46.899981    9866 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0503 15:19:46.900066    9866 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0503 15:19:51.900867    9866 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0503 15:19:51.900957    9866 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0503 15:19:56.901828    9866 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0503 15:19:56.901884    9866 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0503 15:20:01.902817    9866 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0503 15:20:01.902842    9866 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0503 15:20:06.903890    9866 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0503 15:20:06.903915    9866 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0503 15:20:11.905244    9866 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0503 15:20:11.905296    9866 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0503 15:20:16.907171    9866 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0503 15:20:16.907260    9866 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0503 15:20:21.909613    9866 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0503 15:20:21.909655    9866 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0503 15:20:26.911824    9866 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0503 15:20:26.911878    9866 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0503 15:20:31.913645    9866 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0503 15:20:31.913687    9866 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0503 15:20:36.915859    9866 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0503 15:20:36.916153    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0503 15:20:36.945787    9866 logs.go:276] 2 containers: [9153de1fe283 5b7eb4ef241b]
	I0503 15:20:36.945918    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0503 15:20:36.963506    9866 logs.go:276] 2 containers: [40c7a6c8aada c5583124a53e]
	I0503 15:20:36.963596    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0503 15:20:36.978028    9866 logs.go:276] 1 containers: [a0cf5fe4185b]
	I0503 15:20:36.978100    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0503 15:20:36.989798    9866 logs.go:276] 2 containers: [b8238d29950d 4475fda52f0c]
	I0503 15:20:36.989871    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0503 15:20:37.000755    9866 logs.go:276] 1 containers: [acb5f92ab28d]
	I0503 15:20:37.000822    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0503 15:20:37.011863    9866 logs.go:276] 2 containers: [ecab85e51144 ed9610f55b0b]
	I0503 15:20:37.011932    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0503 15:20:37.022024    9866 logs.go:276] 0 containers: []
	W0503 15:20:37.022038    9866 logs.go:278] No container was found matching "kindnet"
	I0503 15:20:37.022096    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0503 15:20:37.033190    9866 logs.go:276] 2 containers: [537c7b40f8ef c67b61af436e]
	I0503 15:20:37.033208    9866 logs.go:123] Gathering logs for kubelet ...
	I0503 15:20:37.033213    9866 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0503 15:20:37.072763    9866 logs.go:123] Gathering logs for kube-apiserver [9153de1fe283] ...
	I0503 15:20:37.072774    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9153de1fe283"
	I0503 15:20:37.087552    9866 logs.go:123] Gathering logs for kube-apiserver [5b7eb4ef241b] ...
	I0503 15:20:37.087565    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b7eb4ef241b"
	I0503 15:20:37.137402    9866 logs.go:123] Gathering logs for coredns [a0cf5fe4185b] ...
	I0503 15:20:37.137414    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0cf5fe4185b"
	I0503 15:20:37.148854    9866 logs.go:123] Gathering logs for kube-scheduler [4475fda52f0c] ...
	I0503 15:20:37.148867    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4475fda52f0c"
	I0503 15:20:37.164730    9866 logs.go:123] Gathering logs for kube-proxy [acb5f92ab28d] ...
	I0503 15:20:37.164746    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 acb5f92ab28d"
	I0503 15:20:37.177784    9866 logs.go:123] Gathering logs for kube-controller-manager [ecab85e51144] ...
	I0503 15:20:37.177796    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ecab85e51144"
	I0503 15:20:37.197481    9866 logs.go:123] Gathering logs for storage-provisioner [537c7b40f8ef] ...
	I0503 15:20:37.197493    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 537c7b40f8ef"
	I0503 15:20:37.209774    9866 logs.go:123] Gathering logs for container status ...
	I0503 15:20:37.209786    9866 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0503 15:20:37.221156    9866 logs.go:123] Gathering logs for kube-scheduler [b8238d29950d] ...
	I0503 15:20:37.221168    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8238d29950d"
	I0503 15:20:37.233172    9866 logs.go:123] Gathering logs for storage-provisioner [c67b61af436e] ...
	I0503 15:20:37.233185    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c67b61af436e"
	I0503 15:20:37.244708    9866 logs.go:123] Gathering logs for describe nodes ...
	I0503 15:20:37.244719    9866 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0503 15:20:37.346416    9866 logs.go:123] Gathering logs for etcd [c5583124a53e] ...
	I0503 15:20:37.346427    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5583124a53e"
	I0503 15:20:37.362105    9866 logs.go:123] Gathering logs for kube-controller-manager [ed9610f55b0b] ...
	I0503 15:20:37.362116    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed9610f55b0b"
	I0503 15:20:37.379944    9866 logs.go:123] Gathering logs for dmesg ...
	I0503 15:20:37.379958    9866 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0503 15:20:37.384740    9866 logs.go:123] Gathering logs for etcd [40c7a6c8aada] ...
	I0503 15:20:37.384747    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 40c7a6c8aada"
	I0503 15:20:37.398883    9866 logs.go:123] Gathering logs for Docker ...
	I0503 15:20:37.398900    9866 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0503 15:20:39.926943    9866 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0503 15:20:44.927822    9866 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0503 15:20:44.928217    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0503 15:20:44.969304    9866 logs.go:276] 2 containers: [9153de1fe283 5b7eb4ef241b]
	I0503 15:20:44.969432    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0503 15:20:44.991236    9866 logs.go:276] 2 containers: [40c7a6c8aada c5583124a53e]
	I0503 15:20:44.991339    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0503 15:20:45.004367    9866 logs.go:276] 1 containers: [a0cf5fe4185b]
	I0503 15:20:45.004436    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0503 15:20:45.016354    9866 logs.go:276] 2 containers: [b8238d29950d 4475fda52f0c]
	I0503 15:20:45.016431    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0503 15:20:45.027787    9866 logs.go:276] 1 containers: [acb5f92ab28d]
	I0503 15:20:45.027855    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0503 15:20:45.039068    9866 logs.go:276] 2 containers: [ecab85e51144 ed9610f55b0b]
	I0503 15:20:45.039131    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0503 15:20:45.049532    9866 logs.go:276] 0 containers: []
	W0503 15:20:45.049545    9866 logs.go:278] No container was found matching "kindnet"
	I0503 15:20:45.049609    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0503 15:20:45.060892    9866 logs.go:276] 2 containers: [537c7b40f8ef c67b61af436e]
	I0503 15:20:45.060922    9866 logs.go:123] Gathering logs for describe nodes ...
	I0503 15:20:45.060928    9866 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0503 15:20:45.100418    9866 logs.go:123] Gathering logs for kube-apiserver [5b7eb4ef241b] ...
	I0503 15:20:45.100433    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b7eb4ef241b"
	I0503 15:20:45.138442    9866 logs.go:123] Gathering logs for kube-controller-manager [ed9610f55b0b] ...
	I0503 15:20:45.138453    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed9610f55b0b"
	I0503 15:20:45.151883    9866 logs.go:123] Gathering logs for storage-provisioner [537c7b40f8ef] ...
	I0503 15:20:45.151894    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 537c7b40f8ef"
	I0503 15:20:45.163012    9866 logs.go:123] Gathering logs for kube-apiserver [9153de1fe283] ...
	I0503 15:20:45.163023    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9153de1fe283"
	I0503 15:20:45.176544    9866 logs.go:123] Gathering logs for etcd [40c7a6c8aada] ...
	I0503 15:20:45.176556    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 40c7a6c8aada"
	I0503 15:20:45.194839    9866 logs.go:123] Gathering logs for etcd [c5583124a53e] ...
	I0503 15:20:45.194853    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5583124a53e"
	I0503 15:20:45.210192    9866 logs.go:123] Gathering logs for kube-scheduler [b8238d29950d] ...
	I0503 15:20:45.210206    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8238d29950d"
	I0503 15:20:45.222702    9866 logs.go:123] Gathering logs for Docker ...
	I0503 15:20:45.222712    9866 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0503 15:20:45.247546    9866 logs.go:123] Gathering logs for dmesg ...
	I0503 15:20:45.247554    9866 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0503 15:20:45.251664    9866 logs.go:123] Gathering logs for kube-scheduler [4475fda52f0c] ...
	I0503 15:20:45.251670    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4475fda52f0c"
	I0503 15:20:45.265879    9866 logs.go:123] Gathering logs for storage-provisioner [c67b61af436e] ...
	I0503 15:20:45.265889    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c67b61af436e"
	I0503 15:20:45.277804    9866 logs.go:123] Gathering logs for kubelet ...
	I0503 15:20:45.277814    9866 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0503 15:20:45.315732    9866 logs.go:123] Gathering logs for coredns [a0cf5fe4185b] ...
	I0503 15:20:45.315740    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0cf5fe4185b"
	I0503 15:20:45.327081    9866 logs.go:123] Gathering logs for kube-proxy [acb5f92ab28d] ...
	I0503 15:20:45.327093    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 acb5f92ab28d"
	I0503 15:20:45.340021    9866 logs.go:123] Gathering logs for kube-controller-manager [ecab85e51144] ...
	I0503 15:20:45.340032    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ecab85e51144"
	I0503 15:20:45.357116    9866 logs.go:123] Gathering logs for container status ...
	I0503 15:20:45.357128    9866 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0503 15:20:47.877879    9866 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0503 15:20:52.880226    9866 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0503 15:20:52.880478    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0503 15:20:52.902383    9866 logs.go:276] 2 containers: [9153de1fe283 5b7eb4ef241b]
	I0503 15:20:52.902480    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0503 15:20:52.917720    9866 logs.go:276] 2 containers: [40c7a6c8aada c5583124a53e]
	I0503 15:20:52.917801    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0503 15:20:52.930112    9866 logs.go:276] 1 containers: [a0cf5fe4185b]
	I0503 15:20:52.930191    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0503 15:20:52.943724    9866 logs.go:276] 2 containers: [b8238d29950d 4475fda52f0c]
	I0503 15:20:52.943782    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0503 15:20:52.953807    9866 logs.go:276] 1 containers: [acb5f92ab28d]
	I0503 15:20:52.953875    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0503 15:20:52.964182    9866 logs.go:276] 2 containers: [ecab85e51144 ed9610f55b0b]
	I0503 15:20:52.964249    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0503 15:20:52.974832    9866 logs.go:276] 0 containers: []
	W0503 15:20:52.974843    9866 logs.go:278] No container was found matching "kindnet"
	I0503 15:20:52.974897    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0503 15:20:52.985421    9866 logs.go:276] 2 containers: [537c7b40f8ef c67b61af436e]
	I0503 15:20:52.985442    9866 logs.go:123] Gathering logs for dmesg ...
	I0503 15:20:52.985448    9866 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0503 15:20:52.990066    9866 logs.go:123] Gathering logs for kube-apiserver [5b7eb4ef241b] ...
	I0503 15:20:52.990072    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b7eb4ef241b"
	I0503 15:20:53.026975    9866 logs.go:123] Gathering logs for kube-scheduler [4475fda52f0c] ...
	I0503 15:20:53.026993    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4475fda52f0c"
	I0503 15:20:53.041762    9866 logs.go:123] Gathering logs for kube-controller-manager [ed9610f55b0b] ...
	I0503 15:20:53.041776    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed9610f55b0b"
	I0503 15:20:53.055186    9866 logs.go:123] Gathering logs for storage-provisioner [c67b61af436e] ...
	I0503 15:20:53.055204    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c67b61af436e"
	I0503 15:20:53.066630    9866 logs.go:123] Gathering logs for container status ...
	I0503 15:20:53.066642    9866 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0503 15:20:53.078262    9866 logs.go:123] Gathering logs for kubelet ...
	I0503 15:20:53.078276    9866 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0503 15:20:53.116204    9866 logs.go:123] Gathering logs for etcd [40c7a6c8aada] ...
	I0503 15:20:53.116212    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 40c7a6c8aada"
	I0503 15:20:53.129773    9866 logs.go:123] Gathering logs for etcd [c5583124a53e] ...
	I0503 15:20:53.129783    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5583124a53e"
	I0503 15:20:53.143808    9866 logs.go:123] Gathering logs for coredns [a0cf5fe4185b] ...
	I0503 15:20:53.143817    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0cf5fe4185b"
	I0503 15:20:53.155067    9866 logs.go:123] Gathering logs for kube-controller-manager [ecab85e51144] ...
	I0503 15:20:53.155077    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ecab85e51144"
	I0503 15:20:53.171498    9866 logs.go:123] Gathering logs for storage-provisioner [537c7b40f8ef] ...
	I0503 15:20:53.171509    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 537c7b40f8ef"
	I0503 15:20:53.182805    9866 logs.go:123] Gathering logs for kube-scheduler [b8238d29950d] ...
	I0503 15:20:53.182819    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8238d29950d"
	I0503 15:20:53.194849    9866 logs.go:123] Gathering logs for kube-proxy [acb5f92ab28d] ...
	I0503 15:20:53.194865    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 acb5f92ab28d"
	I0503 15:20:53.206650    9866 logs.go:123] Gathering logs for Docker ...
	I0503 15:20:53.206660    9866 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0503 15:20:53.231367    9866 logs.go:123] Gathering logs for describe nodes ...
	I0503 15:20:53.231374    9866 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0503 15:20:53.268212    9866 logs.go:123] Gathering logs for kube-apiserver [9153de1fe283] ...
	I0503 15:20:53.268223    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9153de1fe283"
	I0503 15:20:55.784644    9866 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0503 15:21:00.786900    9866 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0503 15:21:00.787091    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0503 15:21:00.813435    9866 logs.go:276] 2 containers: [9153de1fe283 5b7eb4ef241b]
	I0503 15:21:00.813543    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0503 15:21:00.827289    9866 logs.go:276] 2 containers: [40c7a6c8aada c5583124a53e]
	I0503 15:21:00.827365    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0503 15:21:00.842391    9866 logs.go:276] 1 containers: [a0cf5fe4185b]
	I0503 15:21:00.842467    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0503 15:21:00.852676    9866 logs.go:276] 2 containers: [b8238d29950d 4475fda52f0c]
	I0503 15:21:00.852762    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0503 15:21:00.862993    9866 logs.go:276] 1 containers: [acb5f92ab28d]
	I0503 15:21:00.863060    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0503 15:21:00.879981    9866 logs.go:276] 2 containers: [ecab85e51144 ed9610f55b0b]
	I0503 15:21:00.880053    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0503 15:21:00.890523    9866 logs.go:276] 0 containers: []
	W0503 15:21:00.890535    9866 logs.go:278] No container was found matching "kindnet"
	I0503 15:21:00.890604    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0503 15:21:00.901564    9866 logs.go:276] 2 containers: [537c7b40f8ef c67b61af436e]
	I0503 15:21:00.901583    9866 logs.go:123] Gathering logs for kubelet ...
	I0503 15:21:00.901589    9866 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0503 15:21:00.939166    9866 logs.go:123] Gathering logs for kube-apiserver [5b7eb4ef241b] ...
	I0503 15:21:00.939179    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b7eb4ef241b"
	I0503 15:21:00.977050    9866 logs.go:123] Gathering logs for etcd [c5583124a53e] ...
	I0503 15:21:00.977065    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5583124a53e"
	I0503 15:21:00.991933    9866 logs.go:123] Gathering logs for dmesg ...
	I0503 15:21:00.991944    9866 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0503 15:21:00.996601    9866 logs.go:123] Gathering logs for kube-apiserver [9153de1fe283] ...
	I0503 15:21:00.996606    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9153de1fe283"
	I0503 15:21:01.010347    9866 logs.go:123] Gathering logs for coredns [a0cf5fe4185b] ...
	I0503 15:21:01.010358    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0cf5fe4185b"
	I0503 15:21:01.021509    9866 logs.go:123] Gathering logs for kube-scheduler [b8238d29950d] ...
	I0503 15:21:01.021524    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8238d29950d"
	I0503 15:21:01.034781    9866 logs.go:123] Gathering logs for kube-scheduler [4475fda52f0c] ...
	I0503 15:21:01.034793    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4475fda52f0c"
	I0503 15:21:01.049765    9866 logs.go:123] Gathering logs for kube-controller-manager [ecab85e51144] ...
	I0503 15:21:01.049780    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ecab85e51144"
	I0503 15:21:01.069839    9866 logs.go:123] Gathering logs for etcd [40c7a6c8aada] ...
	I0503 15:21:01.069850    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 40c7a6c8aada"
	I0503 15:21:01.083629    9866 logs.go:123] Gathering logs for kube-proxy [acb5f92ab28d] ...
	I0503 15:21:01.083640    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 acb5f92ab28d"
	I0503 15:21:01.095936    9866 logs.go:123] Gathering logs for kube-controller-manager [ed9610f55b0b] ...
	I0503 15:21:01.095947    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed9610f55b0b"
	I0503 15:21:01.110185    9866 logs.go:123] Gathering logs for describe nodes ...
	I0503 15:21:01.110196    9866 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0503 15:21:01.144552    9866 logs.go:123] Gathering logs for storage-provisioner [537c7b40f8ef] ...
	I0503 15:21:01.144564    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 537c7b40f8ef"
	I0503 15:21:01.159843    9866 logs.go:123] Gathering logs for storage-provisioner [c67b61af436e] ...
	I0503 15:21:01.159857    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c67b61af436e"
	I0503 15:21:01.170880    9866 logs.go:123] Gathering logs for Docker ...
	I0503 15:21:01.170892    9866 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0503 15:21:01.194744    9866 logs.go:123] Gathering logs for container status ...
	I0503 15:21:01.198297    9866 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0503 15:21:03.712890    9866 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0503 15:21:08.715423    9866 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0503 15:21:08.715630    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0503 15:21:08.730169    9866 logs.go:276] 2 containers: [9153de1fe283 5b7eb4ef241b]
	I0503 15:21:08.730263    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0503 15:21:08.742376    9866 logs.go:276] 2 containers: [40c7a6c8aada c5583124a53e]
	I0503 15:21:08.742454    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0503 15:21:08.752904    9866 logs.go:276] 1 containers: [a0cf5fe4185b]
	I0503 15:21:08.752968    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0503 15:21:08.763774    9866 logs.go:276] 2 containers: [b8238d29950d 4475fda52f0c]
	I0503 15:21:08.763854    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0503 15:21:08.778604    9866 logs.go:276] 1 containers: [acb5f92ab28d]
	I0503 15:21:08.778671    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0503 15:21:08.789219    9866 logs.go:276] 2 containers: [ecab85e51144 ed9610f55b0b]
	I0503 15:21:08.789279    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0503 15:21:08.799522    9866 logs.go:276] 0 containers: []
	W0503 15:21:08.799533    9866 logs.go:278] No container was found matching "kindnet"
	I0503 15:21:08.799590    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0503 15:21:08.810027    9866 logs.go:276] 2 containers: [537c7b40f8ef c67b61af436e]
	I0503 15:21:08.810047    9866 logs.go:123] Gathering logs for kube-controller-manager [ed9610f55b0b] ...
	I0503 15:21:08.810052    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed9610f55b0b"
	I0503 15:21:08.827392    9866 logs.go:123] Gathering logs for kubelet ...
	I0503 15:21:08.827402    9866 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0503 15:21:08.864942    9866 logs.go:123] Gathering logs for dmesg ...
	I0503 15:21:08.864952    9866 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0503 15:21:08.869371    9866 logs.go:123] Gathering logs for describe nodes ...
	I0503 15:21:08.869379    9866 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0503 15:21:08.903208    9866 logs.go:123] Gathering logs for kube-apiserver [9153de1fe283] ...
	I0503 15:21:08.903220    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9153de1fe283"
	I0503 15:21:08.917376    9866 logs.go:123] Gathering logs for kube-scheduler [b8238d29950d] ...
	I0503 15:21:08.917388    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8238d29950d"
	I0503 15:21:08.929337    9866 logs.go:123] Gathering logs for kube-proxy [acb5f92ab28d] ...
	I0503 15:21:08.929352    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 acb5f92ab28d"
	I0503 15:21:08.941175    9866 logs.go:123] Gathering logs for kube-controller-manager [ecab85e51144] ...
	I0503 15:21:08.941187    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ecab85e51144"
	I0503 15:21:08.958667    9866 logs.go:123] Gathering logs for storage-provisioner [c67b61af436e] ...
	I0503 15:21:08.958677    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c67b61af436e"
	I0503 15:21:08.969607    9866 logs.go:123] Gathering logs for container status ...
	I0503 15:21:08.969619    9866 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0503 15:21:08.981409    9866 logs.go:123] Gathering logs for kube-apiserver [5b7eb4ef241b] ...
	I0503 15:21:08.981421    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b7eb4ef241b"
	I0503 15:21:09.019110    9866 logs.go:123] Gathering logs for Docker ...
	I0503 15:21:09.019124    9866 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0503 15:21:09.043312    9866 logs.go:123] Gathering logs for kube-scheduler [4475fda52f0c] ...
	I0503 15:21:09.043325    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4475fda52f0c"
	I0503 15:21:09.057921    9866 logs.go:123] Gathering logs for etcd [40c7a6c8aada] ...
	I0503 15:21:09.057935    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 40c7a6c8aada"
	I0503 15:21:09.071581    9866 logs.go:123] Gathering logs for etcd [c5583124a53e] ...
	I0503 15:21:09.071592    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5583124a53e"
	I0503 15:21:09.086402    9866 logs.go:123] Gathering logs for coredns [a0cf5fe4185b] ...
	I0503 15:21:09.086412    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0cf5fe4185b"
	I0503 15:21:09.097456    9866 logs.go:123] Gathering logs for storage-provisioner [537c7b40f8ef] ...
	I0503 15:21:09.097467    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 537c7b40f8ef"
	I0503 15:21:11.610685    9866 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0503 15:21:16.612801    9866 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0503 15:21:16.612969    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0503 15:21:16.629037    9866 logs.go:276] 2 containers: [9153de1fe283 5b7eb4ef241b]
	I0503 15:21:16.629125    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0503 15:21:16.641565    9866 logs.go:276] 2 containers: [40c7a6c8aada c5583124a53e]
	I0503 15:21:16.641635    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0503 15:21:16.652113    9866 logs.go:276] 1 containers: [a0cf5fe4185b]
	I0503 15:21:16.652186    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0503 15:21:16.662744    9866 logs.go:276] 2 containers: [b8238d29950d 4475fda52f0c]
	I0503 15:21:16.662818    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0503 15:21:16.673008    9866 logs.go:276] 1 containers: [acb5f92ab28d]
	I0503 15:21:16.673080    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0503 15:21:16.683407    9866 logs.go:276] 2 containers: [ecab85e51144 ed9610f55b0b]
	I0503 15:21:16.683479    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0503 15:21:16.694088    9866 logs.go:276] 0 containers: []
	W0503 15:21:16.694105    9866 logs.go:278] No container was found matching "kindnet"
	I0503 15:21:16.694166    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0503 15:21:16.704304    9866 logs.go:276] 2 containers: [537c7b40f8ef c67b61af436e]
	I0503 15:21:16.704322    9866 logs.go:123] Gathering logs for etcd [c5583124a53e] ...
	I0503 15:21:16.704327    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5583124a53e"
	I0503 15:21:16.718124    9866 logs.go:123] Gathering logs for coredns [a0cf5fe4185b] ...
	I0503 15:21:16.718135    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0cf5fe4185b"
	I0503 15:21:16.729845    9866 logs.go:123] Gathering logs for describe nodes ...
	I0503 15:21:16.729860    9866 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0503 15:21:16.767661    9866 logs.go:123] Gathering logs for kube-proxy [acb5f92ab28d] ...
	I0503 15:21:16.767673    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 acb5f92ab28d"
	I0503 15:21:16.779688    9866 logs.go:123] Gathering logs for etcd [40c7a6c8aada] ...
	I0503 15:21:16.779701    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 40c7a6c8aada"
	I0503 15:21:16.793119    9866 logs.go:123] Gathering logs for kube-scheduler [b8238d29950d] ...
	I0503 15:21:16.793129    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8238d29950d"
	I0503 15:21:16.804816    9866 logs.go:123] Gathering logs for kube-scheduler [4475fda52f0c] ...
	I0503 15:21:16.804828    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4475fda52f0c"
	I0503 15:21:16.819614    9866 logs.go:123] Gathering logs for container status ...
	I0503 15:21:16.819624    9866 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0503 15:21:16.833346    9866 logs.go:123] Gathering logs for dmesg ...
	I0503 15:21:16.833356    9866 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0503 15:21:16.837891    9866 logs.go:123] Gathering logs for kube-apiserver [9153de1fe283] ...
	I0503 15:21:16.837897    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9153de1fe283"
	I0503 15:21:16.851935    9866 logs.go:123] Gathering logs for kube-apiserver [5b7eb4ef241b] ...
	I0503 15:21:16.851945    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b7eb4ef241b"
	I0503 15:21:16.889289    9866 logs.go:123] Gathering logs for kube-controller-manager [ecab85e51144] ...
	I0503 15:21:16.889305    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ecab85e51144"
	I0503 15:21:16.907209    9866 logs.go:123] Gathering logs for kube-controller-manager [ed9610f55b0b] ...
	I0503 15:21:16.907220    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed9610f55b0b"
	I0503 15:21:16.920262    9866 logs.go:123] Gathering logs for storage-provisioner [537c7b40f8ef] ...
	I0503 15:21:16.920272    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 537c7b40f8ef"
	I0503 15:21:16.931646    9866 logs.go:123] Gathering logs for storage-provisioner [c67b61af436e] ...
	I0503 15:21:16.931659    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c67b61af436e"
	I0503 15:21:16.946925    9866 logs.go:123] Gathering logs for Docker ...
	I0503 15:21:16.946936    9866 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0503 15:21:16.970827    9866 logs.go:123] Gathering logs for kubelet ...
	I0503 15:21:16.970836    9866 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0503 15:21:19.509139    9866 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0503 15:21:24.511251    9866 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0503 15:21:24.511480    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0503 15:21:24.540953    9866 logs.go:276] 2 containers: [9153de1fe283 5b7eb4ef241b]
	I0503 15:21:24.541053    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0503 15:21:24.558023    9866 logs.go:276] 2 containers: [40c7a6c8aada c5583124a53e]
	I0503 15:21:24.558099    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0503 15:21:24.571024    9866 logs.go:276] 1 containers: [a0cf5fe4185b]
	I0503 15:21:24.571096    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0503 15:21:24.581993    9866 logs.go:276] 2 containers: [b8238d29950d 4475fda52f0c]
	I0503 15:21:24.582055    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0503 15:21:24.595718    9866 logs.go:276] 1 containers: [acb5f92ab28d]
	I0503 15:21:24.595786    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0503 15:21:24.606109    9866 logs.go:276] 2 containers: [ecab85e51144 ed9610f55b0b]
	I0503 15:21:24.606169    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0503 15:21:24.627725    9866 logs.go:276] 0 containers: []
	W0503 15:21:24.627739    9866 logs.go:278] No container was found matching "kindnet"
	I0503 15:21:24.627798    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0503 15:21:24.639488    9866 logs.go:276] 2 containers: [537c7b40f8ef c67b61af436e]
	I0503 15:21:24.639506    9866 logs.go:123] Gathering logs for describe nodes ...
	I0503 15:21:24.639511    9866 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0503 15:21:24.677755    9866 logs.go:123] Gathering logs for kube-apiserver [5b7eb4ef241b] ...
	I0503 15:21:24.677767    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b7eb4ef241b"
	I0503 15:21:24.715931    9866 logs.go:123] Gathering logs for kube-scheduler [4475fda52f0c] ...
	I0503 15:21:24.715944    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4475fda52f0c"
	I0503 15:21:24.730944    9866 logs.go:123] Gathering logs for kube-controller-manager [ecab85e51144] ...
	I0503 15:21:24.730956    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ecab85e51144"
	I0503 15:21:24.747956    9866 logs.go:123] Gathering logs for container status ...
	I0503 15:21:24.747966    9866 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0503 15:21:24.760888    9866 logs.go:123] Gathering logs for kubelet ...
	I0503 15:21:24.760901    9866 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0503 15:21:24.799454    9866 logs.go:123] Gathering logs for dmesg ...
	I0503 15:21:24.799463    9866 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0503 15:21:24.804121    9866 logs.go:123] Gathering logs for kube-scheduler [b8238d29950d] ...
	I0503 15:21:24.804129    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8238d29950d"
	I0503 15:21:24.815636    9866 logs.go:123] Gathering logs for storage-provisioner [537c7b40f8ef] ...
	I0503 15:21:24.815647    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 537c7b40f8ef"
	I0503 15:21:24.827259    9866 logs.go:123] Gathering logs for etcd [40c7a6c8aada] ...
	I0503 15:21:24.827269    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 40c7a6c8aada"
	I0503 15:21:24.841445    9866 logs.go:123] Gathering logs for etcd [c5583124a53e] ...
	I0503 15:21:24.841456    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5583124a53e"
	I0503 15:21:24.856011    9866 logs.go:123] Gathering logs for coredns [a0cf5fe4185b] ...
	I0503 15:21:24.856021    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0cf5fe4185b"
	I0503 15:21:24.866733    9866 logs.go:123] Gathering logs for kube-proxy [acb5f92ab28d] ...
	I0503 15:21:24.866745    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 acb5f92ab28d"
	I0503 15:21:24.878390    9866 logs.go:123] Gathering logs for kube-controller-manager [ed9610f55b0b] ...
	I0503 15:21:24.878400    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed9610f55b0b"
	I0503 15:21:24.892161    9866 logs.go:123] Gathering logs for kube-apiserver [9153de1fe283] ...
	I0503 15:21:24.892171    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9153de1fe283"
	I0503 15:21:24.906324    9866 logs.go:123] Gathering logs for storage-provisioner [c67b61af436e] ...
	I0503 15:21:24.906334    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c67b61af436e"
	I0503 15:21:24.917680    9866 logs.go:123] Gathering logs for Docker ...
	I0503 15:21:24.917695    9866 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0503 15:21:27.442710    9866 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0503 15:21:32.445030    9866 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0503 15:21:32.445407    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0503 15:21:32.480967    9866 logs.go:276] 2 containers: [9153de1fe283 5b7eb4ef241b]
	I0503 15:21:32.481100    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0503 15:21:32.501579    9866 logs.go:276] 2 containers: [40c7a6c8aada c5583124a53e]
	I0503 15:21:32.501664    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0503 15:21:32.516699    9866 logs.go:276] 1 containers: [a0cf5fe4185b]
	I0503 15:21:32.516781    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0503 15:21:32.529074    9866 logs.go:276] 2 containers: [b8238d29950d 4475fda52f0c]
	I0503 15:21:32.529146    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0503 15:21:32.540956    9866 logs.go:276] 1 containers: [acb5f92ab28d]
	I0503 15:21:32.541022    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0503 15:21:32.556338    9866 logs.go:276] 2 containers: [ecab85e51144 ed9610f55b0b]
	I0503 15:21:32.556401    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0503 15:21:32.567687    9866 logs.go:276] 0 containers: []
	W0503 15:21:32.567700    9866 logs.go:278] No container was found matching "kindnet"
	I0503 15:21:32.567756    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0503 15:21:32.578561    9866 logs.go:276] 2 containers: [537c7b40f8ef c67b61af436e]
	I0503 15:21:32.578578    9866 logs.go:123] Gathering logs for coredns [a0cf5fe4185b] ...
	I0503 15:21:32.578584    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0cf5fe4185b"
	I0503 15:21:32.590475    9866 logs.go:123] Gathering logs for kube-scheduler [b8238d29950d] ...
	I0503 15:21:32.590488    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8238d29950d"
	I0503 15:21:32.604942    9866 logs.go:123] Gathering logs for storage-provisioner [537c7b40f8ef] ...
	I0503 15:21:32.604954    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 537c7b40f8ef"
	I0503 15:21:32.616920    9866 logs.go:123] Gathering logs for dmesg ...
	I0503 15:21:32.616931    9866 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0503 15:21:32.621097    9866 logs.go:123] Gathering logs for kube-apiserver [5b7eb4ef241b] ...
	I0503 15:21:32.621106    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b7eb4ef241b"
	I0503 15:21:32.658245    9866 logs.go:123] Gathering logs for kube-proxy [acb5f92ab28d] ...
	I0503 15:21:32.658257    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 acb5f92ab28d"
	I0503 15:21:32.670756    9866 logs.go:123] Gathering logs for kube-controller-manager [ecab85e51144] ...
	I0503 15:21:32.670769    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ecab85e51144"
	I0503 15:21:32.688216    9866 logs.go:123] Gathering logs for storage-provisioner [c67b61af436e] ...
	I0503 15:21:32.688226    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c67b61af436e"
	I0503 15:21:32.700222    9866 logs.go:123] Gathering logs for Docker ...
	I0503 15:21:32.700234    9866 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0503 15:21:32.725612    9866 logs.go:123] Gathering logs for describe nodes ...
	I0503 15:21:32.725621    9866 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0503 15:21:32.762569    9866 logs.go:123] Gathering logs for kube-apiserver [9153de1fe283] ...
	I0503 15:21:32.762580    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9153de1fe283"
	I0503 15:21:32.777245    9866 logs.go:123] Gathering logs for container status ...
	I0503 15:21:32.777258    9866 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0503 15:21:32.789150    9866 logs.go:123] Gathering logs for kubelet ...
	I0503 15:21:32.789163    9866 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0503 15:21:32.827724    9866 logs.go:123] Gathering logs for kube-controller-manager [ed9610f55b0b] ...
	I0503 15:21:32.827736    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed9610f55b0b"
	I0503 15:21:32.845083    9866 logs.go:123] Gathering logs for kube-scheduler [4475fda52f0c] ...
	I0503 15:21:32.845095    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4475fda52f0c"
	I0503 15:21:32.859251    9866 logs.go:123] Gathering logs for etcd [40c7a6c8aada] ...
	I0503 15:21:32.859262    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 40c7a6c8aada"
	I0503 15:21:32.873887    9866 logs.go:123] Gathering logs for etcd [c5583124a53e] ...
	I0503 15:21:32.873900    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5583124a53e"
	I0503 15:21:35.390654    9866 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0503 15:21:40.392821    9866 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0503 15:21:40.393198    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0503 15:21:40.422272    9866 logs.go:276] 2 containers: [9153de1fe283 5b7eb4ef241b]
	I0503 15:21:40.422396    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0503 15:21:40.444315    9866 logs.go:276] 2 containers: [40c7a6c8aada c5583124a53e]
	I0503 15:21:40.444406    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0503 15:21:40.457580    9866 logs.go:276] 1 containers: [a0cf5fe4185b]
	I0503 15:21:40.457648    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0503 15:21:40.469505    9866 logs.go:276] 2 containers: [b8238d29950d 4475fda52f0c]
	I0503 15:21:40.469568    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0503 15:21:40.479962    9866 logs.go:276] 1 containers: [acb5f92ab28d]
	I0503 15:21:40.480029    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0503 15:21:40.490601    9866 logs.go:276] 2 containers: [ecab85e51144 ed9610f55b0b]
	I0503 15:21:40.490666    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0503 15:21:40.500478    9866 logs.go:276] 0 containers: []
	W0503 15:21:40.500490    9866 logs.go:278] No container was found matching "kindnet"
	I0503 15:21:40.500549    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0503 15:21:40.510665    9866 logs.go:276] 2 containers: [537c7b40f8ef c67b61af436e]
	I0503 15:21:40.510682    9866 logs.go:123] Gathering logs for kube-controller-manager [ecab85e51144] ...
	I0503 15:21:40.510688    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ecab85e51144"
	I0503 15:21:40.528051    9866 logs.go:123] Gathering logs for etcd [c5583124a53e] ...
	I0503 15:21:40.528064    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5583124a53e"
	I0503 15:21:40.547841    9866 logs.go:123] Gathering logs for container status ...
	I0503 15:21:40.547853    9866 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0503 15:21:40.559465    9866 logs.go:123] Gathering logs for kubelet ...
	I0503 15:21:40.559476    9866 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0503 15:21:40.598562    9866 logs.go:123] Gathering logs for dmesg ...
	I0503 15:21:40.598573    9866 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0503 15:21:40.602705    9866 logs.go:123] Gathering logs for etcd [40c7a6c8aada] ...
	I0503 15:21:40.602710    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 40c7a6c8aada"
	I0503 15:21:40.616490    9866 logs.go:123] Gathering logs for kube-scheduler [4475fda52f0c] ...
	I0503 15:21:40.616502    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4475fda52f0c"
	I0503 15:21:40.631111    9866 logs.go:123] Gathering logs for kube-proxy [acb5f92ab28d] ...
	I0503 15:21:40.631122    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 acb5f92ab28d"
	I0503 15:21:40.643749    9866 logs.go:123] Gathering logs for storage-provisioner [c67b61af436e] ...
	I0503 15:21:40.643760    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c67b61af436e"
	I0503 15:21:40.680037    9866 logs.go:123] Gathering logs for Docker ...
	I0503 15:21:40.680052    9866 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0503 15:21:40.704617    9866 logs.go:123] Gathering logs for kube-apiserver [9153de1fe283] ...
	I0503 15:21:40.704625    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9153de1fe283"
	I0503 15:21:40.719700    9866 logs.go:123] Gathering logs for kube-apiserver [5b7eb4ef241b] ...
	I0503 15:21:40.719712    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b7eb4ef241b"
	I0503 15:21:40.758152    9866 logs.go:123] Gathering logs for kube-scheduler [b8238d29950d] ...
	I0503 15:21:40.758165    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8238d29950d"
	I0503 15:21:40.770432    9866 logs.go:123] Gathering logs for storage-provisioner [537c7b40f8ef] ...
	I0503 15:21:40.770442    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 537c7b40f8ef"
	I0503 15:21:40.781472    9866 logs.go:123] Gathering logs for describe nodes ...
	I0503 15:21:40.781483    9866 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0503 15:21:40.816830    9866 logs.go:123] Gathering logs for coredns [a0cf5fe4185b] ...
	I0503 15:21:40.816842    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0cf5fe4185b"
	I0503 15:21:40.828678    9866 logs.go:123] Gathering logs for kube-controller-manager [ed9610f55b0b] ...
	I0503 15:21:40.828692    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed9610f55b0b"
	I0503 15:21:43.344354    9866 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0503 15:21:48.346542    9866 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0503 15:21:48.346726    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0503 15:21:48.362314    9866 logs.go:276] 2 containers: [9153de1fe283 5b7eb4ef241b]
	I0503 15:21:48.362397    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0503 15:21:48.374547    9866 logs.go:276] 2 containers: [40c7a6c8aada c5583124a53e]
	I0503 15:21:48.374617    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0503 15:21:48.385263    9866 logs.go:276] 1 containers: [a0cf5fe4185b]
	I0503 15:21:48.385328    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0503 15:21:48.398927    9866 logs.go:276] 2 containers: [b8238d29950d 4475fda52f0c]
	I0503 15:21:48.398997    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0503 15:21:48.409178    9866 logs.go:276] 1 containers: [acb5f92ab28d]
	I0503 15:21:48.409244    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0503 15:21:48.419952    9866 logs.go:276] 2 containers: [ecab85e51144 ed9610f55b0b]
	I0503 15:21:48.420017    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0503 15:21:48.430838    9866 logs.go:276] 0 containers: []
	W0503 15:21:48.430850    9866 logs.go:278] No container was found matching "kindnet"
	I0503 15:21:48.430909    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0503 15:21:48.441369    9866 logs.go:276] 2 containers: [537c7b40f8ef c67b61af436e]
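Each retry cycle begins by enumerating the control-plane containers with "docker ps -a --filter=name=k8s_<component> --format={{.ID}}"; a component reporting two IDs has a current container plus an exited predecessor from a restart, which is why both kube-apiserver IDs get their logs tailed. A self-contained sketch of that discovery step, assuming local docker access; the filter and format strings are the ones in the log, while the helper name is made up:

    // Hedged sketch, not minikube's code: reproduce the container discovery
    // step from the log by shelling out to docker ps with a name filter.
    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // containerIDs lists all containers (running or exited) whose name
    // matches k8s_<component>, mirroring the filter used in the log.
    func containerIDs(component string) ([]string, error) {
        out, err := exec.Command("docker", "ps", "-a",
            "--filter", "name=k8s_"+component,
            "--format", "{{.ID}}").Output()
        if err != nil {
            return nil, err
        }
        return strings.Fields(string(out)), nil
    }

    func main() {
        for _, c := range []string{"kube-apiserver", "etcd", "coredns",
            "kube-scheduler", "kube-proxy", "kube-controller-manager",
            "kindnet", "storage-provisioner"} {
            ids, err := containerIDs(c)
            if err != nil {
                fmt.Println(c, "error:", err)
                continue
            }
            // two IDs => the component has a restarted predecessor
            fmt.Printf("%d containers: %v\n", len(ids), ids)
        }
    }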
	I0503 15:21:48.441388    9866 logs.go:123] Gathering logs for dmesg ...
	I0503 15:21:48.441393    9866 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0503 15:21:48.445831    9866 logs.go:123] Gathering logs for kube-apiserver [9153de1fe283] ...
	I0503 15:21:48.445836    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9153de1fe283"
	I0503 15:21:48.460487    9866 logs.go:123] Gathering logs for kube-proxy [acb5f92ab28d] ...
	I0503 15:21:48.460496    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 acb5f92ab28d"
	I0503 15:21:48.473096    9866 logs.go:123] Gathering logs for kube-controller-manager [ed9610f55b0b] ...
	I0503 15:21:48.473108    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed9610f55b0b"
	I0503 15:21:48.489505    9866 logs.go:123] Gathering logs for storage-provisioner [c67b61af436e] ...
	I0503 15:21:48.489517    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c67b61af436e"
	I0503 15:21:48.500706    9866 logs.go:123] Gathering logs for kubelet ...
	I0503 15:21:48.500717    9866 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0503 15:21:48.537230    9866 logs.go:123] Gathering logs for etcd [40c7a6c8aada] ...
	I0503 15:21:48.537239    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 40c7a6c8aada"
	I0503 15:21:48.554249    9866 logs.go:123] Gathering logs for etcd [c5583124a53e] ...
	I0503 15:21:48.554258    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5583124a53e"
	I0503 15:21:48.568400    9866 logs.go:123] Gathering logs for coredns [a0cf5fe4185b] ...
	I0503 15:21:48.568414    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0cf5fe4185b"
	I0503 15:21:48.580145    9866 logs.go:123] Gathering logs for kube-scheduler [4475fda52f0c] ...
	I0503 15:21:48.580157    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4475fda52f0c"
	I0503 15:21:48.596434    9866 logs.go:123] Gathering logs for storage-provisioner [537c7b40f8ef] ...
	I0503 15:21:48.596446    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 537c7b40f8ef"
	I0503 15:21:48.608207    9866 logs.go:123] Gathering logs for kube-apiserver [5b7eb4ef241b] ...
	I0503 15:21:48.608220    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b7eb4ef241b"
	I0503 15:21:48.645871    9866 logs.go:123] Gathering logs for kube-scheduler [b8238d29950d] ...
	I0503 15:21:48.645880    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8238d29950d"
	I0503 15:21:48.657670    9866 logs.go:123] Gathering logs for kube-controller-manager [ecab85e51144] ...
	I0503 15:21:48.657679    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ecab85e51144"
	I0503 15:21:48.683000    9866 logs.go:123] Gathering logs for Docker ...
	I0503 15:21:48.683010    9866 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0503 15:21:48.707436    9866 logs.go:123] Gathering logs for container status ...
	I0503 15:21:48.707443    9866 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0503 15:21:48.719612    9866 logs.go:123] Gathering logs for describe nodes ...
	I0503 15:21:48.719625    9866 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
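After discovery, each cycle fans out the same gather commands through bash: journalctl for kubelet and Docker/cri-docker, dmesg, crictl (falling back to docker ps) for container status, "docker logs --tail 400" per container ID, and kubectl describe nodes. A hedged sketch of that sweep, reusing the exact command strings from the log; the map and loop are illustrative, not the ssh_runner API, and in the real flow these run over SSH inside the guest:

    // Illustrative only: the per-cycle log sweep seen above, expressed as a
    // table of bash commands. The container ID is one from this report.
    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        sources := map[string]string{
            "kubelet":          "sudo journalctl -u kubelet -n 400",
            "dmesg":            "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400",
            "Docker":           "sudo journalctl -u docker -u cri-docker -n 400",
            "container status": "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a",
            "kube-apiserver":   "docker logs --tail 400 9153de1fe283",
            "describe nodes":   "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig",
        }
        for name, cmd := range sources {
            fmt.Println("Gathering logs for", name, "...")
            out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
            if err != nil {
                fmt.Println(name, "failed:", err)
            }
            _ = out // the real flow appends this output to the failure report
        }
    }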
	I0503 15:21:51.261520    9866 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0503 15:21:56.263645    9866 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0503 15:21:56.263785    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0503 15:21:56.276623    9866 logs.go:276] 2 containers: [9153de1fe283 5b7eb4ef241b]
	I0503 15:21:56.276708    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0503 15:21:56.287966    9866 logs.go:276] 2 containers: [40c7a6c8aada c5583124a53e]
	I0503 15:21:56.288039    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0503 15:21:56.298953    9866 logs.go:276] 1 containers: [a0cf5fe4185b]
	I0503 15:21:56.299021    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0503 15:21:56.309767    9866 logs.go:276] 2 containers: [b8238d29950d 4475fda52f0c]
	I0503 15:21:56.309834    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0503 15:21:56.320354    9866 logs.go:276] 1 containers: [acb5f92ab28d]
	I0503 15:21:56.320415    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0503 15:21:56.330871    9866 logs.go:276] 2 containers: [ecab85e51144 ed9610f55b0b]
	I0503 15:21:56.330933    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0503 15:21:56.340989    9866 logs.go:276] 0 containers: []
	W0503 15:21:56.341001    9866 logs.go:278] No container was found matching "kindnet"
	I0503 15:21:56.341062    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0503 15:21:56.351481    9866 logs.go:276] 2 containers: [537c7b40f8ef c67b61af436e]
	I0503 15:21:56.351500    9866 logs.go:123] Gathering logs for kube-apiserver [5b7eb4ef241b] ...
	I0503 15:21:56.351505    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b7eb4ef241b"
	I0503 15:21:56.390700    9866 logs.go:123] Gathering logs for coredns [a0cf5fe4185b] ...
	I0503 15:21:56.390711    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0cf5fe4185b"
	I0503 15:21:56.402063    9866 logs.go:123] Gathering logs for kube-proxy [acb5f92ab28d] ...
	I0503 15:21:56.402077    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 acb5f92ab28d"
	I0503 15:21:56.415467    9866 logs.go:123] Gathering logs for kube-controller-manager [ed9610f55b0b] ...
	I0503 15:21:56.415477    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed9610f55b0b"
	I0503 15:21:56.428780    9866 logs.go:123] Gathering logs for storage-provisioner [537c7b40f8ef] ...
	I0503 15:21:56.428793    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 537c7b40f8ef"
	I0503 15:21:56.440010    9866 logs.go:123] Gathering logs for storage-provisioner [c67b61af436e] ...
	I0503 15:21:56.440020    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c67b61af436e"
	I0503 15:21:56.450941    9866 logs.go:123] Gathering logs for Docker ...
	I0503 15:21:56.450957    9866 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0503 15:21:56.473590    9866 logs.go:123] Gathering logs for kubelet ...
	I0503 15:21:56.473597    9866 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0503 15:21:56.510310    9866 logs.go:123] Gathering logs for container status ...
	I0503 15:21:56.510321    9866 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0503 15:21:56.522027    9866 logs.go:123] Gathering logs for etcd [40c7a6c8aada] ...
	I0503 15:21:56.522038    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 40c7a6c8aada"
	I0503 15:21:56.536179    9866 logs.go:123] Gathering logs for etcd [c5583124a53e] ...
	I0503 15:21:56.536191    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5583124a53e"
	I0503 15:21:56.550888    9866 logs.go:123] Gathering logs for kube-scheduler [b8238d29950d] ...
	I0503 15:21:56.550896    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8238d29950d"
	I0503 15:21:56.563061    9866 logs.go:123] Gathering logs for kube-scheduler [4475fda52f0c] ...
	I0503 15:21:56.563071    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4475fda52f0c"
	I0503 15:21:56.578327    9866 logs.go:123] Gathering logs for dmesg ...
	I0503 15:21:56.578341    9866 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0503 15:21:56.582366    9866 logs.go:123] Gathering logs for kube-apiserver [9153de1fe283] ...
	I0503 15:21:56.582372    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9153de1fe283"
	I0503 15:21:56.597722    9866 logs.go:123] Gathering logs for kube-controller-manager [ecab85e51144] ...
	I0503 15:21:56.597733    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ecab85e51144"
	I0503 15:21:56.615631    9866 logs.go:123] Gathering logs for describe nodes ...
	I0503 15:21:56.615641    9866 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0503 15:21:59.152516    9866 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0503 15:22:04.154689    9866 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0503 15:22:04.154978    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0503 15:22:04.182556    9866 logs.go:276] 2 containers: [9153de1fe283 5b7eb4ef241b]
	I0503 15:22:04.182684    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0503 15:22:04.201037    9866 logs.go:276] 2 containers: [40c7a6c8aada c5583124a53e]
	I0503 15:22:04.201141    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0503 15:22:04.214711    9866 logs.go:276] 1 containers: [a0cf5fe4185b]
	I0503 15:22:04.214782    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0503 15:22:04.225948    9866 logs.go:276] 2 containers: [b8238d29950d 4475fda52f0c]
	I0503 15:22:04.226015    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0503 15:22:04.236839    9866 logs.go:276] 1 containers: [acb5f92ab28d]
	I0503 15:22:04.236912    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0503 15:22:04.247074    9866 logs.go:276] 2 containers: [ecab85e51144 ed9610f55b0b]
	I0503 15:22:04.247138    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0503 15:22:04.257044    9866 logs.go:276] 0 containers: []
	W0503 15:22:04.257058    9866 logs.go:278] No container was found matching "kindnet"
	I0503 15:22:04.257112    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0503 15:22:04.267317    9866 logs.go:276] 2 containers: [537c7b40f8ef c67b61af436e]
	I0503 15:22:04.267334    9866 logs.go:123] Gathering logs for kube-controller-manager [ecab85e51144] ...
	I0503 15:22:04.267339    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ecab85e51144"
	I0503 15:22:04.285718    9866 logs.go:123] Gathering logs for storage-provisioner [537c7b40f8ef] ...
	I0503 15:22:04.285728    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 537c7b40f8ef"
	I0503 15:22:04.297726    9866 logs.go:123] Gathering logs for describe nodes ...
	I0503 15:22:04.297738    9866 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0503 15:22:04.334491    9866 logs.go:123] Gathering logs for kube-apiserver [9153de1fe283] ...
	I0503 15:22:04.334503    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9153de1fe283"
	I0503 15:22:04.348718    9866 logs.go:123] Gathering logs for etcd [40c7a6c8aada] ...
	I0503 15:22:04.348728    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 40c7a6c8aada"
	I0503 15:22:04.363348    9866 logs.go:123] Gathering logs for kube-scheduler [4475fda52f0c] ...
	I0503 15:22:04.363364    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4475fda52f0c"
	I0503 15:22:04.378052    9866 logs.go:123] Gathering logs for container status ...
	I0503 15:22:04.378063    9866 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0503 15:22:04.390302    9866 logs.go:123] Gathering logs for kubelet ...
	I0503 15:22:04.390314    9866 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0503 15:22:04.428501    9866 logs.go:123] Gathering logs for kube-apiserver [5b7eb4ef241b] ...
	I0503 15:22:04.428515    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b7eb4ef241b"
	I0503 15:22:04.467223    9866 logs.go:123] Gathering logs for etcd [c5583124a53e] ...
	I0503 15:22:04.467232    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5583124a53e"
	I0503 15:22:04.481818    9866 logs.go:123] Gathering logs for Docker ...
	I0503 15:22:04.481827    9866 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0503 15:22:04.506542    9866 logs.go:123] Gathering logs for kube-scheduler [b8238d29950d] ...
	I0503 15:22:04.506551    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8238d29950d"
	I0503 15:22:04.517724    9866 logs.go:123] Gathering logs for kube-controller-manager [ed9610f55b0b] ...
	I0503 15:22:04.517736    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed9610f55b0b"
	I0503 15:22:04.533949    9866 logs.go:123] Gathering logs for dmesg ...
	I0503 15:22:04.533958    9866 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0503 15:22:04.538177    9866 logs.go:123] Gathering logs for coredns [a0cf5fe4185b] ...
	I0503 15:22:04.538186    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0cf5fe4185b"
	I0503 15:22:04.549410    9866 logs.go:123] Gathering logs for kube-proxy [acb5f92ab28d] ...
	I0503 15:22:04.549421    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 acb5f92ab28d"
	I0503 15:22:04.561504    9866 logs.go:123] Gathering logs for storage-provisioner [c67b61af436e] ...
	I0503 15:22:04.561514    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c67b61af436e"
	I0503 15:22:07.074806    9866 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0503 15:22:12.077001    9866 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0503 15:22:12.077094    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0503 15:22:12.087913    9866 logs.go:276] 2 containers: [9153de1fe283 5b7eb4ef241b]
	I0503 15:22:12.087986    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0503 15:22:12.098564    9866 logs.go:276] 2 containers: [40c7a6c8aada c5583124a53e]
	I0503 15:22:12.098635    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0503 15:22:12.109133    9866 logs.go:276] 1 containers: [a0cf5fe4185b]
	I0503 15:22:12.109203    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0503 15:22:12.121057    9866 logs.go:276] 2 containers: [b8238d29950d 4475fda52f0c]
	I0503 15:22:12.121126    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0503 15:22:12.131427    9866 logs.go:276] 1 containers: [acb5f92ab28d]
	I0503 15:22:12.131497    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0503 15:22:12.142072    9866 logs.go:276] 2 containers: [ecab85e51144 ed9610f55b0b]
	I0503 15:22:12.142152    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0503 15:22:12.152164    9866 logs.go:276] 0 containers: []
	W0503 15:22:12.152176    9866 logs.go:278] No container was found matching "kindnet"
	I0503 15:22:12.152246    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0503 15:22:12.168385    9866 logs.go:276] 2 containers: [537c7b40f8ef c67b61af436e]
	I0503 15:22:12.168404    9866 logs.go:123] Gathering logs for dmesg ...
	I0503 15:22:12.168410    9866 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0503 15:22:12.173043    9866 logs.go:123] Gathering logs for coredns [a0cf5fe4185b] ...
	I0503 15:22:12.173049    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0cf5fe4185b"
	I0503 15:22:12.184288    9866 logs.go:123] Gathering logs for kube-apiserver [9153de1fe283] ...
	I0503 15:22:12.184300    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9153de1fe283"
	I0503 15:22:12.198351    9866 logs.go:123] Gathering logs for kube-apiserver [5b7eb4ef241b] ...
	I0503 15:22:12.198364    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b7eb4ef241b"
	I0503 15:22:12.236419    9866 logs.go:123] Gathering logs for etcd [40c7a6c8aada] ...
	I0503 15:22:12.236429    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 40c7a6c8aada"
	I0503 15:22:12.250264    9866 logs.go:123] Gathering logs for kube-scheduler [b8238d29950d] ...
	I0503 15:22:12.250277    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8238d29950d"
	I0503 15:22:12.262224    9866 logs.go:123] Gathering logs for kube-scheduler [4475fda52f0c] ...
	I0503 15:22:12.262236    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4475fda52f0c"
	I0503 15:22:12.276976    9866 logs.go:123] Gathering logs for Docker ...
	I0503 15:22:12.276987    9866 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0503 15:22:12.301434    9866 logs.go:123] Gathering logs for kubelet ...
	I0503 15:22:12.301451    9866 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0503 15:22:12.339109    9866 logs.go:123] Gathering logs for describe nodes ...
	I0503 15:22:12.339119    9866 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0503 15:22:12.374062    9866 logs.go:123] Gathering logs for kube-controller-manager [ecab85e51144] ...
	I0503 15:22:12.374074    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ecab85e51144"
	I0503 15:22:12.391909    9866 logs.go:123] Gathering logs for kube-controller-manager [ed9610f55b0b] ...
	I0503 15:22:12.391920    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed9610f55b0b"
	I0503 15:22:12.404942    9866 logs.go:123] Gathering logs for storage-provisioner [537c7b40f8ef] ...
	I0503 15:22:12.404956    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 537c7b40f8ef"
	I0503 15:22:12.416075    9866 logs.go:123] Gathering logs for storage-provisioner [c67b61af436e] ...
	I0503 15:22:12.416086    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c67b61af436e"
	I0503 15:22:12.427926    9866 logs.go:123] Gathering logs for container status ...
	I0503 15:22:12.427937    9866 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0503 15:22:12.439752    9866 logs.go:123] Gathering logs for etcd [c5583124a53e] ...
	I0503 15:22:12.439761    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5583124a53e"
	I0503 15:22:12.455039    9866 logs.go:123] Gathering logs for kube-proxy [acb5f92ab28d] ...
	I0503 15:22:12.455050    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 acb5f92ab28d"
	I0503 15:22:14.970956    9866 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0503 15:22:19.973163    9866 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0503 15:22:19.973427    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0503 15:22:19.999684    9866 logs.go:276] 2 containers: [9153de1fe283 5b7eb4ef241b]
	I0503 15:22:19.999811    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0503 15:22:20.016879    9866 logs.go:276] 2 containers: [40c7a6c8aada c5583124a53e]
	I0503 15:22:20.016971    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0503 15:22:20.029992    9866 logs.go:276] 1 containers: [a0cf5fe4185b]
	I0503 15:22:20.030069    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0503 15:22:20.041796    9866 logs.go:276] 2 containers: [b8238d29950d 4475fda52f0c]
	I0503 15:22:20.041868    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0503 15:22:20.055873    9866 logs.go:276] 1 containers: [acb5f92ab28d]
	I0503 15:22:20.055940    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0503 15:22:20.066311    9866 logs.go:276] 2 containers: [ecab85e51144 ed9610f55b0b]
	I0503 15:22:20.066370    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0503 15:22:20.077009    9866 logs.go:276] 0 containers: []
	W0503 15:22:20.077023    9866 logs.go:278] No container was found matching "kindnet"
	I0503 15:22:20.077076    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0503 15:22:20.087505    9866 logs.go:276] 2 containers: [537c7b40f8ef c67b61af436e]
	I0503 15:22:20.087522    9866 logs.go:123] Gathering logs for storage-provisioner [537c7b40f8ef] ...
	I0503 15:22:20.087527    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 537c7b40f8ef"
	I0503 15:22:20.099062    9866 logs.go:123] Gathering logs for storage-provisioner [c67b61af436e] ...
	I0503 15:22:20.099077    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c67b61af436e"
	I0503 15:22:20.110425    9866 logs.go:123] Gathering logs for container status ...
	I0503 15:22:20.110436    9866 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0503 15:22:20.121980    9866 logs.go:123] Gathering logs for dmesg ...
	I0503 15:22:20.121991    9866 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0503 15:22:20.126180    9866 logs.go:123] Gathering logs for etcd [40c7a6c8aada] ...
	I0503 15:22:20.126186    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 40c7a6c8aada"
	I0503 15:22:20.140101    9866 logs.go:123] Gathering logs for kube-proxy [acb5f92ab28d] ...
	I0503 15:22:20.140110    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 acb5f92ab28d"
	I0503 15:22:20.151916    9866 logs.go:123] Gathering logs for Docker ...
	I0503 15:22:20.151927    9866 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0503 15:22:20.174505    9866 logs.go:123] Gathering logs for kube-apiserver [9153de1fe283] ...
	I0503 15:22:20.174515    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9153de1fe283"
	I0503 15:22:20.191225    9866 logs.go:123] Gathering logs for coredns [a0cf5fe4185b] ...
	I0503 15:22:20.191240    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0cf5fe4185b"
	I0503 15:22:20.202789    9866 logs.go:123] Gathering logs for kube-controller-manager [ecab85e51144] ...
	I0503 15:22:20.202799    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ecab85e51144"
	I0503 15:22:20.220504    9866 logs.go:123] Gathering logs for kube-controller-manager [ed9610f55b0b] ...
	I0503 15:22:20.220521    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed9610f55b0b"
	I0503 15:22:20.233702    9866 logs.go:123] Gathering logs for kubelet ...
	I0503 15:22:20.233713    9866 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0503 15:22:20.272755    9866 logs.go:123] Gathering logs for kube-apiserver [5b7eb4ef241b] ...
	I0503 15:22:20.272765    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b7eb4ef241b"
	I0503 15:22:20.310002    9866 logs.go:123] Gathering logs for kube-scheduler [b8238d29950d] ...
	I0503 15:22:20.310013    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8238d29950d"
	I0503 15:22:20.321501    9866 logs.go:123] Gathering logs for kube-scheduler [4475fda52f0c] ...
	I0503 15:22:20.321514    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4475fda52f0c"
	I0503 15:22:20.336077    9866 logs.go:123] Gathering logs for describe nodes ...
	I0503 15:22:20.336088    9866 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0503 15:22:20.371556    9866 logs.go:123] Gathering logs for etcd [c5583124a53e] ...
	I0503 15:22:20.371567    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5583124a53e"
	I0503 15:22:22.885754    9866 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0503 15:22:27.888179    9866 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0503 15:22:27.888631    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0503 15:22:27.930927    9866 logs.go:276] 2 containers: [9153de1fe283 5b7eb4ef241b]
	I0503 15:22:27.931060    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0503 15:22:27.954051    9866 logs.go:276] 2 containers: [40c7a6c8aada c5583124a53e]
	I0503 15:22:27.954142    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0503 15:22:27.969042    9866 logs.go:276] 1 containers: [a0cf5fe4185b]
	I0503 15:22:27.969121    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0503 15:22:27.983187    9866 logs.go:276] 2 containers: [b8238d29950d 4475fda52f0c]
	I0503 15:22:27.983261    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0503 15:22:27.993584    9866 logs.go:276] 1 containers: [acb5f92ab28d]
	I0503 15:22:27.993646    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0503 15:22:28.004921    9866 logs.go:276] 2 containers: [ecab85e51144 ed9610f55b0b]
	I0503 15:22:28.004991    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0503 15:22:28.015539    9866 logs.go:276] 0 containers: []
	W0503 15:22:28.015551    9866 logs.go:278] No container was found matching "kindnet"
	I0503 15:22:28.015607    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0503 15:22:28.029278    9866 logs.go:276] 2 containers: [537c7b40f8ef c67b61af436e]
	I0503 15:22:28.029297    9866 logs.go:123] Gathering logs for coredns [a0cf5fe4185b] ...
	I0503 15:22:28.029302    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0cf5fe4185b"
	I0503 15:22:28.046917    9866 logs.go:123] Gathering logs for kube-controller-manager [ed9610f55b0b] ...
	I0503 15:22:28.046929    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed9610f55b0b"
	I0503 15:22:28.063220    9866 logs.go:123] Gathering logs for storage-provisioner [c67b61af436e] ...
	I0503 15:22:28.063231    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c67b61af436e"
	I0503 15:22:28.079886    9866 logs.go:123] Gathering logs for kubelet ...
	I0503 15:22:28.079902    9866 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0503 15:22:28.117820    9866 logs.go:123] Gathering logs for describe nodes ...
	I0503 15:22:28.117832    9866 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0503 15:22:28.152920    9866 logs.go:123] Gathering logs for kube-apiserver [9153de1fe283] ...
	I0503 15:22:28.152934    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9153de1fe283"
	I0503 15:22:28.166836    9866 logs.go:123] Gathering logs for etcd [40c7a6c8aada] ...
	I0503 15:22:28.166848    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 40c7a6c8aada"
	I0503 15:22:28.180222    9866 logs.go:123] Gathering logs for etcd [c5583124a53e] ...
	I0503 15:22:28.180234    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5583124a53e"
	I0503 15:22:28.194476    9866 logs.go:123] Gathering logs for dmesg ...
	I0503 15:22:28.194489    9866 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0503 15:22:28.198624    9866 logs.go:123] Gathering logs for kube-scheduler [4475fda52f0c] ...
	I0503 15:22:28.198631    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4475fda52f0c"
	I0503 15:22:28.213268    9866 logs.go:123] Gathering logs for kube-proxy [acb5f92ab28d] ...
	I0503 15:22:28.213278    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 acb5f92ab28d"
	I0503 15:22:28.224603    9866 logs.go:123] Gathering logs for storage-provisioner [537c7b40f8ef] ...
	I0503 15:22:28.224614    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 537c7b40f8ef"
	I0503 15:22:28.235943    9866 logs.go:123] Gathering logs for kube-apiserver [5b7eb4ef241b] ...
	I0503 15:22:28.235953    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b7eb4ef241b"
	I0503 15:22:28.272981    9866 logs.go:123] Gathering logs for kube-scheduler [b8238d29950d] ...
	I0503 15:22:28.272996    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8238d29950d"
	I0503 15:22:28.285121    9866 logs.go:123] Gathering logs for kube-controller-manager [ecab85e51144] ...
	I0503 15:22:28.285131    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ecab85e51144"
	I0503 15:22:28.302618    9866 logs.go:123] Gathering logs for Docker ...
	I0503 15:22:28.302629    9866 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0503 15:22:28.325298    9866 logs.go:123] Gathering logs for container status ...
	I0503 15:22:28.325304    9866 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0503 15:22:30.839441    9866 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0503 15:22:35.842052    9866 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0503 15:22:35.842303    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0503 15:22:35.876775    9866 logs.go:276] 2 containers: [9153de1fe283 5b7eb4ef241b]
	I0503 15:22:35.876919    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0503 15:22:35.901656    9866 logs.go:276] 2 containers: [40c7a6c8aada c5583124a53e]
	I0503 15:22:35.901754    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0503 15:22:35.917133    9866 logs.go:276] 1 containers: [a0cf5fe4185b]
	I0503 15:22:35.917213    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0503 15:22:35.933522    9866 logs.go:276] 2 containers: [b8238d29950d 4475fda52f0c]
	I0503 15:22:35.933591    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0503 15:22:35.945757    9866 logs.go:276] 1 containers: [acb5f92ab28d]
	I0503 15:22:35.945828    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0503 15:22:35.956569    9866 logs.go:276] 2 containers: [ecab85e51144 ed9610f55b0b]
	I0503 15:22:35.956636    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0503 15:22:35.967181    9866 logs.go:276] 0 containers: []
	W0503 15:22:35.967199    9866 logs.go:278] No container was found matching "kindnet"
	I0503 15:22:35.967259    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0503 15:22:35.977907    9866 logs.go:276] 2 containers: [537c7b40f8ef c67b61af436e]
	I0503 15:22:35.977925    9866 logs.go:123] Gathering logs for kube-scheduler [b8238d29950d] ...
	I0503 15:22:35.977930    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8238d29950d"
	I0503 15:22:35.989872    9866 logs.go:123] Gathering logs for kube-controller-manager [ed9610f55b0b] ...
	I0503 15:22:35.989884    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed9610f55b0b"
	I0503 15:22:36.003731    9866 logs.go:123] Gathering logs for container status ...
	I0503 15:22:36.003742    9866 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0503 15:22:36.015910    9866 logs.go:123] Gathering logs for kube-apiserver [9153de1fe283] ...
	I0503 15:22:36.015921    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9153de1fe283"
	I0503 15:22:36.030599    9866 logs.go:123] Gathering logs for describe nodes ...
	I0503 15:22:36.030609    9866 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0503 15:22:36.067690    9866 logs.go:123] Gathering logs for etcd [40c7a6c8aada] ...
	I0503 15:22:36.067703    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 40c7a6c8aada"
	I0503 15:22:36.082455    9866 logs.go:123] Gathering logs for kube-controller-manager [ecab85e51144] ...
	I0503 15:22:36.082471    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ecab85e51144"
	I0503 15:22:36.103435    9866 logs.go:123] Gathering logs for dmesg ...
	I0503 15:22:36.103457    9866 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0503 15:22:36.110814    9866 logs.go:123] Gathering logs for kube-proxy [acb5f92ab28d] ...
	I0503 15:22:36.110827    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 acb5f92ab28d"
	I0503 15:22:36.127023    9866 logs.go:123] Gathering logs for storage-provisioner [c67b61af436e] ...
	I0503 15:22:36.127035    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c67b61af436e"
	I0503 15:22:36.139167    9866 logs.go:123] Gathering logs for Docker ...
	I0503 15:22:36.139177    9866 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0503 15:22:36.162183    9866 logs.go:123] Gathering logs for kube-scheduler [4475fda52f0c] ...
	I0503 15:22:36.162206    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4475fda52f0c"
	I0503 15:22:36.181700    9866 logs.go:123] Gathering logs for kube-apiserver [5b7eb4ef241b] ...
	I0503 15:22:36.181710    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b7eb4ef241b"
	I0503 15:22:36.219970    9866 logs.go:123] Gathering logs for etcd [c5583124a53e] ...
	I0503 15:22:36.219981    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5583124a53e"
	I0503 15:22:36.238835    9866 logs.go:123] Gathering logs for coredns [a0cf5fe4185b] ...
	I0503 15:22:36.238849    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0cf5fe4185b"
	I0503 15:22:36.250003    9866 logs.go:123] Gathering logs for storage-provisioner [537c7b40f8ef] ...
	I0503 15:22:36.250016    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 537c7b40f8ef"
	I0503 15:22:36.261595    9866 logs.go:123] Gathering logs for kubelet ...
	I0503 15:22:36.261611    9866 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0503 15:22:38.801690    9866 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0503 15:22:43.804248    9866 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0503 15:22:43.804511    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0503 15:22:43.831134    9866 logs.go:276] 2 containers: [9153de1fe283 5b7eb4ef241b]
	I0503 15:22:43.831253    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0503 15:22:43.848715    9866 logs.go:276] 2 containers: [40c7a6c8aada c5583124a53e]
	I0503 15:22:43.848802    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0503 15:22:43.861767    9866 logs.go:276] 1 containers: [a0cf5fe4185b]
	I0503 15:22:43.861836    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0503 15:22:43.876377    9866 logs.go:276] 2 containers: [b8238d29950d 4475fda52f0c]
	I0503 15:22:43.876441    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0503 15:22:43.886309    9866 logs.go:276] 1 containers: [acb5f92ab28d]
	I0503 15:22:43.886369    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0503 15:22:43.896531    9866 logs.go:276] 2 containers: [ecab85e51144 ed9610f55b0b]
	I0503 15:22:43.896598    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0503 15:22:43.906855    9866 logs.go:276] 0 containers: []
	W0503 15:22:43.906867    9866 logs.go:278] No container was found matching "kindnet"
	I0503 15:22:43.906925    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0503 15:22:43.917209    9866 logs.go:276] 2 containers: [537c7b40f8ef c67b61af436e]
	I0503 15:22:43.917228    9866 logs.go:123] Gathering logs for coredns [a0cf5fe4185b] ...
	I0503 15:22:43.917233    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0cf5fe4185b"
	I0503 15:22:43.928246    9866 logs.go:123] Gathering logs for kube-controller-manager [ed9610f55b0b] ...
	I0503 15:22:43.928259    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed9610f55b0b"
	I0503 15:22:43.944533    9866 logs.go:123] Gathering logs for storage-provisioner [537c7b40f8ef] ...
	I0503 15:22:43.944544    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 537c7b40f8ef"
	I0503 15:22:43.955708    9866 logs.go:123] Gathering logs for container status ...
	I0503 15:22:43.955720    9866 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0503 15:22:43.967602    9866 logs.go:123] Gathering logs for etcd [40c7a6c8aada] ...
	I0503 15:22:43.967617    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 40c7a6c8aada"
	I0503 15:22:43.981289    9866 logs.go:123] Gathering logs for etcd [c5583124a53e] ...
	I0503 15:22:43.981300    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5583124a53e"
	I0503 15:22:43.995866    9866 logs.go:123] Gathering logs for kube-scheduler [b8238d29950d] ...
	I0503 15:22:43.995880    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8238d29950d"
	I0503 15:22:44.007066    9866 logs.go:123] Gathering logs for kube-proxy [acb5f92ab28d] ...
	I0503 15:22:44.007079    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 acb5f92ab28d"
	I0503 15:22:44.018690    9866 logs.go:123] Gathering logs for storage-provisioner [c67b61af436e] ...
	I0503 15:22:44.018700    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c67b61af436e"
	I0503 15:22:44.031598    9866 logs.go:123] Gathering logs for kubelet ...
	I0503 15:22:44.031609    9866 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0503 15:22:44.071267    9866 logs.go:123] Gathering logs for describe nodes ...
	I0503 15:22:44.071278    9866 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0503 15:22:44.107760    9866 logs.go:123] Gathering logs for kube-apiserver [9153de1fe283] ...
	I0503 15:22:44.107772    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9153de1fe283"
	I0503 15:22:44.122263    9866 logs.go:123] Gathering logs for kube-controller-manager [ecab85e51144] ...
	I0503 15:22:44.122277    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ecab85e51144"
	I0503 15:22:44.140871    9866 logs.go:123] Gathering logs for Docker ...
	I0503 15:22:44.140881    9866 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0503 15:22:44.168432    9866 logs.go:123] Gathering logs for dmesg ...
	I0503 15:22:44.168441    9866 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0503 15:22:44.172649    9866 logs.go:123] Gathering logs for kube-apiserver [5b7eb4ef241b] ...
	I0503 15:22:44.172656    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b7eb4ef241b"
	I0503 15:22:44.211363    9866 logs.go:123] Gathering logs for kube-scheduler [4475fda52f0c] ...
	I0503 15:22:44.211384    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4475fda52f0c"
	I0503 15:22:46.731360    9866 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0503 15:22:51.733710    9866 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0503 15:22:51.734088    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0503 15:22:51.771942    9866 logs.go:276] 2 containers: [9153de1fe283 5b7eb4ef241b]
	I0503 15:22:51.772078    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0503 15:22:51.792632    9866 logs.go:276] 2 containers: [40c7a6c8aada c5583124a53e]
	I0503 15:22:51.792723    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0503 15:22:51.807084    9866 logs.go:276] 1 containers: [a0cf5fe4185b]
	I0503 15:22:51.807160    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0503 15:22:51.823590    9866 logs.go:276] 2 containers: [b8238d29950d 4475fda52f0c]
	I0503 15:22:51.823665    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0503 15:22:51.834018    9866 logs.go:276] 1 containers: [acb5f92ab28d]
	I0503 15:22:51.834092    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0503 15:22:51.844880    9866 logs.go:276] 2 containers: [ecab85e51144 ed9610f55b0b]
	I0503 15:22:51.844952    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0503 15:22:51.854879    9866 logs.go:276] 0 containers: []
	W0503 15:22:51.854892    9866 logs.go:278] No container was found matching "kindnet"
	I0503 15:22:51.854951    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0503 15:22:51.865111    9866 logs.go:276] 2 containers: [537c7b40f8ef c67b61af436e]
	I0503 15:22:51.865129    9866 logs.go:123] Gathering logs for kube-apiserver [9153de1fe283] ...
	I0503 15:22:51.865134    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9153de1fe283"
	I0503 15:22:51.879807    9866 logs.go:123] Gathering logs for coredns [a0cf5fe4185b] ...
	I0503 15:22:51.879820    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0cf5fe4185b"
	I0503 15:22:51.891143    9866 logs.go:123] Gathering logs for kube-scheduler [4475fda52f0c] ...
	I0503 15:22:51.891155    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4475fda52f0c"
	I0503 15:22:51.914807    9866 logs.go:123] Gathering logs for kube-controller-manager [ecab85e51144] ...
	I0503 15:22:51.914821    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ecab85e51144"
	I0503 15:22:51.932070    9866 logs.go:123] Gathering logs for kube-controller-manager [ed9610f55b0b] ...
	I0503 15:22:51.932079    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed9610f55b0b"
	I0503 15:22:51.949647    9866 logs.go:123] Gathering logs for Docker ...
	I0503 15:22:51.949657    9866 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0503 15:22:51.973150    9866 logs.go:123] Gathering logs for dmesg ...
	I0503 15:22:51.973161    9866 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0503 15:22:51.977227    9866 logs.go:123] Gathering logs for etcd [c5583124a53e] ...
	I0503 15:22:51.977237    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5583124a53e"
	I0503 15:22:51.991436    9866 logs.go:123] Gathering logs for container status ...
	I0503 15:22:51.991449    9866 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0503 15:22:52.003222    9866 logs.go:123] Gathering logs for kube-apiserver [5b7eb4ef241b] ...
	I0503 15:22:52.003233    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b7eb4ef241b"
	I0503 15:22:52.041870    9866 logs.go:123] Gathering logs for etcd [40c7a6c8aada] ...
	I0503 15:22:52.041880    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 40c7a6c8aada"
	I0503 15:22:52.058920    9866 logs.go:123] Gathering logs for kube-scheduler [b8238d29950d] ...
	I0503 15:22:52.058931    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8238d29950d"
	I0503 15:22:52.070244    9866 logs.go:123] Gathering logs for storage-provisioner [537c7b40f8ef] ...
	I0503 15:22:52.070257    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 537c7b40f8ef"
	I0503 15:22:52.084961    9866 logs.go:123] Gathering logs for storage-provisioner [c67b61af436e] ...
	I0503 15:22:52.084971    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c67b61af436e"
	I0503 15:22:52.096721    9866 logs.go:123] Gathering logs for kubelet ...
	I0503 15:22:52.096732    9866 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0503 15:22:52.133732    9866 logs.go:123] Gathering logs for kube-proxy [acb5f92ab28d] ...
	I0503 15:22:52.133740    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 acb5f92ab28d"
	I0503 15:22:52.151451    9866 logs.go:123] Gathering logs for describe nodes ...
	I0503 15:22:52.151461    9866 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0503 15:22:54.690886    9866 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0503 15:22:59.693026    9866 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0503 15:22:59.693209    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0503 15:22:59.711206    9866 logs.go:276] 2 containers: [9153de1fe283 5b7eb4ef241b]
	I0503 15:22:59.711295    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0503 15:22:59.725873    9866 logs.go:276] 2 containers: [40c7a6c8aada c5583124a53e]
	I0503 15:22:59.725947    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0503 15:22:59.737225    9866 logs.go:276] 1 containers: [a0cf5fe4185b]
	I0503 15:22:59.737291    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0503 15:22:59.747548    9866 logs.go:276] 2 containers: [b8238d29950d 4475fda52f0c]
	I0503 15:22:59.747614    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0503 15:22:59.758154    9866 logs.go:276] 1 containers: [acb5f92ab28d]
	I0503 15:22:59.758221    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0503 15:22:59.768739    9866 logs.go:276] 2 containers: [ecab85e51144 ed9610f55b0b]
	I0503 15:22:59.768803    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0503 15:22:59.779023    9866 logs.go:276] 0 containers: []
	W0503 15:22:59.779034    9866 logs.go:278] No container was found matching "kindnet"
	I0503 15:22:59.779088    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0503 15:22:59.789544    9866 logs.go:276] 2 containers: [537c7b40f8ef c67b61af436e]
	I0503 15:22:59.789572    9866 logs.go:123] Gathering logs for describe nodes ...
	I0503 15:22:59.789580    9866 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0503 15:22:59.824498    9866 logs.go:123] Gathering logs for kube-apiserver [9153de1fe283] ...
	I0503 15:22:59.824511    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9153de1fe283"
	I0503 15:22:59.838352    9866 logs.go:123] Gathering logs for kube-proxy [acb5f92ab28d] ...
	I0503 15:22:59.838364    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 acb5f92ab28d"
	I0503 15:22:59.853534    9866 logs.go:123] Gathering logs for kube-controller-manager [ed9610f55b0b] ...
	I0503 15:22:59.853543    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed9610f55b0b"
	I0503 15:22:59.867186    9866 logs.go:123] Gathering logs for storage-provisioner [c67b61af436e] ...
	I0503 15:22:59.867197    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c67b61af436e"
	I0503 15:22:59.878372    9866 logs.go:123] Gathering logs for etcd [40c7a6c8aada] ...
	I0503 15:22:59.878385    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 40c7a6c8aada"
	I0503 15:22:59.896657    9866 logs.go:123] Gathering logs for storage-provisioner [537c7b40f8ef] ...
	I0503 15:22:59.896668    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 537c7b40f8ef"
	I0503 15:22:59.908207    9866 logs.go:123] Gathering logs for kubelet ...
	I0503 15:22:59.908217    9866 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0503 15:22:59.944547    9866 logs.go:123] Gathering logs for dmesg ...
	I0503 15:22:59.944559    9866 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0503 15:22:59.948802    9866 logs.go:123] Gathering logs for kube-scheduler [4475fda52f0c] ...
	I0503 15:22:59.948810    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4475fda52f0c"
	I0503 15:22:59.963513    9866 logs.go:123] Gathering logs for Docker ...
	I0503 15:22:59.963525    9866 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0503 15:22:59.985268    9866 logs.go:123] Gathering logs for container status ...
	I0503 15:22:59.985278    9866 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0503 15:22:59.997048    9866 logs.go:123] Gathering logs for kube-apiserver [5b7eb4ef241b] ...
	I0503 15:22:59.997060    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b7eb4ef241b"
	I0503 15:23:00.039917    9866 logs.go:123] Gathering logs for etcd [c5583124a53e] ...
	I0503 15:23:00.039932    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5583124a53e"
	I0503 15:23:00.054084    9866 logs.go:123] Gathering logs for coredns [a0cf5fe4185b] ...
	I0503 15:23:00.054097    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0cf5fe4185b"
	I0503 15:23:00.065277    9866 logs.go:123] Gathering logs for kube-scheduler [b8238d29950d] ...
	I0503 15:23:00.065288    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8238d29950d"
	I0503 15:23:00.076640    9866 logs.go:123] Gathering logs for kube-controller-manager [ecab85e51144] ...
	I0503 15:23:00.076652    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ecab85e51144"
	I0503 15:23:02.595834    9866 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0503 15:23:07.598098    9866 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0503 15:23:07.598337    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0503 15:23:07.621595    9866 logs.go:276] 2 containers: [9153de1fe283 5b7eb4ef241b]
	I0503 15:23:07.621696    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0503 15:23:07.637019    9866 logs.go:276] 2 containers: [40c7a6c8aada c5583124a53e]
	I0503 15:23:07.637093    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0503 15:23:07.649667    9866 logs.go:276] 1 containers: [a0cf5fe4185b]
	I0503 15:23:07.649729    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0503 15:23:07.660938    9866 logs.go:276] 2 containers: [b8238d29950d 4475fda52f0c]
	I0503 15:23:07.661004    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0503 15:23:07.672826    9866 logs.go:276] 1 containers: [acb5f92ab28d]
	I0503 15:23:07.672887    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0503 15:23:07.683360    9866 logs.go:276] 2 containers: [ecab85e51144 ed9610f55b0b]
	I0503 15:23:07.683415    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0503 15:23:07.696036    9866 logs.go:276] 0 containers: []
	W0503 15:23:07.696049    9866 logs.go:278] No container was found matching "kindnet"
	I0503 15:23:07.696102    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0503 15:23:07.706406    9866 logs.go:276] 2 containers: [537c7b40f8ef c67b61af436e]
	I0503 15:23:07.706426    9866 logs.go:123] Gathering logs for kube-apiserver [5b7eb4ef241b] ...
	I0503 15:23:07.706431    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b7eb4ef241b"
	I0503 15:23:07.744311    9866 logs.go:123] Gathering logs for kube-controller-manager [ed9610f55b0b] ...
	I0503 15:23:07.744321    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed9610f55b0b"
	I0503 15:23:07.757722    9866 logs.go:123] Gathering logs for container status ...
	I0503 15:23:07.757732    9866 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0503 15:23:07.771091    9866 logs.go:123] Gathering logs for describe nodes ...
	I0503 15:23:07.771103    9866 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0503 15:23:07.806777    9866 logs.go:123] Gathering logs for kube-apiserver [9153de1fe283] ...
	I0503 15:23:07.806789    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9153de1fe283"
	I0503 15:23:07.820551    9866 logs.go:123] Gathering logs for coredns [a0cf5fe4185b] ...
	I0503 15:23:07.820562    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0cf5fe4185b"
	I0503 15:23:07.832889    9866 logs.go:123] Gathering logs for kube-controller-manager [ecab85e51144] ...
	I0503 15:23:07.832904    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ecab85e51144"
	I0503 15:23:07.850239    9866 logs.go:123] Gathering logs for storage-provisioner [c67b61af436e] ...
	I0503 15:23:07.850249    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c67b61af436e"
	I0503 15:23:07.861825    9866 logs.go:123] Gathering logs for kubelet ...
	I0503 15:23:07.861835    9866 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0503 15:23:07.900546    9866 logs.go:123] Gathering logs for etcd [c5583124a53e] ...
	I0503 15:23:07.900555    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5583124a53e"
	I0503 15:23:07.916227    9866 logs.go:123] Gathering logs for kube-proxy [acb5f92ab28d] ...
	I0503 15:23:07.916236    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 acb5f92ab28d"
	I0503 15:23:07.927551    9866 logs.go:123] Gathering logs for storage-provisioner [537c7b40f8ef] ...
	I0503 15:23:07.927563    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 537c7b40f8ef"
	I0503 15:23:07.943375    9866 logs.go:123] Gathering logs for Docker ...
	I0503 15:23:07.943384    9866 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0503 15:23:07.965076    9866 logs.go:123] Gathering logs for dmesg ...
	I0503 15:23:07.965083    9866 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0503 15:23:07.969458    9866 logs.go:123] Gathering logs for etcd [40c7a6c8aada] ...
	I0503 15:23:07.969467    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 40c7a6c8aada"
	I0503 15:23:07.983462    9866 logs.go:123] Gathering logs for kube-scheduler [b8238d29950d] ...
	I0503 15:23:07.983473    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8238d29950d"
	I0503 15:23:07.999893    9866 logs.go:123] Gathering logs for kube-scheduler [4475fda52f0c] ...
	I0503 15:23:07.999903    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4475fda52f0c"
	I0503 15:23:10.516274    9866 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0503 15:23:15.518809    9866 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0503 15:23:15.519211    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0503 15:23:15.554497    9866 logs.go:276] 2 containers: [9153de1fe283 5b7eb4ef241b]
	I0503 15:23:15.554628    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0503 15:23:15.578563    9866 logs.go:276] 2 containers: [40c7a6c8aada c5583124a53e]
	I0503 15:23:15.578662    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0503 15:23:15.595108    9866 logs.go:276] 1 containers: [a0cf5fe4185b]
	I0503 15:23:15.595189    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0503 15:23:15.608685    9866 logs.go:276] 2 containers: [b8238d29950d 4475fda52f0c]
	I0503 15:23:15.608764    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0503 15:23:15.619105    9866 logs.go:276] 1 containers: [acb5f92ab28d]
	I0503 15:23:15.619178    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0503 15:23:15.630386    9866 logs.go:276] 2 containers: [ecab85e51144 ed9610f55b0b]
	I0503 15:23:15.630456    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0503 15:23:15.640841    9866 logs.go:276] 0 containers: []
	W0503 15:23:15.640856    9866 logs.go:278] No container was found matching "kindnet"
	I0503 15:23:15.640921    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0503 15:23:15.652081    9866 logs.go:276] 2 containers: [537c7b40f8ef c67b61af436e]
	I0503 15:23:15.652100    9866 logs.go:123] Gathering logs for describe nodes ...
	I0503 15:23:15.652106    9866 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0503 15:23:15.688801    9866 logs.go:123] Gathering logs for etcd [40c7a6c8aada] ...
	I0503 15:23:15.688816    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 40c7a6c8aada"
	I0503 15:23:15.703167    9866 logs.go:123] Gathering logs for etcd [c5583124a53e] ...
	I0503 15:23:15.703178    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5583124a53e"
	I0503 15:23:15.717984    9866 logs.go:123] Gathering logs for dmesg ...
	I0503 15:23:15.717998    9866 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0503 15:23:15.722597    9866 logs.go:123] Gathering logs for kube-apiserver [9153de1fe283] ...
	I0503 15:23:15.722604    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9153de1fe283"
	I0503 15:23:15.736994    9866 logs.go:123] Gathering logs for kube-apiserver [5b7eb4ef241b] ...
	I0503 15:23:15.737007    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b7eb4ef241b"
	I0503 15:23:15.775656    9866 logs.go:123] Gathering logs for Docker ...
	I0503 15:23:15.775668    9866 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0503 15:23:15.798400    9866 logs.go:123] Gathering logs for coredns [a0cf5fe4185b] ...
	I0503 15:23:15.798407    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0cf5fe4185b"
	I0503 15:23:15.809555    9866 logs.go:123] Gathering logs for kube-scheduler [b8238d29950d] ...
	I0503 15:23:15.809567    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8238d29950d"
	I0503 15:23:15.821551    9866 logs.go:123] Gathering logs for kube-proxy [acb5f92ab28d] ...
	I0503 15:23:15.821563    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 acb5f92ab28d"
	I0503 15:23:15.833240    9866 logs.go:123] Gathering logs for kube-controller-manager [ed9610f55b0b] ...
	I0503 15:23:15.833251    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed9610f55b0b"
	I0503 15:23:15.846451    9866 logs.go:123] Gathering logs for kubelet ...
	I0503 15:23:15.846461    9866 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0503 15:23:15.885730    9866 logs.go:123] Gathering logs for kube-scheduler [4475fda52f0c] ...
	I0503 15:23:15.885741    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4475fda52f0c"
	I0503 15:23:15.900621    9866 logs.go:123] Gathering logs for kube-controller-manager [ecab85e51144] ...
	I0503 15:23:15.900632    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ecab85e51144"
	I0503 15:23:15.918011    9866 logs.go:123] Gathering logs for storage-provisioner [537c7b40f8ef] ...
	I0503 15:23:15.918021    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 537c7b40f8ef"
	I0503 15:23:15.929492    9866 logs.go:123] Gathering logs for storage-provisioner [c67b61af436e] ...
	I0503 15:23:15.929502    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c67b61af436e"
	I0503 15:23:15.941982    9866 logs.go:123] Gathering logs for container status ...
	I0503 15:23:15.941995    9866 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0503 15:23:18.456008    9866 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0503 15:23:23.458410    9866 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0503 15:23:23.458524    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0503 15:23:23.469394    9866 logs.go:276] 2 containers: [9153de1fe283 5b7eb4ef241b]
	I0503 15:23:23.469462    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0503 15:23:23.479969    9866 logs.go:276] 2 containers: [40c7a6c8aada c5583124a53e]
	I0503 15:23:23.480044    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0503 15:23:23.492272    9866 logs.go:276] 1 containers: [a0cf5fe4185b]
	I0503 15:23:23.492345    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0503 15:23:23.502739    9866 logs.go:276] 2 containers: [b8238d29950d 4475fda52f0c]
	I0503 15:23:23.502811    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0503 15:23:23.513451    9866 logs.go:276] 1 containers: [acb5f92ab28d]
	I0503 15:23:23.513510    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0503 15:23:23.523642    9866 logs.go:276] 2 containers: [ecab85e51144 ed9610f55b0b]
	I0503 15:23:23.523712    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0503 15:23:23.533910    9866 logs.go:276] 0 containers: []
	W0503 15:23:23.533921    9866 logs.go:278] No container was found matching "kindnet"
	I0503 15:23:23.533978    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0503 15:23:23.544202    9866 logs.go:276] 2 containers: [537c7b40f8ef c67b61af436e]
	I0503 15:23:23.544221    9866 logs.go:123] Gathering logs for kube-scheduler [b8238d29950d] ...
	I0503 15:23:23.544226    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8238d29950d"
	I0503 15:23:23.555705    9866 logs.go:123] Gathering logs for kube-controller-manager [ed9610f55b0b] ...
	I0503 15:23:23.555715    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed9610f55b0b"
	I0503 15:23:23.568970    9866 logs.go:123] Gathering logs for storage-provisioner [537c7b40f8ef] ...
	I0503 15:23:23.568981    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 537c7b40f8ef"
	I0503 15:23:23.580199    9866 logs.go:123] Gathering logs for container status ...
	I0503 15:23:23.580210    9866 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0503 15:23:23.593836    9866 logs.go:123] Gathering logs for etcd [40c7a6c8aada] ...
	I0503 15:23:23.593848    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 40c7a6c8aada"
	I0503 15:23:23.608781    9866 logs.go:123] Gathering logs for coredns [a0cf5fe4185b] ...
	I0503 15:23:23.608792    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0cf5fe4185b"
	I0503 15:23:23.619762    9866 logs.go:123] Gathering logs for kube-scheduler [4475fda52f0c] ...
	I0503 15:23:23.619773    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4475fda52f0c"
	I0503 15:23:23.637345    9866 logs.go:123] Gathering logs for kube-controller-manager [ecab85e51144] ...
	I0503 15:23:23.637358    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ecab85e51144"
	I0503 15:23:23.654746    9866 logs.go:123] Gathering logs for kubelet ...
	I0503 15:23:23.654761    9866 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0503 15:23:23.692486    9866 logs.go:123] Gathering logs for describe nodes ...
	I0503 15:23:23.692496    9866 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0503 15:23:23.726493    9866 logs.go:123] Gathering logs for kube-apiserver [5b7eb4ef241b] ...
	I0503 15:23:23.726504    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b7eb4ef241b"
	I0503 15:23:23.764397    9866 logs.go:123] Gathering logs for etcd [c5583124a53e] ...
	I0503 15:23:23.764411    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5583124a53e"
	I0503 15:23:23.779117    9866 logs.go:123] Gathering logs for kube-proxy [acb5f92ab28d] ...
	I0503 15:23:23.779128    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 acb5f92ab28d"
	I0503 15:23:23.790530    9866 logs.go:123] Gathering logs for dmesg ...
	I0503 15:23:23.790541    9866 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0503 15:23:23.794631    9866 logs.go:123] Gathering logs for kube-apiserver [9153de1fe283] ...
	I0503 15:23:23.794638    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9153de1fe283"
	I0503 15:23:23.808281    9866 logs.go:123] Gathering logs for storage-provisioner [c67b61af436e] ...
	I0503 15:23:23.808291    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c67b61af436e"
	I0503 15:23:23.819079    9866 logs.go:123] Gathering logs for Docker ...
	I0503 15:23:23.819090    9866 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0503 15:23:26.344339    9866 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0503 15:23:31.346593    9866 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0503 15:23:31.346738    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0503 15:23:31.361526    9866 logs.go:276] 2 containers: [9153de1fe283 5b7eb4ef241b]
	I0503 15:23:31.361590    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0503 15:23:31.383161    9866 logs.go:276] 2 containers: [40c7a6c8aada c5583124a53e]
	I0503 15:23:31.383233    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0503 15:23:31.412840    9866 logs.go:276] 1 containers: [a0cf5fe4185b]
	I0503 15:23:31.412906    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0503 15:23:31.423413    9866 logs.go:276] 2 containers: [b8238d29950d 4475fda52f0c]
	I0503 15:23:31.423481    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0503 15:23:31.433904    9866 logs.go:276] 1 containers: [acb5f92ab28d]
	I0503 15:23:31.433966    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0503 15:23:31.444317    9866 logs.go:276] 2 containers: [ecab85e51144 ed9610f55b0b]
	I0503 15:23:31.444386    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0503 15:23:31.453793    9866 logs.go:276] 0 containers: []
	W0503 15:23:31.453803    9866 logs.go:278] No container was found matching "kindnet"
	I0503 15:23:31.453856    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0503 15:23:31.463792    9866 logs.go:276] 2 containers: [537c7b40f8ef c67b61af436e]
	I0503 15:23:31.463814    9866 logs.go:123] Gathering logs for kube-scheduler [4475fda52f0c] ...
	I0503 15:23:31.463820    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4475fda52f0c"
	I0503 15:23:31.479042    9866 logs.go:123] Gathering logs for kube-proxy [acb5f92ab28d] ...
	I0503 15:23:31.479053    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 acb5f92ab28d"
	I0503 15:23:31.491341    9866 logs.go:123] Gathering logs for storage-provisioner [537c7b40f8ef] ...
	I0503 15:23:31.491352    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 537c7b40f8ef"
	I0503 15:23:31.503535    9866 logs.go:123] Gathering logs for storage-provisioner [c67b61af436e] ...
	I0503 15:23:31.503549    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c67b61af436e"
	I0503 15:23:31.514818    9866 logs.go:123] Gathering logs for container status ...
	I0503 15:23:31.514834    9866 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0503 15:23:31.526496    9866 logs.go:123] Gathering logs for dmesg ...
	I0503 15:23:31.526511    9866 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0503 15:23:31.530535    9866 logs.go:123] Gathering logs for etcd [c5583124a53e] ...
	I0503 15:23:31.530541    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5583124a53e"
	I0503 15:23:31.545060    9866 logs.go:123] Gathering logs for kube-scheduler [b8238d29950d] ...
	I0503 15:23:31.545070    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8238d29950d"
	I0503 15:23:31.556870    9866 logs.go:123] Gathering logs for kube-controller-manager [ecab85e51144] ...
	I0503 15:23:31.556880    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ecab85e51144"
	I0503 15:23:31.573743    9866 logs.go:123] Gathering logs for kube-apiserver [9153de1fe283] ...
	I0503 15:23:31.573753    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9153de1fe283"
	I0503 15:23:31.587811    9866 logs.go:123] Gathering logs for kube-apiserver [5b7eb4ef241b] ...
	I0503 15:23:31.587822    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b7eb4ef241b"
	I0503 15:23:31.626584    9866 logs.go:123] Gathering logs for etcd [40c7a6c8aada] ...
	I0503 15:23:31.626596    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 40c7a6c8aada"
	I0503 15:23:31.640270    9866 logs.go:123] Gathering logs for Docker ...
	I0503 15:23:31.640280    9866 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0503 15:23:31.662598    9866 logs.go:123] Gathering logs for kubelet ...
	I0503 15:23:31.662605    9866 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0503 15:23:31.700455    9866 logs.go:123] Gathering logs for describe nodes ...
	I0503 15:23:31.700463    9866 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0503 15:23:31.734423    9866 logs.go:123] Gathering logs for coredns [a0cf5fe4185b] ...
	I0503 15:23:31.734435    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0cf5fe4185b"
	I0503 15:23:31.746332    9866 logs.go:123] Gathering logs for kube-controller-manager [ed9610f55b0b] ...
	I0503 15:23:31.746344    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed9610f55b0b"
	I0503 15:23:34.262392    9866 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0503 15:23:39.264750    9866 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0503 15:23:39.264883    9866 kubeadm.go:591] duration metric: took 4m4.508080917s to restartPrimaryControlPlane
	W0503 15:23:39.265020    9866 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0503 15:23:39.265082    9866 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0503 15:23:40.341019    9866 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force": (1.075946834s)
	I0503 15:23:40.341105    9866 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0503 15:23:40.346104    9866 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0503 15:23:40.349012    9866 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0503 15:23:40.351632    9866 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0503 15:23:40.351639    9866 kubeadm.go:156] found existing configuration files:
	
	I0503 15:23:40.351660    9866 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51403 /etc/kubernetes/admin.conf
	I0503 15:23:40.353959    9866 kubeadm.go:162] "https://control-plane.minikube.internal:51403" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:51403 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0503 15:23:40.353981    9866 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0503 15:23:40.356590    9866 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51403 /etc/kubernetes/kubelet.conf
	I0503 15:23:40.359389    9866 kubeadm.go:162] "https://control-plane.minikube.internal:51403" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:51403 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0503 15:23:40.359409    9866 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0503 15:23:40.361912    9866 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51403 /etc/kubernetes/controller-manager.conf
	I0503 15:23:40.364973    9866 kubeadm.go:162] "https://control-plane.minikube.internal:51403" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:51403 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0503 15:23:40.364995    9866 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0503 15:23:40.368091    9866 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51403 /etc/kubernetes/scheduler.conf
	I0503 15:23:40.370656    9866 kubeadm.go:162] "https://control-plane.minikube.internal:51403" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:51403 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0503 15:23:40.370679    9866 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0503 15:23:40.373505    9866 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0503 15:23:40.391867    9866 kubeadm.go:309] [init] Using Kubernetes version: v1.24.1
	I0503 15:23:40.391903    9866 kubeadm.go:309] [preflight] Running pre-flight checks
	I0503 15:23:40.443997    9866 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0503 15:23:40.444056    9866 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0503 15:23:40.444112    9866 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0503 15:23:40.493153    9866 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0503 15:23:40.497283    9866 out.go:204]   - Generating certificates and keys ...
	I0503 15:23:40.497378    9866 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0503 15:23:40.497450    9866 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0503 15:23:40.497541    9866 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0503 15:23:40.497581    9866 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0503 15:23:40.497655    9866 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0503 15:23:40.497768    9866 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0503 15:23:40.497881    9866 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0503 15:23:40.497998    9866 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0503 15:23:40.498062    9866 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0503 15:23:40.498123    9866 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0503 15:23:40.498207    9866 kubeadm.go:309] [certs] Using the existing "sa" key
	I0503 15:23:40.498299    9866 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0503 15:23:40.824807    9866 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0503 15:23:40.925458    9866 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0503 15:23:41.036175    9866 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0503 15:23:41.112353    9866 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0503 15:23:41.139434    9866 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0503 15:23:41.140020    9866 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0503 15:23:41.140039    9866 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0503 15:23:41.230797    9866 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0503 15:23:41.234569    9866 out.go:204]   - Booting up control plane ...
	I0503 15:23:41.234616    9866 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0503 15:23:41.234659    9866 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0503 15:23:41.234703    9866 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0503 15:23:41.234752    9866 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0503 15:23:41.234826    9866 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0503 15:23:46.239775    9866 kubeadm.go:309] [apiclient] All control plane components are healthy after 5.005833 seconds
	I0503 15:23:46.239897    9866 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0503 15:23:46.248056    9866 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0503 15:23:46.761890    9866 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0503 15:23:46.762047    9866 kubeadm.go:309] [mark-control-plane] Marking the node stopped-upgrade-139000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0503 15:23:47.267721    9866 kubeadm.go:309] [bootstrap-token] Using token: rykde1.sku9qwhqyxujsdfz
	I0503 15:23:47.271637    9866 out.go:204]   - Configuring RBAC rules ...
	I0503 15:23:47.271713    9866 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0503 15:23:47.271770    9866 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0503 15:23:47.278084    9866 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0503 15:23:47.279396    9866 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0503 15:23:47.280355    9866 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0503 15:23:47.281857    9866 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0503 15:23:47.286303    9866 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0503 15:23:47.448730    9866 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0503 15:23:47.673517    9866 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0503 15:23:47.673970    9866 kubeadm.go:309] 
	I0503 15:23:47.673999    9866 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0503 15:23:47.674007    9866 kubeadm.go:309] 
	I0503 15:23:47.674045    9866 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0503 15:23:47.674051    9866 kubeadm.go:309] 
	I0503 15:23:47.674068    9866 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0503 15:23:47.674097    9866 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0503 15:23:47.674129    9866 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0503 15:23:47.674134    9866 kubeadm.go:309] 
	I0503 15:23:47.674157    9866 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0503 15:23:47.674164    9866 kubeadm.go:309] 
	I0503 15:23:47.674183    9866 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0503 15:23:47.674185    9866 kubeadm.go:309] 
	I0503 15:23:47.674209    9866 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0503 15:23:47.674245    9866 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0503 15:23:47.674291    9866 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0503 15:23:47.674297    9866 kubeadm.go:309] 
	I0503 15:23:47.674335    9866 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0503 15:23:47.674373    9866 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0503 15:23:47.674377    9866 kubeadm.go:309] 
	I0503 15:23:47.674424    9866 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token rykde1.sku9qwhqyxujsdfz \
	I0503 15:23:47.674470    9866 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:33737b87ad0e0d503b26dd571c4ff24ab2c323775c7952fd1688c095e7432c54 \
	I0503 15:23:47.674482    9866 kubeadm.go:309] 	--control-plane 
	I0503 15:23:47.674485    9866 kubeadm.go:309] 
	I0503 15:23:47.674525    9866 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0503 15:23:47.674527    9866 kubeadm.go:309] 
	I0503 15:23:47.674576    9866 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token rykde1.sku9qwhqyxujsdfz \
	I0503 15:23:47.674620    9866 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:33737b87ad0e0d503b26dd571c4ff24ab2c323775c7952fd1688c095e7432c54 
	I0503 15:23:47.674766    9866 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0503 15:23:47.674846    9866 cni.go:84] Creating CNI manager for ""
	I0503 15:23:47.674856    9866 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0503 15:23:47.677538    9866 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0503 15:23:47.680532    9866 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0503 15:23:47.685257    9866 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0503 15:23:47.690369    9866 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0503 15:23:47.690422    9866 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes stopped-upgrade-139000 minikube.k8s.io/updated_at=2024_05_03T15_23_47_0700 minikube.k8s.io/version=v1.33.0 minikube.k8s.io/commit=cc00050a34cebd4ea4e95f76540d25d17abab09a minikube.k8s.io/name=stopped-upgrade-139000 minikube.k8s.io/primary=true
	I0503 15:23:47.690423    9866 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0503 15:23:47.723070    9866 kubeadm.go:1107] duration metric: took 32.691625ms to wait for elevateKubeSystemPrivileges
	I0503 15:23:47.732188    9866 ops.go:34] apiserver oom_adj: -16
	W0503 15:23:47.732213    9866 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0503 15:23:47.732219    9866 kubeadm.go:393] duration metric: took 4m12.989554s to StartCluster
	I0503 15:23:47.732229    9866 settings.go:142] acquiring lock: {Name:mkee9fdcf0e1a69d3ca7e09bf6e6cf0362ae7cc4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0503 15:23:47.732320    9866 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/18793-7269/kubeconfig
	I0503 15:23:47.732758    9866 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18793-7269/kubeconfig: {Name:mke212dafcd3f736eb33656fd60033aeff2dfcbc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0503 15:23:47.732972    9866 start.go:234] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0503 15:23:47.737594    9866 out.go:177] * Verifying Kubernetes components...
	I0503 15:23:47.732981    9866 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0503 15:23:47.733057    9866 config.go:182] Loaded profile config "stopped-upgrade-139000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0503 15:23:47.745474    9866 addons.go:69] Setting storage-provisioner=true in profile "stopped-upgrade-139000"
	I0503 15:23:47.745489    9866 addons.go:234] Setting addon storage-provisioner=true in "stopped-upgrade-139000"
	W0503 15:23:47.745495    9866 addons.go:243] addon storage-provisioner should already be in state true
	I0503 15:23:47.745512    9866 host.go:66] Checking if "stopped-upgrade-139000" exists ...
	I0503 15:23:47.745518    9866 addons.go:69] Setting default-storageclass=true in profile "stopped-upgrade-139000"
	I0503 15:23:47.745540    9866 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0503 15:23:47.745577    9866 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "stopped-upgrade-139000"
	I0503 15:23:47.747002    9866 kapi.go:59] client config for stopped-upgrade-139000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/18793-7269/.minikube/profiles/stopped-upgrade-139000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/18793-7269/.minikube/profiles/stopped-upgrade-139000/client.key", CAFile:"/Users/jenkins/minikube-integration/18793-7269/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x101f8fcb0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0503 15:23:47.747188    9866 addons.go:234] Setting addon default-storageclass=true in "stopped-upgrade-139000"
	W0503 15:23:47.747194    9866 addons.go:243] addon default-storageclass should already be in state true
	I0503 15:23:47.747203    9866 host.go:66] Checking if "stopped-upgrade-139000" exists ...
	I0503 15:23:47.749392    9866 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0503 15:23:47.753483    9866 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0503 15:23:47.753492    9866 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0503 15:23:47.753501    9866 sshutil.go:53] new ssh client: &{IP:localhost Port:51368 SSHKeyPath:/Users/jenkins/minikube-integration/18793-7269/.minikube/machines/stopped-upgrade-139000/id_rsa Username:docker}
	I0503 15:23:47.754202    9866 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0503 15:23:47.754208    9866 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0503 15:23:47.754212    9866 sshutil.go:53] new ssh client: &{IP:localhost Port:51368 SSHKeyPath:/Users/jenkins/minikube-integration/18793-7269/.minikube/machines/stopped-upgrade-139000/id_rsa Username:docker}
	I0503 15:23:47.840141    9866 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0503 15:23:47.844858    9866 api_server.go:52] waiting for apiserver process to appear ...
	I0503 15:23:47.844897    9866 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0503 15:23:47.849526    9866 api_server.go:72] duration metric: took 116.5455ms to wait for apiserver process to appear ...
	I0503 15:23:47.849538    9866 api_server.go:88] waiting for apiserver healthz status ...
	I0503 15:23:47.849548    9866 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0503 15:23:47.865556    9866 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0503 15:23:47.867165    9866 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0503 15:23:52.851605    9866 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0503 15:23:52.851658    9866 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0503 15:23:57.851918    9866 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0503 15:23:57.851949    9866 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0503 15:24:02.852195    9866 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0503 15:24:02.852217    9866 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0503 15:24:07.852533    9866 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0503 15:24:07.852555    9866 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0503 15:24:12.853014    9866 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0503 15:24:12.853033    9866 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0503 15:24:17.853613    9866 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0503 15:24:17.853636    9866 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W0503 15:24:18.214936    9866 out.go:239] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I0503 15:24:18.219525    9866 out.go:177] * Enabled addons: storage-provisioner
	I0503 15:24:18.225241    9866 addons.go:505] duration metric: took 30.492949916s for enable addons: enabled=[storage-provisioner]
	I0503 15:24:22.854835    9866 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0503 15:24:22.854861    9866 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0503 15:24:27.857157    9866 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0503 15:24:27.857185    9866 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0503 15:24:32.858887    9866 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0503 15:24:32.858907    9866 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0503 15:24:37.860956    9866 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0503 15:24:37.860981    9866 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0503 15:24:42.861101    9866 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0503 15:24:42.861144    9866 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0503 15:24:47.863256    9866 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0503 15:24:47.863357    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0503 15:24:47.874036    9866 logs.go:276] 1 containers: [c8657393748b]
	I0503 15:24:47.874101    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0503 15:24:47.884093    9866 logs.go:276] 1 containers: [1a0896cfe495]
	I0503 15:24:47.884159    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0503 15:24:47.894459    9866 logs.go:276] 2 containers: [04bbb9b6629a c9e00225a075]
	I0503 15:24:47.894523    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0503 15:24:47.905325    9866 logs.go:276] 1 containers: [369a0695bf69]
	I0503 15:24:47.905386    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0503 15:24:47.915387    9866 logs.go:276] 1 containers: [828bdbf057fa]
	I0503 15:24:47.915466    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0503 15:24:47.927098    9866 logs.go:276] 1 containers: [873609a8047c]
	I0503 15:24:47.927166    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0503 15:24:47.937276    9866 logs.go:276] 0 containers: []
	W0503 15:24:47.937286    9866 logs.go:278] No container was found matching "kindnet"
	I0503 15:24:47.937335    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0503 15:24:47.947445    9866 logs.go:276] 1 containers: [12a0ca49b3ed]
	I0503 15:24:47.947465    9866 logs.go:123] Gathering logs for dmesg ...
	I0503 15:24:47.947472    9866 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0503 15:24:47.951928    9866 logs.go:123] Gathering logs for describe nodes ...
	I0503 15:24:47.951934    9866 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0503 15:24:47.988288    9866 logs.go:123] Gathering logs for kube-apiserver [c8657393748b] ...
	I0503 15:24:47.988302    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c8657393748b"
	I0503 15:24:48.002712    9866 logs.go:123] Gathering logs for etcd [1a0896cfe495] ...
	I0503 15:24:48.002722    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1a0896cfe495"
	I0503 15:24:48.016709    9866 logs.go:123] Gathering logs for kube-scheduler [369a0695bf69] ...
	I0503 15:24:48.016720    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 369a0695bf69"
	I0503 15:24:48.031738    9866 logs.go:123] Gathering logs for kube-proxy [828bdbf057fa] ...
	I0503 15:24:48.031748    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 828bdbf057fa"
	I0503 15:24:48.043330    9866 logs.go:123] Gathering logs for storage-provisioner [12a0ca49b3ed] ...
	I0503 15:24:48.043342    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 12a0ca49b3ed"
	I0503 15:24:48.057231    9866 logs.go:123] Gathering logs for kubelet ...
	I0503 15:24:48.057244    9866 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0503 15:24:48.092583    9866 logs.go:123] Gathering logs for container status ...
	I0503 15:24:48.092595    9866 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0503 15:24:48.103318    9866 logs.go:123] Gathering logs for Docker ...
	I0503 15:24:48.103328    9866 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0503 15:24:48.126654    9866 logs.go:123] Gathering logs for coredns [c9e00225a075] ...
	I0503 15:24:48.126666    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9e00225a075"
	I0503 15:24:48.138094    9866 logs.go:123] Gathering logs for kube-controller-manager [873609a8047c] ...
	I0503 15:24:48.138104    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 873609a8047c"
	I0503 15:24:48.156555    9866 logs.go:123] Gathering logs for coredns [04bbb9b6629a] ...
	I0503 15:24:48.156569    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 04bbb9b6629a"
	I0503 15:24:50.669998    9866 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0503 15:24:55.671981    9866 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0503 15:24:55.672050    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0503 15:24:55.682489    9866 logs.go:276] 1 containers: [c8657393748b]
	I0503 15:24:55.682549    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0503 15:24:55.693192    9866 logs.go:276] 1 containers: [1a0896cfe495]
	I0503 15:24:55.693255    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0503 15:24:55.704572    9866 logs.go:276] 2 containers: [04bbb9b6629a c9e00225a075]
	I0503 15:24:55.704635    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0503 15:24:55.716400    9866 logs.go:276] 1 containers: [369a0695bf69]
	I0503 15:24:55.716464    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0503 15:24:55.727023    9866 logs.go:276] 1 containers: [828bdbf057fa]
	I0503 15:24:55.727085    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0503 15:24:55.737888    9866 logs.go:276] 1 containers: [873609a8047c]
	I0503 15:24:55.737948    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0503 15:24:55.747796    9866 logs.go:276] 0 containers: []
	W0503 15:24:55.747807    9866 logs.go:278] No container was found matching "kindnet"
	I0503 15:24:55.747859    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0503 15:24:55.758209    9866 logs.go:276] 1 containers: [12a0ca49b3ed]
	I0503 15:24:55.758228    9866 logs.go:123] Gathering logs for etcd [1a0896cfe495] ...
	I0503 15:24:55.758233    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1a0896cfe495"
	I0503 15:24:55.772801    9866 logs.go:123] Gathering logs for kube-controller-manager [873609a8047c] ...
	I0503 15:24:55.772811    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 873609a8047c"
	I0503 15:24:55.794243    9866 logs.go:123] Gathering logs for Docker ...
	I0503 15:24:55.794254    9866 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0503 15:24:55.820945    9866 logs.go:123] Gathering logs for container status ...
	I0503 15:24:55.820958    9866 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0503 15:24:55.832788    9866 logs.go:123] Gathering logs for dmesg ...
	I0503 15:24:55.832802    9866 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0503 15:24:55.837044    9866 logs.go:123] Gathering logs for kube-apiserver [c8657393748b] ...
	I0503 15:24:55.837052    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c8657393748b"
	I0503 15:24:55.855292    9866 logs.go:123] Gathering logs for coredns [04bbb9b6629a] ...
	I0503 15:24:55.855306    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 04bbb9b6629a"
	I0503 15:24:55.866710    9866 logs.go:123] Gathering logs for coredns [c9e00225a075] ...
	I0503 15:24:55.866723    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9e00225a075"
	I0503 15:24:55.877833    9866 logs.go:123] Gathering logs for kube-scheduler [369a0695bf69] ...
	I0503 15:24:55.877843    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 369a0695bf69"
	I0503 15:24:55.892815    9866 logs.go:123] Gathering logs for kube-proxy [828bdbf057fa] ...
	I0503 15:24:55.892828    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 828bdbf057fa"
	I0503 15:24:55.904053    9866 logs.go:123] Gathering logs for storage-provisioner [12a0ca49b3ed] ...
	I0503 15:24:55.904065    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 12a0ca49b3ed"
	I0503 15:24:55.917339    9866 logs.go:123] Gathering logs for kubelet ...
	I0503 15:24:55.917352    9866 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0503 15:24:55.950058    9866 logs.go:123] Gathering logs for describe nodes ...
	I0503 15:24:55.950068    9866 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0503 15:24:58.487033    9866 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0503 15:25:03.489662    9866 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0503 15:25:03.490065    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0503 15:25:03.528062    9866 logs.go:276] 1 containers: [c8657393748b]
	I0503 15:25:03.528198    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0503 15:25:03.555314    9866 logs.go:276] 1 containers: [1a0896cfe495]
	I0503 15:25:03.555405    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0503 15:25:03.569502    9866 logs.go:276] 2 containers: [04bbb9b6629a c9e00225a075]
	I0503 15:25:03.569577    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0503 15:25:03.581384    9866 logs.go:276] 1 containers: [369a0695bf69]
	I0503 15:25:03.581462    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0503 15:25:03.591925    9866 logs.go:276] 1 containers: [828bdbf057fa]
	I0503 15:25:03.591992    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0503 15:25:03.602195    9866 logs.go:276] 1 containers: [873609a8047c]
	I0503 15:25:03.602254    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0503 15:25:03.611829    9866 logs.go:276] 0 containers: []
	W0503 15:25:03.611840    9866 logs.go:278] No container was found matching "kindnet"
	I0503 15:25:03.611889    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0503 15:25:03.623187    9866 logs.go:276] 1 containers: [12a0ca49b3ed]
	I0503 15:25:03.623202    9866 logs.go:123] Gathering logs for storage-provisioner [12a0ca49b3ed] ...
	I0503 15:25:03.623206    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 12a0ca49b3ed"
	I0503 15:25:03.634768    9866 logs.go:123] Gathering logs for kube-scheduler [369a0695bf69] ...
	I0503 15:25:03.634779    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 369a0695bf69"
	I0503 15:25:03.649861    9866 logs.go:123] Gathering logs for kube-proxy [828bdbf057fa] ...
	I0503 15:25:03.649874    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 828bdbf057fa"
	I0503 15:25:03.661065    9866 logs.go:123] Gathering logs for describe nodes ...
	I0503 15:25:03.661078    9866 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0503 15:25:03.699078    9866 logs.go:123] Gathering logs for kube-apiserver [c8657393748b] ...
	I0503 15:25:03.699092    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c8657393748b"
	I0503 15:25:03.715397    9866 logs.go:123] Gathering logs for etcd [1a0896cfe495] ...
	I0503 15:25:03.715410    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1a0896cfe495"
	I0503 15:25:03.729207    9866 logs.go:123] Gathering logs for coredns [04bbb9b6629a] ...
	I0503 15:25:03.729216    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 04bbb9b6629a"
	I0503 15:25:03.740475    9866 logs.go:123] Gathering logs for coredns [c9e00225a075] ...
	I0503 15:25:03.740488    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9e00225a075"
	I0503 15:25:03.751535    9866 logs.go:123] Gathering logs for kube-controller-manager [873609a8047c] ...
	I0503 15:25:03.751548    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 873609a8047c"
	I0503 15:25:03.768707    9866 logs.go:123] Gathering logs for kubelet ...
	I0503 15:25:03.768716    9866 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0503 15:25:03.800822    9866 logs.go:123] Gathering logs for dmesg ...
	I0503 15:25:03.800830    9866 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0503 15:25:03.804786    9866 logs.go:123] Gathering logs for Docker ...
	I0503 15:25:03.804792    9866 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0503 15:25:03.829016    9866 logs.go:123] Gathering logs for container status ...
	I0503 15:25:03.829024    9866 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
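
The cycle above repeats for the rest of this run: minikube probes https://10.0.2.15:8443/healthz, the GET times out after roughly five seconds ("Client.Timeout exceeded"), and about 2.5 seconds later the next probe begins. A minimal Go sketch of that poll-with-timeout pattern follows. It is illustrative only, not minikube's api_server.go code; the endpoint, the 5s timeout, the ~2.5s retry gap, and the skipped TLS verification are assumptions read off the log lines above.

    // healthpoll.go — a sketch of the poll-with-timeout loop visible in this log.
    // NOT minikube's implementation; endpoint and timings are assumptions from the log.
    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"net/http"
    	"time"
    )

    func main() {
    	url := "https://10.0.2.15:8443/healthz" // endpoint seen in the log
    	client := &http.Client{
    		Timeout: 5 * time.Second, // matches "Client.Timeout exceeded" after ~5s
    		// The test apiserver presents a self-signed certificate, so this sketch
    		// skips verification; a real client would trust the cluster CA instead.
    		Transport: &http.Transport{
    			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
    		},
    	}
    	for attempt := 1; attempt <= 10; attempt++ {
    		resp, err := client.Get(url)
    		if err != nil {
    			fmt.Printf("attempt %d: stopped: %v\n", attempt, err)
    			time.Sleep(2500 * time.Millisecond) // ~2.5s gap between checks in the log
    			continue
    		}
    		resp.Body.Close()
    		if resp.StatusCode == http.StatusOK {
    			fmt.Println("apiserver healthy")
    			return
    		}
    		fmt.Printf("attempt %d: healthz returned %d\n", attempt, resp.StatusCode)
    		time.Sleep(2500 * time.Millisecond)
    	}
    	fmt.Println("giving up: apiserver never became healthy")
    }
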
	I0503 15:25:06.342608    9866 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0503 15:25:11.345277    9866 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0503 15:25:11.345714    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0503 15:25:11.389404    9866 logs.go:276] 1 containers: [c8657393748b]
	I0503 15:25:11.389513    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0503 15:25:11.408805    9866 logs.go:276] 1 containers: [1a0896cfe495]
	I0503 15:25:11.408888    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0503 15:25:11.423154    9866 logs.go:276] 2 containers: [04bbb9b6629a c9e00225a075]
	I0503 15:25:11.423228    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0503 15:25:11.435434    9866 logs.go:276] 1 containers: [369a0695bf69]
	I0503 15:25:11.435491    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0503 15:25:11.445672    9866 logs.go:276] 1 containers: [828bdbf057fa]
	I0503 15:25:11.445736    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0503 15:25:11.455988    9866 logs.go:276] 1 containers: [873609a8047c]
	I0503 15:25:11.456060    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0503 15:25:11.466274    9866 logs.go:276] 0 containers: []
	W0503 15:25:11.466284    9866 logs.go:278] No container was found matching "kindnet"
	I0503 15:25:11.466332    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0503 15:25:11.476343    9866 logs.go:276] 1 containers: [12a0ca49b3ed]
	I0503 15:25:11.476366    9866 logs.go:123] Gathering logs for kube-apiserver [c8657393748b] ...
	I0503 15:25:11.476371    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c8657393748b"
	I0503 15:25:11.494568    9866 logs.go:123] Gathering logs for etcd [1a0896cfe495] ...
	I0503 15:25:11.494578    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1a0896cfe495"
	I0503 15:25:11.508591    9866 logs.go:123] Gathering logs for kube-proxy [828bdbf057fa] ...
	I0503 15:25:11.508599    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 828bdbf057fa"
	I0503 15:25:11.520270    9866 logs.go:123] Gathering logs for storage-provisioner [12a0ca49b3ed] ...
	I0503 15:25:11.520280    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 12a0ca49b3ed"
	I0503 15:25:11.531555    9866 logs.go:123] Gathering logs for container status ...
	I0503 15:25:11.531566    9866 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0503 15:25:11.542546    9866 logs.go:123] Gathering logs for kubelet ...
	I0503 15:25:11.542554    9866 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0503 15:25:11.576662    9866 logs.go:123] Gathering logs for describe nodes ...
	I0503 15:25:11.576675    9866 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0503 15:25:11.611996    9866 logs.go:123] Gathering logs for coredns [c9e00225a075] ...
	I0503 15:25:11.612006    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9e00225a075"
	I0503 15:25:11.623959    9866 logs.go:123] Gathering logs for kube-scheduler [369a0695bf69] ...
	I0503 15:25:11.623969    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 369a0695bf69"
	I0503 15:25:11.639150    9866 logs.go:123] Gathering logs for kube-controller-manager [873609a8047c] ...
	I0503 15:25:11.639160    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 873609a8047c"
	I0503 15:25:11.657565    9866 logs.go:123] Gathering logs for Docker ...
	I0503 15:25:11.657574    9866 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0503 15:25:11.680604    9866 logs.go:123] Gathering logs for dmesg ...
	I0503 15:25:11.680611    9866 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0503 15:25:11.684439    9866 logs.go:123] Gathering logs for coredns [04bbb9b6629a] ...
	I0503 15:25:11.684445    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 04bbb9b6629a"
	I0503 15:25:14.198040    9866 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0503 15:25:19.200431    9866 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0503 15:25:19.200884    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0503 15:25:19.249203    9866 logs.go:276] 1 containers: [c8657393748b]
	I0503 15:25:19.249318    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0503 15:25:19.268306    9866 logs.go:276] 1 containers: [1a0896cfe495]
	I0503 15:25:19.268403    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0503 15:25:19.281994    9866 logs.go:276] 2 containers: [04bbb9b6629a c9e00225a075]
	I0503 15:25:19.282062    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0503 15:25:19.293675    9866 logs.go:276] 1 containers: [369a0695bf69]
	I0503 15:25:19.293742    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0503 15:25:19.304360    9866 logs.go:276] 1 containers: [828bdbf057fa]
	I0503 15:25:19.304431    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0503 15:25:19.314591    9866 logs.go:276] 1 containers: [873609a8047c]
	I0503 15:25:19.314658    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0503 15:25:19.324523    9866 logs.go:276] 0 containers: []
	W0503 15:25:19.324534    9866 logs.go:278] No container was found matching "kindnet"
	I0503 15:25:19.324585    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0503 15:25:19.334818    9866 logs.go:276] 1 containers: [12a0ca49b3ed]
	I0503 15:25:19.334834    9866 logs.go:123] Gathering logs for kube-controller-manager [873609a8047c] ...
	I0503 15:25:19.334838    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 873609a8047c"
	I0503 15:25:19.351734    9866 logs.go:123] Gathering logs for storage-provisioner [12a0ca49b3ed] ...
	I0503 15:25:19.351745    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 12a0ca49b3ed"
	I0503 15:25:19.363657    9866 logs.go:123] Gathering logs for container status ...
	I0503 15:25:19.363671    9866 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0503 15:25:19.375251    9866 logs.go:123] Gathering logs for coredns [c9e00225a075] ...
	I0503 15:25:19.375264    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9e00225a075"
	I0503 15:25:19.386888    9866 logs.go:123] Gathering logs for dmesg ...
	I0503 15:25:19.386899    9866 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0503 15:25:19.391527    9866 logs.go:123] Gathering logs for describe nodes ...
	I0503 15:25:19.391536    9866 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0503 15:25:19.426350    9866 logs.go:123] Gathering logs for kube-apiserver [c8657393748b] ...
	I0503 15:25:19.426361    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c8657393748b"
	I0503 15:25:19.440167    9866 logs.go:123] Gathering logs for etcd [1a0896cfe495] ...
	I0503 15:25:19.440179    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1a0896cfe495"
	I0503 15:25:19.454050    9866 logs.go:123] Gathering logs for coredns [04bbb9b6629a] ...
	I0503 15:25:19.454063    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 04bbb9b6629a"
	I0503 15:25:19.469039    9866 logs.go:123] Gathering logs for kube-scheduler [369a0695bf69] ...
	I0503 15:25:19.469049    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 369a0695bf69"
	I0503 15:25:19.483808    9866 logs.go:123] Gathering logs for kube-proxy [828bdbf057fa] ...
	I0503 15:25:19.483819    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 828bdbf057fa"
	I0503 15:25:19.495429    9866 logs.go:123] Gathering logs for kubelet ...
	I0503 15:25:19.495440    9866 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0503 15:25:19.527733    9866 logs.go:123] Gathering logs for Docker ...
	I0503 15:25:19.527740    9866 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0503 15:25:22.052615    9866 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0503 15:25:27.055131    9866 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0503 15:25:27.055493    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0503 15:25:27.101309    9866 logs.go:276] 1 containers: [c8657393748b]
	I0503 15:25:27.101436    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0503 15:25:27.121480    9866 logs.go:276] 1 containers: [1a0896cfe495]
	I0503 15:25:27.121571    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0503 15:25:27.139532    9866 logs.go:276] 2 containers: [04bbb9b6629a c9e00225a075]
	I0503 15:25:27.139595    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0503 15:25:27.150906    9866 logs.go:276] 1 containers: [369a0695bf69]
	I0503 15:25:27.150969    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0503 15:25:27.161328    9866 logs.go:276] 1 containers: [828bdbf057fa]
	I0503 15:25:27.161383    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0503 15:25:27.173085    9866 logs.go:276] 1 containers: [873609a8047c]
	I0503 15:25:27.173154    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0503 15:25:27.183472    9866 logs.go:276] 0 containers: []
	W0503 15:25:27.183482    9866 logs.go:278] No container was found matching "kindnet"
	I0503 15:25:27.183534    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0503 15:25:27.194013    9866 logs.go:276] 1 containers: [12a0ca49b3ed]
	I0503 15:25:27.194031    9866 logs.go:123] Gathering logs for etcd [1a0896cfe495] ...
	I0503 15:25:27.194038    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1a0896cfe495"
	I0503 15:25:27.207855    9866 logs.go:123] Gathering logs for coredns [04bbb9b6629a] ...
	I0503 15:25:27.207864    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 04bbb9b6629a"
	I0503 15:25:27.219927    9866 logs.go:123] Gathering logs for coredns [c9e00225a075] ...
	I0503 15:25:27.219938    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9e00225a075"
	I0503 15:25:27.231786    9866 logs.go:123] Gathering logs for kube-scheduler [369a0695bf69] ...
	I0503 15:25:27.231797    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 369a0695bf69"
	I0503 15:25:27.247165    9866 logs.go:123] Gathering logs for kube-proxy [828bdbf057fa] ...
	I0503 15:25:27.247177    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 828bdbf057fa"
	I0503 15:25:27.258813    9866 logs.go:123] Gathering logs for kubelet ...
	I0503 15:25:27.258826    9866 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0503 15:25:27.292776    9866 logs.go:123] Gathering logs for dmesg ...
	I0503 15:25:27.292782    9866 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0503 15:25:27.297902    9866 logs.go:123] Gathering logs for describe nodes ...
	I0503 15:25:27.297911    9866 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0503 15:25:27.335223    9866 logs.go:123] Gathering logs for container status ...
	I0503 15:25:27.335236    9866 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0503 15:25:27.347223    9866 logs.go:123] Gathering logs for Docker ...
	I0503 15:25:27.347234    9866 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0503 15:25:27.370820    9866 logs.go:123] Gathering logs for kube-apiserver [c8657393748b] ...
	I0503 15:25:27.370828    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c8657393748b"
	I0503 15:25:27.387248    9866 logs.go:123] Gathering logs for kube-controller-manager [873609a8047c] ...
	I0503 15:25:27.387258    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 873609a8047c"
	I0503 15:25:27.404712    9866 logs.go:123] Gathering logs for storage-provisioner [12a0ca49b3ed] ...
	I0503 15:25:27.404722    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 12a0ca49b3ed"
	I0503 15:25:29.918252    9866 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0503 15:25:34.920704    9866 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0503 15:25:34.921146    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0503 15:25:34.961189    9866 logs.go:276] 1 containers: [c8657393748b]
	I0503 15:25:34.961315    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0503 15:25:34.983548    9866 logs.go:276] 1 containers: [1a0896cfe495]
	I0503 15:25:34.983657    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0503 15:25:34.998882    9866 logs.go:276] 2 containers: [04bbb9b6629a c9e00225a075]
	I0503 15:25:34.998953    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0503 15:25:35.011524    9866 logs.go:276] 1 containers: [369a0695bf69]
	I0503 15:25:35.011582    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0503 15:25:35.022604    9866 logs.go:276] 1 containers: [828bdbf057fa]
	I0503 15:25:35.022672    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0503 15:25:35.033071    9866 logs.go:276] 1 containers: [873609a8047c]
	I0503 15:25:35.033140    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0503 15:25:35.043221    9866 logs.go:276] 0 containers: []
	W0503 15:25:35.043230    9866 logs.go:278] No container was found matching "kindnet"
	I0503 15:25:35.043278    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0503 15:25:35.053989    9866 logs.go:276] 1 containers: [12a0ca49b3ed]
	I0503 15:25:35.054005    9866 logs.go:123] Gathering logs for kube-apiserver [c8657393748b] ...
	I0503 15:25:35.054009    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c8657393748b"
	I0503 15:25:35.068381    9866 logs.go:123] Gathering logs for kube-scheduler [369a0695bf69] ...
	I0503 15:25:35.068392    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 369a0695bf69"
	I0503 15:25:35.083071    9866 logs.go:123] Gathering logs for kube-controller-manager [873609a8047c] ...
	I0503 15:25:35.083081    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 873609a8047c"
	I0503 15:25:35.100109    9866 logs.go:123] Gathering logs for container status ...
	I0503 15:25:35.100120    9866 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0503 15:25:35.111348    9866 logs.go:123] Gathering logs for kubelet ...
	I0503 15:25:35.111357    9866 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0503 15:25:35.145257    9866 logs.go:123] Gathering logs for dmesg ...
	I0503 15:25:35.145264    9866 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0503 15:25:35.149857    9866 logs.go:123] Gathering logs for describe nodes ...
	I0503 15:25:35.149864    9866 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0503 15:25:35.185210    9866 logs.go:123] Gathering logs for kube-proxy [828bdbf057fa] ...
	I0503 15:25:35.185220    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 828bdbf057fa"
	I0503 15:25:35.197524    9866 logs.go:123] Gathering logs for storage-provisioner [12a0ca49b3ed] ...
	I0503 15:25:35.197533    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 12a0ca49b3ed"
	I0503 15:25:35.208497    9866 logs.go:123] Gathering logs for Docker ...
	I0503 15:25:35.208510    9866 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0503 15:25:35.231572    9866 logs.go:123] Gathering logs for etcd [1a0896cfe495] ...
	I0503 15:25:35.231580    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1a0896cfe495"
	I0503 15:25:35.245141    9866 logs.go:123] Gathering logs for coredns [04bbb9b6629a] ...
	I0503 15:25:35.245151    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 04bbb9b6629a"
	I0503 15:25:35.256663    9866 logs.go:123] Gathering logs for coredns [c9e00225a075] ...
	I0503 15:25:35.256674    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9e00225a075"
	I0503 15:25:37.769968    9866 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0503 15:25:42.772173    9866 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0503 15:25:42.772448    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0503 15:25:42.803796    9866 logs.go:276] 1 containers: [c8657393748b]
	I0503 15:25:42.803911    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0503 15:25:42.822069    9866 logs.go:276] 1 containers: [1a0896cfe495]
	I0503 15:25:42.822151    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0503 15:25:42.835821    9866 logs.go:276] 2 containers: [04bbb9b6629a c9e00225a075]
	I0503 15:25:42.835894    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0503 15:25:42.847633    9866 logs.go:276] 1 containers: [369a0695bf69]
	I0503 15:25:42.847695    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0503 15:25:42.857746    9866 logs.go:276] 1 containers: [828bdbf057fa]
	I0503 15:25:42.857814    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0503 15:25:42.868119    9866 logs.go:276] 1 containers: [873609a8047c]
	I0503 15:25:42.868182    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0503 15:25:42.878180    9866 logs.go:276] 0 containers: []
	W0503 15:25:42.878194    9866 logs.go:278] No container was found matching "kindnet"
	I0503 15:25:42.878248    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0503 15:25:42.888849    9866 logs.go:276] 1 containers: [12a0ca49b3ed]
	I0503 15:25:42.888869    9866 logs.go:123] Gathering logs for container status ...
	I0503 15:25:42.888875    9866 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0503 15:25:42.900429    9866 logs.go:123] Gathering logs for kubelet ...
	I0503 15:25:42.900441    9866 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0503 15:25:42.934105    9866 logs.go:123] Gathering logs for etcd [1a0896cfe495] ...
	I0503 15:25:42.934113    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1a0896cfe495"
	I0503 15:25:42.947924    9866 logs.go:123] Gathering logs for kube-scheduler [369a0695bf69] ...
	I0503 15:25:42.947935    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 369a0695bf69"
	I0503 15:25:42.963203    9866 logs.go:123] Gathering logs for kube-proxy [828bdbf057fa] ...
	I0503 15:25:42.963215    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 828bdbf057fa"
	I0503 15:25:42.975012    9866 logs.go:123] Gathering logs for storage-provisioner [12a0ca49b3ed] ...
	I0503 15:25:42.975024    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 12a0ca49b3ed"
	I0503 15:25:42.986681    9866 logs.go:123] Gathering logs for Docker ...
	I0503 15:25:42.986692    9866 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0503 15:25:43.011311    9866 logs.go:123] Gathering logs for dmesg ...
	I0503 15:25:43.011319    9866 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0503 15:25:43.015474    9866 logs.go:123] Gathering logs for describe nodes ...
	I0503 15:25:43.015481    9866 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0503 15:25:43.053824    9866 logs.go:123] Gathering logs for kube-apiserver [c8657393748b] ...
	I0503 15:25:43.053837    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c8657393748b"
	I0503 15:25:43.068674    9866 logs.go:123] Gathering logs for coredns [04bbb9b6629a] ...
	I0503 15:25:43.068686    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 04bbb9b6629a"
	I0503 15:25:43.080039    9866 logs.go:123] Gathering logs for coredns [c9e00225a075] ...
	I0503 15:25:43.080051    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9e00225a075"
	I0503 15:25:43.092144    9866 logs.go:123] Gathering logs for kube-controller-manager [873609a8047c] ...
	I0503 15:25:43.092157    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 873609a8047c"
	I0503 15:25:45.611830    9866 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0503 15:25:50.613994    9866 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0503 15:25:50.614220    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0503 15:25:50.631505    9866 logs.go:276] 1 containers: [c8657393748b]
	I0503 15:25:50.631577    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0503 15:25:50.648962    9866 logs.go:276] 1 containers: [1a0896cfe495]
	I0503 15:25:50.649039    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0503 15:25:50.659585    9866 logs.go:276] 2 containers: [04bbb9b6629a c9e00225a075]
	I0503 15:25:50.659652    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0503 15:25:50.670442    9866 logs.go:276] 1 containers: [369a0695bf69]
	I0503 15:25:50.670500    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0503 15:25:50.681420    9866 logs.go:276] 1 containers: [828bdbf057fa]
	I0503 15:25:50.681483    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0503 15:25:50.695012    9866 logs.go:276] 1 containers: [873609a8047c]
	I0503 15:25:50.695076    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0503 15:25:50.707136    9866 logs.go:276] 0 containers: []
	W0503 15:25:50.707147    9866 logs.go:278] No container was found matching "kindnet"
	I0503 15:25:50.707197    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0503 15:25:50.719358    9866 logs.go:276] 1 containers: [12a0ca49b3ed]
	I0503 15:25:50.719374    9866 logs.go:123] Gathering logs for container status ...
	I0503 15:25:50.719380    9866 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0503 15:25:50.731369    9866 logs.go:123] Gathering logs for dmesg ...
	I0503 15:25:50.731382    9866 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0503 15:25:50.735846    9866 logs.go:123] Gathering logs for describe nodes ...
	I0503 15:25:50.735852    9866 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0503 15:25:50.783499    9866 logs.go:123] Gathering logs for etcd [1a0896cfe495] ...
	I0503 15:25:50.783511    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1a0896cfe495"
	I0503 15:25:50.798188    9866 logs.go:123] Gathering logs for coredns [04bbb9b6629a] ...
	I0503 15:25:50.798201    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 04bbb9b6629a"
	I0503 15:25:50.810498    9866 logs.go:123] Gathering logs for coredns [c9e00225a075] ...
	I0503 15:25:50.810507    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9e00225a075"
	I0503 15:25:50.822414    9866 logs.go:123] Gathering logs for storage-provisioner [12a0ca49b3ed] ...
	I0503 15:25:50.822422    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 12a0ca49b3ed"
	I0503 15:25:50.834116    9866 logs.go:123] Gathering logs for Docker ...
	I0503 15:25:50.834123    9866 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0503 15:25:50.859170    9866 logs.go:123] Gathering logs for kubelet ...
	I0503 15:25:50.859180    9866 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0503 15:25:50.894255    9866 logs.go:123] Gathering logs for kube-apiserver [c8657393748b] ...
	I0503 15:25:50.894264    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c8657393748b"
	I0503 15:25:50.908676    9866 logs.go:123] Gathering logs for kube-scheduler [369a0695bf69] ...
	I0503 15:25:50.908692    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 369a0695bf69"
	I0503 15:25:50.947839    9866 logs.go:123] Gathering logs for kube-proxy [828bdbf057fa] ...
	I0503 15:25:50.947847    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 828bdbf057fa"
	I0503 15:25:50.959648    9866 logs.go:123] Gathering logs for kube-controller-manager [873609a8047c] ...
	I0503 15:25:50.959660    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 873609a8047c"
	I0503 15:25:53.479244    9866 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0503 15:25:58.480276    9866 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0503 15:25:58.480759    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0503 15:25:58.522870    9866 logs.go:276] 1 containers: [c8657393748b]
	I0503 15:25:58.522975    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0503 15:25:58.541949    9866 logs.go:276] 1 containers: [1a0896cfe495]
	I0503 15:25:58.542032    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0503 15:25:58.559494    9866 logs.go:276] 2 containers: [04bbb9b6629a c9e00225a075]
	I0503 15:25:58.559570    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0503 15:25:58.571158    9866 logs.go:276] 1 containers: [369a0695bf69]
	I0503 15:25:58.571225    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0503 15:25:58.581827    9866 logs.go:276] 1 containers: [828bdbf057fa]
	I0503 15:25:58.581891    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0503 15:25:58.592398    9866 logs.go:276] 1 containers: [873609a8047c]
	I0503 15:25:58.592464    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0503 15:25:58.602395    9866 logs.go:276] 0 containers: []
	W0503 15:25:58.602406    9866 logs.go:278] No container was found matching "kindnet"
	I0503 15:25:58.602453    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0503 15:25:58.612191    9866 logs.go:276] 1 containers: [12a0ca49b3ed]
	I0503 15:25:58.612207    9866 logs.go:123] Gathering logs for dmesg ...
	I0503 15:25:58.612212    9866 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0503 15:25:58.616431    9866 logs.go:123] Gathering logs for kube-apiserver [c8657393748b] ...
	I0503 15:25:58.616436    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c8657393748b"
	I0503 15:25:58.630797    9866 logs.go:123] Gathering logs for coredns [c9e00225a075] ...
	I0503 15:25:58.630809    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9e00225a075"
	I0503 15:25:58.643470    9866 logs.go:123] Gathering logs for kube-scheduler [369a0695bf69] ...
	I0503 15:25:58.643481    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 369a0695bf69"
	I0503 15:25:58.658809    9866 logs.go:123] Gathering logs for kube-proxy [828bdbf057fa] ...
	I0503 15:25:58.658820    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 828bdbf057fa"
	I0503 15:25:58.671301    9866 logs.go:123] Gathering logs for container status ...
	I0503 15:25:58.671310    9866 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0503 15:25:58.682364    9866 logs.go:123] Gathering logs for kubelet ...
	I0503 15:25:58.682375    9866 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0503 15:25:58.716321    9866 logs.go:123] Gathering logs for describe nodes ...
	I0503 15:25:58.716332    9866 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0503 15:25:58.750772    9866 logs.go:123] Gathering logs for etcd [1a0896cfe495] ...
	I0503 15:25:58.750785    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1a0896cfe495"
	I0503 15:25:58.772270    9866 logs.go:123] Gathering logs for coredns [04bbb9b6629a] ...
	I0503 15:25:58.772281    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 04bbb9b6629a"
	I0503 15:25:58.783649    9866 logs.go:123] Gathering logs for kube-controller-manager [873609a8047c] ...
	I0503 15:25:58.783661    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 873609a8047c"
	I0503 15:25:58.807235    9866 logs.go:123] Gathering logs for storage-provisioner [12a0ca49b3ed] ...
	I0503 15:25:58.807244    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 12a0ca49b3ed"
	I0503 15:25:58.823453    9866 logs.go:123] Gathering logs for Docker ...
	I0503 15:25:58.823466    9866 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0503 15:26:01.348666    9866 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0503 15:26:06.349459    9866 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0503 15:26:06.349527    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0503 15:26:06.361572    9866 logs.go:276] 1 containers: [c8657393748b]
	I0503 15:26:06.361633    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0503 15:26:06.374277    9866 logs.go:276] 1 containers: [1a0896cfe495]
	I0503 15:26:06.374337    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0503 15:26:06.385331    9866 logs.go:276] 4 containers: [c6e82b3c3afb c7b8230afccc 04bbb9b6629a c9e00225a075]
	I0503 15:26:06.385385    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0503 15:26:06.396241    9866 logs.go:276] 1 containers: [369a0695bf69]
	I0503 15:26:06.396316    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0503 15:26:06.407869    9866 logs.go:276] 1 containers: [828bdbf057fa]
	I0503 15:26:06.407924    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0503 15:26:06.420088    9866 logs.go:276] 1 containers: [873609a8047c]
	I0503 15:26:06.420144    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0503 15:26:06.430499    9866 logs.go:276] 0 containers: []
	W0503 15:26:06.430509    9866 logs.go:278] No container was found matching "kindnet"
	I0503 15:26:06.430554    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0503 15:26:06.441607    9866 logs.go:276] 1 containers: [12a0ca49b3ed]
	I0503 15:26:06.441626    9866 logs.go:123] Gathering logs for kube-apiserver [c8657393748b] ...
	I0503 15:26:06.441631    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c8657393748b"
	I0503 15:26:06.456525    9866 logs.go:123] Gathering logs for coredns [c6e82b3c3afb] ...
	I0503 15:26:06.456536    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c6e82b3c3afb"
	I0503 15:26:06.468516    9866 logs.go:123] Gathering logs for kube-controller-manager [873609a8047c] ...
	I0503 15:26:06.468528    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 873609a8047c"
	I0503 15:26:06.487389    9866 logs.go:123] Gathering logs for storage-provisioner [12a0ca49b3ed] ...
	I0503 15:26:06.487401    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 12a0ca49b3ed"
	I0503 15:26:06.500470    9866 logs.go:123] Gathering logs for etcd [1a0896cfe495] ...
	I0503 15:26:06.500478    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1a0896cfe495"
	I0503 15:26:06.515630    9866 logs.go:123] Gathering logs for coredns [c7b8230afccc] ...
	I0503 15:26:06.515643    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c7b8230afccc"
	I0503 15:26:06.531686    9866 logs.go:123] Gathering logs for kube-scheduler [369a0695bf69] ...
	I0503 15:26:06.531697    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 369a0695bf69"
	I0503 15:26:06.547908    9866 logs.go:123] Gathering logs for kube-proxy [828bdbf057fa] ...
	I0503 15:26:06.547917    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 828bdbf057fa"
	I0503 15:26:06.560034    9866 logs.go:123] Gathering logs for Docker ...
	I0503 15:26:06.560043    9866 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0503 15:26:06.585488    9866 logs.go:123] Gathering logs for container status ...
	I0503 15:26:06.585505    9866 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0503 15:26:06.598276    9866 logs.go:123] Gathering logs for kubelet ...
	I0503 15:26:06.598306    9866 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0503 15:26:06.634080    9866 logs.go:123] Gathering logs for dmesg ...
	I0503 15:26:06.634096    9866 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0503 15:26:06.638993    9866 logs.go:123] Gathering logs for describe nodes ...
	I0503 15:26:06.639004    9866 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0503 15:26:06.678110    9866 logs.go:123] Gathering logs for coredns [04bbb9b6629a] ...
	I0503 15:26:06.678122    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 04bbb9b6629a"
	I0503 15:26:06.691440    9866 logs.go:123] Gathering logs for coredns [c9e00225a075] ...
	I0503 15:26:06.691451    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9e00225a075"
	I0503 15:26:09.207471    9866 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0503 15:26:14.210178    9866 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0503 15:26:14.210606    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0503 15:26:14.250078    9866 logs.go:276] 1 containers: [c8657393748b]
	I0503 15:26:14.250208    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0503 15:26:14.274725    9866 logs.go:276] 1 containers: [1a0896cfe495]
	I0503 15:26:14.274836    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0503 15:26:14.294191    9866 logs.go:276] 4 containers: [c6e82b3c3afb c7b8230afccc 04bbb9b6629a c9e00225a075]
	I0503 15:26:14.294267    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0503 15:26:14.305643    9866 logs.go:276] 1 containers: [369a0695bf69]
	I0503 15:26:14.305715    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0503 15:26:14.316463    9866 logs.go:276] 1 containers: [828bdbf057fa]
	I0503 15:26:14.316524    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0503 15:26:14.327278    9866 logs.go:276] 1 containers: [873609a8047c]
	I0503 15:26:14.327334    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0503 15:26:14.341473    9866 logs.go:276] 0 containers: []
	W0503 15:26:14.341484    9866 logs.go:278] No container was found matching "kindnet"
	I0503 15:26:14.341541    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0503 15:26:14.351957    9866 logs.go:276] 1 containers: [12a0ca49b3ed]
	I0503 15:26:14.351977    9866 logs.go:123] Gathering logs for kube-apiserver [c8657393748b] ...
	I0503 15:26:14.351983    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c8657393748b"
	I0503 15:26:14.369078    9866 logs.go:123] Gathering logs for etcd [1a0896cfe495] ...
	I0503 15:26:14.369090    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1a0896cfe495"
	I0503 15:26:14.382720    9866 logs.go:123] Gathering logs for coredns [c6e82b3c3afb] ...
	I0503 15:26:14.382732    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c6e82b3c3afb"
	I0503 15:26:14.394758    9866 logs.go:123] Gathering logs for coredns [c7b8230afccc] ...
	I0503 15:26:14.394767    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c7b8230afccc"
	I0503 15:26:14.406438    9866 logs.go:123] Gathering logs for kube-proxy [828bdbf057fa] ...
	I0503 15:26:14.406453    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 828bdbf057fa"
	I0503 15:26:14.421984    9866 logs.go:123] Gathering logs for container status ...
	I0503 15:26:14.422000    9866 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0503 15:26:14.434101    9866 logs.go:123] Gathering logs for dmesg ...
	I0503 15:26:14.434114    9866 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0503 15:26:14.438838    9866 logs.go:123] Gathering logs for coredns [c9e00225a075] ...
	I0503 15:26:14.438846    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9e00225a075"
	I0503 15:26:14.450391    9866 logs.go:123] Gathering logs for storage-provisioner [12a0ca49b3ed] ...
	I0503 15:26:14.450403    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 12a0ca49b3ed"
	I0503 15:26:14.461994    9866 logs.go:123] Gathering logs for Docker ...
	I0503 15:26:14.462007    9866 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0503 15:26:14.486041    9866 logs.go:123] Gathering logs for kubelet ...
	I0503 15:26:14.486049    9866 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0503 15:26:14.519018    9866 logs.go:123] Gathering logs for coredns [04bbb9b6629a] ...
	I0503 15:26:14.519027    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 04bbb9b6629a"
	I0503 15:26:14.534848    9866 logs.go:123] Gathering logs for kube-scheduler [369a0695bf69] ...
	I0503 15:26:14.534858    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 369a0695bf69"
	I0503 15:26:14.554238    9866 logs.go:123] Gathering logs for kube-controller-manager [873609a8047c] ...
	I0503 15:26:14.554248    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 873609a8047c"
	I0503 15:26:14.572113    9866 logs.go:123] Gathering logs for describe nodes ...
	I0503 15:26:14.572123    9866 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0503 15:26:17.107410    9866 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0503 15:26:22.110130    9866 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0503 15:26:22.110567    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0503 15:26:22.151159    9866 logs.go:276] 1 containers: [c8657393748b]
	I0503 15:26:22.151285    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0503 15:26:22.173068    9866 logs.go:276] 1 containers: [1a0896cfe495]
	I0503 15:26:22.173172    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0503 15:26:22.187832    9866 logs.go:276] 4 containers: [c6e82b3c3afb c7b8230afccc 04bbb9b6629a c9e00225a075]
	I0503 15:26:22.187907    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0503 15:26:22.199884    9866 logs.go:276] 1 containers: [369a0695bf69]
	I0503 15:26:22.199952    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0503 15:26:22.211053    9866 logs.go:276] 1 containers: [828bdbf057fa]
	I0503 15:26:22.211122    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0503 15:26:22.221672    9866 logs.go:276] 1 containers: [873609a8047c]
	I0503 15:26:22.221738    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0503 15:26:22.232121    9866 logs.go:276] 0 containers: []
	W0503 15:26:22.232132    9866 logs.go:278] No container was found matching "kindnet"
	I0503 15:26:22.232185    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0503 15:26:22.241925    9866 logs.go:276] 1 containers: [12a0ca49b3ed]
	I0503 15:26:22.241947    9866 logs.go:123] Gathering logs for etcd [1a0896cfe495] ...
	I0503 15:26:22.241953    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1a0896cfe495"
	I0503 15:26:22.255456    9866 logs.go:123] Gathering logs for kube-proxy [828bdbf057fa] ...
	I0503 15:26:22.255468    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 828bdbf057fa"
	I0503 15:26:22.267171    9866 logs.go:123] Gathering logs for Docker ...
	I0503 15:26:22.267184    9866 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0503 15:26:22.291844    9866 logs.go:123] Gathering logs for coredns [c6e82b3c3afb] ...
	I0503 15:26:22.291849    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c6e82b3c3afb"
	I0503 15:26:22.303851    9866 logs.go:123] Gathering logs for kube-scheduler [369a0695bf69] ...
	I0503 15:26:22.303862    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 369a0695bf69"
	I0503 15:26:22.319227    9866 logs.go:123] Gathering logs for kubelet ...
	I0503 15:26:22.319240    9866 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0503 15:26:22.353564    9866 logs.go:123] Gathering logs for dmesg ...
	I0503 15:26:22.353571    9866 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0503 15:26:22.357623    9866 logs.go:123] Gathering logs for describe nodes ...
	I0503 15:26:22.357631    9866 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0503 15:26:22.391504    9866 logs.go:123] Gathering logs for kube-apiserver [c8657393748b] ...
	I0503 15:26:22.391512    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c8657393748b"
	I0503 15:26:22.406202    9866 logs.go:123] Gathering logs for coredns [c7b8230afccc] ...
	I0503 15:26:22.406210    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c7b8230afccc"
	I0503 15:26:22.418061    9866 logs.go:123] Gathering logs for kube-controller-manager [873609a8047c] ...
	I0503 15:26:22.418069    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 873609a8047c"
	I0503 15:26:22.435714    9866 logs.go:123] Gathering logs for container status ...
	I0503 15:26:22.435725    9866 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0503 15:26:22.447034    9866 logs.go:123] Gathering logs for coredns [04bbb9b6629a] ...
	I0503 15:26:22.447045    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 04bbb9b6629a"
	I0503 15:26:22.459045    9866 logs.go:123] Gathering logs for coredns [c9e00225a075] ...
	I0503 15:26:22.459055    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9e00225a075"
	I0503 15:26:22.471086    9866 logs.go:123] Gathering logs for storage-provisioner [12a0ca49b3ed] ...
	I0503 15:26:22.471095    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 12a0ca49b3ed"
	I0503 15:26:24.981412    9866 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0503 15:26:29.977942    9866 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0503 15:26:29.978327    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0503 15:26:30.010195    9866 logs.go:276] 1 containers: [c8657393748b]
	I0503 15:26:30.010322    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0503 15:26:30.032502    9866 logs.go:276] 1 containers: [1a0896cfe495]
	I0503 15:26:30.032579    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0503 15:26:30.046210    9866 logs.go:276] 4 containers: [c6e82b3c3afb c7b8230afccc 04bbb9b6629a c9e00225a075]
	I0503 15:26:30.046279    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0503 15:26:30.057452    9866 logs.go:276] 1 containers: [369a0695bf69]
	I0503 15:26:30.057518    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0503 15:26:30.068038    9866 logs.go:276] 1 containers: [828bdbf057fa]
	I0503 15:26:30.068109    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0503 15:26:30.078429    9866 logs.go:276] 1 containers: [873609a8047c]
	I0503 15:26:30.078494    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0503 15:26:30.089205    9866 logs.go:276] 0 containers: []
	W0503 15:26:30.089217    9866 logs.go:278] No container was found matching "kindnet"
	I0503 15:26:30.089270    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0503 15:26:30.099568    9866 logs.go:276] 1 containers: [12a0ca49b3ed]
	I0503 15:26:30.099585    9866 logs.go:123] Gathering logs for describe nodes ...
	I0503 15:26:30.099591    9866 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0503 15:26:30.134299    9866 logs.go:123] Gathering logs for etcd [1a0896cfe495] ...
	I0503 15:26:30.134312    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1a0896cfe495"
	I0503 15:26:30.148422    9866 logs.go:123] Gathering logs for coredns [c6e82b3c3afb] ...
	I0503 15:26:30.148434    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c6e82b3c3afb"
	I0503 15:26:30.159477    9866 logs.go:123] Gathering logs for coredns [c9e00225a075] ...
	I0503 15:26:30.159488    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9e00225a075"
	I0503 15:26:30.170902    9866 logs.go:123] Gathering logs for kube-scheduler [369a0695bf69] ...
	I0503 15:26:30.170915    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 369a0695bf69"
	I0503 15:26:30.186229    9866 logs.go:123] Gathering logs for kube-controller-manager [873609a8047c] ...
	I0503 15:26:30.186242    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 873609a8047c"
	I0503 15:26:30.205397    9866 logs.go:123] Gathering logs for coredns [c7b8230afccc] ...
	I0503 15:26:30.205408    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c7b8230afccc"
	I0503 15:26:30.222978    9866 logs.go:123] Gathering logs for kube-proxy [828bdbf057fa] ...
	I0503 15:26:30.222990    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 828bdbf057fa"
	I0503 15:26:30.236194    9866 logs.go:123] Gathering logs for kubelet ...
	I0503 15:26:30.236205    9866 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0503 15:26:30.269822    9866 logs.go:123] Gathering logs for storage-provisioner [12a0ca49b3ed] ...
	I0503 15:26:30.269829    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 12a0ca49b3ed"
	I0503 15:26:30.283839    9866 logs.go:123] Gathering logs for dmesg ...
	I0503 15:26:30.283853    9866 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0503 15:26:30.288139    9866 logs.go:123] Gathering logs for kube-apiserver [c8657393748b] ...
	I0503 15:26:30.288146    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c8657393748b"
	I0503 15:26:30.302371    9866 logs.go:123] Gathering logs for coredns [04bbb9b6629a] ...
	I0503 15:26:30.302382    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 04bbb9b6629a"
	I0503 15:26:30.313823    9866 logs.go:123] Gathering logs for Docker ...
	I0503 15:26:30.313834    9866 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0503 15:26:30.337484    9866 logs.go:123] Gathering logs for container status ...
	I0503 15:26:30.337492    9866 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0503 15:26:32.849229    9866 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0503 15:26:37.847776    9866 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0503 15:26:37.848000    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0503 15:26:37.867371    9866 logs.go:276] 1 containers: [c8657393748b]
	I0503 15:26:37.867463    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0503 15:26:37.881755    9866 logs.go:276] 1 containers: [1a0896cfe495]
	I0503 15:26:37.881826    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0503 15:26:37.893952    9866 logs.go:276] 4 containers: [c6e82b3c3afb c7b8230afccc 04bbb9b6629a c9e00225a075]
	I0503 15:26:37.894023    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0503 15:26:37.905027    9866 logs.go:276] 1 containers: [369a0695bf69]
	I0503 15:26:37.905095    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0503 15:26:37.915077    9866 logs.go:276] 1 containers: [828bdbf057fa]
	I0503 15:26:37.915143    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0503 15:26:37.931297    9866 logs.go:276] 1 containers: [873609a8047c]
	I0503 15:26:37.931364    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0503 15:26:37.943741    9866 logs.go:276] 0 containers: []
	W0503 15:26:37.943754    9866 logs.go:278] No container was found matching "kindnet"
	I0503 15:26:37.943820    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0503 15:26:37.954711    9866 logs.go:276] 1 containers: [12a0ca49b3ed]
	I0503 15:26:37.954728    9866 logs.go:123] Gathering logs for describe nodes ...
	I0503 15:26:37.954734    9866 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0503 15:26:37.988467    9866 logs.go:123] Gathering logs for coredns [c7b8230afccc] ...
	I0503 15:26:37.988480    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c7b8230afccc"
	I0503 15:26:38.000328    9866 logs.go:123] Gathering logs for coredns [04bbb9b6629a] ...
	I0503 15:26:38.000339    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 04bbb9b6629a"
	I0503 15:26:38.012113    9866 logs.go:123] Gathering logs for kube-proxy [828bdbf057fa] ...
	I0503 15:26:38.012125    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 828bdbf057fa"
	I0503 15:26:38.030073    9866 logs.go:123] Gathering logs for Docker ...
	I0503 15:26:38.030084    9866 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0503 15:26:38.055017    9866 logs.go:123] Gathering logs for kubelet ...
	I0503 15:26:38.055025    9866 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0503 15:26:38.087676    9866 logs.go:123] Gathering logs for coredns [c6e82b3c3afb] ...
	I0503 15:26:38.087684    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c6e82b3c3afb"
	I0503 15:26:38.099468    9866 logs.go:123] Gathering logs for storage-provisioner [12a0ca49b3ed] ...
	I0503 15:26:38.099479    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 12a0ca49b3ed"
	I0503 15:26:38.111177    9866 logs.go:123] Gathering logs for dmesg ...
	I0503 15:26:38.111193    9866 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0503 15:26:38.115594    9866 logs.go:123] Gathering logs for kube-scheduler [369a0695bf69] ...
	I0503 15:26:38.115603    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 369a0695bf69"
	I0503 15:26:38.130873    9866 logs.go:123] Gathering logs for kube-controller-manager [873609a8047c] ...
	I0503 15:26:38.130886    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 873609a8047c"
	I0503 15:26:38.147882    9866 logs.go:123] Gathering logs for kube-apiserver [c8657393748b] ...
	I0503 15:26:38.147893    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c8657393748b"
	I0503 15:26:38.161785    9866 logs.go:123] Gathering logs for etcd [1a0896cfe495] ...
	I0503 15:26:38.161799    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1a0896cfe495"
	I0503 15:26:38.183002    9866 logs.go:123] Gathering logs for coredns [c9e00225a075] ...
	I0503 15:26:38.183014    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9e00225a075"
	I0503 15:26:38.194820    9866 logs.go:123] Gathering logs for container status ...
	I0503 15:26:38.194831    9866 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0503 15:26:40.706993    9866 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0503 15:26:45.706988    9866 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0503 15:26:45.707193    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0503 15:26:45.727528    9866 logs.go:276] 1 containers: [c8657393748b]
	I0503 15:26:45.727611    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0503 15:26:45.741676    9866 logs.go:276] 1 containers: [1a0896cfe495]
	I0503 15:26:45.741744    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0503 15:26:45.757406    9866 logs.go:276] 4 containers: [c6e82b3c3afb c7b8230afccc 04bbb9b6629a c9e00225a075]
	I0503 15:26:45.757471    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0503 15:26:45.767929    9866 logs.go:276] 1 containers: [369a0695bf69]
	I0503 15:26:45.767997    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0503 15:26:45.778044    9866 logs.go:276] 1 containers: [828bdbf057fa]
	I0503 15:26:45.778108    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0503 15:26:45.788560    9866 logs.go:276] 1 containers: [873609a8047c]
	I0503 15:26:45.788623    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0503 15:26:45.799081    9866 logs.go:276] 0 containers: []
	W0503 15:26:45.799093    9866 logs.go:278] No container was found matching "kindnet"
	I0503 15:26:45.799143    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0503 15:26:45.809793    9866 logs.go:276] 1 containers: [12a0ca49b3ed]
	I0503 15:26:45.809811    9866 logs.go:123] Gathering logs for kubelet ...
	I0503 15:26:45.809816    9866 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0503 15:26:45.843255    9866 logs.go:123] Gathering logs for kube-proxy [828bdbf057fa] ...
	I0503 15:26:45.843264    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 828bdbf057fa"
	I0503 15:26:45.865458    9866 logs.go:123] Gathering logs for storage-provisioner [12a0ca49b3ed] ...
	I0503 15:26:45.865470    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 12a0ca49b3ed"
	I0503 15:26:45.883841    9866 logs.go:123] Gathering logs for container status ...
	I0503 15:26:45.883854    9866 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0503 15:26:45.895811    9866 logs.go:123] Gathering logs for kube-apiserver [c8657393748b] ...
	I0503 15:26:45.895820    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c8657393748b"
	I0503 15:26:45.910317    9866 logs.go:123] Gathering logs for coredns [c6e82b3c3afb] ...
	I0503 15:26:45.910328    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c6e82b3c3afb"
	I0503 15:26:45.922263    9866 logs.go:123] Gathering logs for kube-scheduler [369a0695bf69] ...
	I0503 15:26:45.922275    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 369a0695bf69"
	I0503 15:26:45.939130    9866 logs.go:123] Gathering logs for dmesg ...
	I0503 15:26:45.939141    9866 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0503 15:26:45.943716    9866 logs.go:123] Gathering logs for Docker ...
	I0503 15:26:45.943725    9866 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0503 15:26:45.967470    9866 logs.go:123] Gathering logs for describe nodes ...
	I0503 15:26:45.967481    9866 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0503 15:26:46.012448    9866 logs.go:123] Gathering logs for etcd [1a0896cfe495] ...
	I0503 15:26:46.012459    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1a0896cfe495"
	I0503 15:26:46.026732    9866 logs.go:123] Gathering logs for coredns [c7b8230afccc] ...
	I0503 15:26:46.026744    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c7b8230afccc"
	I0503 15:26:46.038226    9866 logs.go:123] Gathering logs for coredns [04bbb9b6629a] ...
	I0503 15:26:46.038237    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 04bbb9b6629a"
	I0503 15:26:46.049426    9866 logs.go:123] Gathering logs for coredns [c9e00225a075] ...
	I0503 15:26:46.049436    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9e00225a075"
	I0503 15:26:46.061124    9866 logs.go:123] Gathering logs for kube-controller-manager [873609a8047c] ...
	I0503 15:26:46.061135    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 873609a8047c"
	I0503 15:26:48.580973    9866 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0503 15:26:53.582519    9866 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0503 15:26:53.582906    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0503 15:26:53.621582    9866 logs.go:276] 1 containers: [c8657393748b]
	I0503 15:26:53.621696    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0503 15:26:53.641170    9866 logs.go:276] 1 containers: [1a0896cfe495]
	I0503 15:26:53.641257    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0503 15:26:53.655656    9866 logs.go:276] 4 containers: [c6e82b3c3afb c7b8230afccc 04bbb9b6629a c9e00225a075]
	I0503 15:26:53.655732    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0503 15:26:53.671145    9866 logs.go:276] 1 containers: [369a0695bf69]
	I0503 15:26:53.671209    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0503 15:26:53.681582    9866 logs.go:276] 1 containers: [828bdbf057fa]
	I0503 15:26:53.681637    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0503 15:26:53.694992    9866 logs.go:276] 1 containers: [873609a8047c]
	I0503 15:26:53.695053    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0503 15:26:53.705588    9866 logs.go:276] 0 containers: []
	W0503 15:26:53.705600    9866 logs.go:278] No container was found matching "kindnet"
	I0503 15:26:53.705652    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0503 15:26:53.716811    9866 logs.go:276] 1 containers: [12a0ca49b3ed]
	I0503 15:26:53.716826    9866 logs.go:123] Gathering logs for etcd [1a0896cfe495] ...
	I0503 15:26:53.716832    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1a0896cfe495"
	I0503 15:26:53.730682    9866 logs.go:123] Gathering logs for coredns [c7b8230afccc] ...
	I0503 15:26:53.730691    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c7b8230afccc"
	I0503 15:26:53.742670    9866 logs.go:123] Gathering logs for kube-proxy [828bdbf057fa] ...
	I0503 15:26:53.742678    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 828bdbf057fa"
	I0503 15:26:53.754067    9866 logs.go:123] Gathering logs for kube-controller-manager [873609a8047c] ...
	I0503 15:26:53.754077    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 873609a8047c"
	I0503 15:26:53.770990    9866 logs.go:123] Gathering logs for container status ...
	I0503 15:26:53.771010    9866 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0503 15:26:53.782643    9866 logs.go:123] Gathering logs for kubelet ...
	I0503 15:26:53.782651    9866 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0503 15:26:53.816748    9866 logs.go:123] Gathering logs for dmesg ...
	I0503 15:26:53.816755    9866 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0503 15:26:53.821285    9866 logs.go:123] Gathering logs for describe nodes ...
	I0503 15:26:53.821291    9866 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0503 15:26:53.859699    9866 logs.go:123] Gathering logs for kube-apiserver [c8657393748b] ...
	I0503 15:26:53.859709    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c8657393748b"
	I0503 15:26:53.873804    9866 logs.go:123] Gathering logs for coredns [04bbb9b6629a] ...
	I0503 15:26:53.873814    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 04bbb9b6629a"
	I0503 15:26:53.886177    9866 logs.go:123] Gathering logs for kube-scheduler [369a0695bf69] ...
	I0503 15:26:53.886186    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 369a0695bf69"
	I0503 15:26:53.901106    9866 logs.go:123] Gathering logs for coredns [c6e82b3c3afb] ...
	I0503 15:26:53.901115    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c6e82b3c3afb"
	I0503 15:26:53.912707    9866 logs.go:123] Gathering logs for coredns [c9e00225a075] ...
	I0503 15:26:53.912715    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9e00225a075"
	I0503 15:26:53.923984    9866 logs.go:123] Gathering logs for storage-provisioner [12a0ca49b3ed] ...
	I0503 15:26:53.923995    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 12a0ca49b3ed"
	I0503 15:26:53.936045    9866 logs.go:123] Gathering logs for Docker ...
	I0503 15:26:53.936054    9866 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0503 15:26:56.461016    9866 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0503 15:27:01.462289    9866 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0503 15:27:01.462469    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0503 15:27:01.478122    9866 logs.go:276] 1 containers: [c8657393748b]
	I0503 15:27:01.478198    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0503 15:27:01.490653    9866 logs.go:276] 1 containers: [1a0896cfe495]
	I0503 15:27:01.490718    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0503 15:27:01.501783    9866 logs.go:276] 4 containers: [c6e82b3c3afb c7b8230afccc 04bbb9b6629a c9e00225a075]
	I0503 15:27:01.501852    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0503 15:27:01.514387    9866 logs.go:276] 1 containers: [369a0695bf69]
	I0503 15:27:01.514448    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0503 15:27:01.526286    9866 logs.go:276] 1 containers: [828bdbf057fa]
	I0503 15:27:01.526351    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0503 15:27:01.536468    9866 logs.go:276] 1 containers: [873609a8047c]
	I0503 15:27:01.536531    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0503 15:27:01.546857    9866 logs.go:276] 0 containers: []
	W0503 15:27:01.546869    9866 logs.go:278] No container was found matching "kindnet"
	I0503 15:27:01.546918    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0503 15:27:01.556881    9866 logs.go:276] 1 containers: [12a0ca49b3ed]
	I0503 15:27:01.556898    9866 logs.go:123] Gathering logs for etcd [1a0896cfe495] ...
	I0503 15:27:01.556903    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1a0896cfe495"
	I0503 15:27:01.570370    9866 logs.go:123] Gathering logs for coredns [c7b8230afccc] ...
	I0503 15:27:01.570383    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c7b8230afccc"
	I0503 15:27:01.581861    9866 logs.go:123] Gathering logs for kube-scheduler [369a0695bf69] ...
	I0503 15:27:01.581874    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 369a0695bf69"
	I0503 15:27:01.597024    9866 logs.go:123] Gathering logs for container status ...
	I0503 15:27:01.597033    9866 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0503 15:27:01.608394    9866 logs.go:123] Gathering logs for kube-proxy [828bdbf057fa] ...
	I0503 15:27:01.608407    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 828bdbf057fa"
	I0503 15:27:01.620433    9866 logs.go:123] Gathering logs for kubelet ...
	I0503 15:27:01.620444    9866 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0503 15:27:01.654067    9866 logs.go:123] Gathering logs for dmesg ...
	I0503 15:27:01.654075    9866 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0503 15:27:01.658271    9866 logs.go:123] Gathering logs for kube-apiserver [c8657393748b] ...
	I0503 15:27:01.658280    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c8657393748b"
	I0503 15:27:01.672941    9866 logs.go:123] Gathering logs for coredns [c9e00225a075] ...
	I0503 15:27:01.672954    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9e00225a075"
	I0503 15:27:01.689893    9866 logs.go:123] Gathering logs for storage-provisioner [12a0ca49b3ed] ...
	I0503 15:27:01.689906    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 12a0ca49b3ed"
	I0503 15:27:01.706718    9866 logs.go:123] Gathering logs for Docker ...
	I0503 15:27:01.706729    9866 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0503 15:27:01.731546    9866 logs.go:123] Gathering logs for describe nodes ...
	I0503 15:27:01.731557    9866 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0503 15:27:01.767839    9866 logs.go:123] Gathering logs for coredns [c6e82b3c3afb] ...
	I0503 15:27:01.767853    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c6e82b3c3afb"
	I0503 15:27:01.779789    9866 logs.go:123] Gathering logs for coredns [04bbb9b6629a] ...
	I0503 15:27:01.779801    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 04bbb9b6629a"
	I0503 15:27:01.791679    9866 logs.go:123] Gathering logs for kube-controller-manager [873609a8047c] ...
	I0503 15:27:01.791692    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 873609a8047c"
	I0503 15:27:04.311129    9866 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0503 15:27:09.313217    9866 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0503 15:27:09.313319    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0503 15:27:09.324438    9866 logs.go:276] 1 containers: [c8657393748b]
	I0503 15:27:09.324498    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0503 15:27:09.335056    9866 logs.go:276] 1 containers: [1a0896cfe495]
	I0503 15:27:09.335107    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0503 15:27:09.346696    9866 logs.go:276] 4 containers: [c6e82b3c3afb c7b8230afccc 04bbb9b6629a c9e00225a075]
	I0503 15:27:09.346769    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0503 15:27:09.363941    9866 logs.go:276] 1 containers: [369a0695bf69]
	I0503 15:27:09.364013    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0503 15:27:09.376133    9866 logs.go:276] 1 containers: [828bdbf057fa]
	I0503 15:27:09.376204    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0503 15:27:09.388139    9866 logs.go:276] 1 containers: [873609a8047c]
	I0503 15:27:09.388209    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0503 15:27:09.399915    9866 logs.go:276] 0 containers: []
	W0503 15:27:09.399930    9866 logs.go:278] No container was found matching "kindnet"
	I0503 15:27:09.399989    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0503 15:27:09.412083    9866 logs.go:276] 1 containers: [12a0ca49b3ed]
	I0503 15:27:09.412101    9866 logs.go:123] Gathering logs for kube-apiserver [c8657393748b] ...
	I0503 15:27:09.412106    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c8657393748b"
	I0503 15:27:09.428054    9866 logs.go:123] Gathering logs for coredns [c6e82b3c3afb] ...
	I0503 15:27:09.428067    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c6e82b3c3afb"
	I0503 15:27:09.441346    9866 logs.go:123] Gathering logs for kube-proxy [828bdbf057fa] ...
	I0503 15:27:09.441361    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 828bdbf057fa"
	I0503 15:27:09.455370    9866 logs.go:123] Gathering logs for Docker ...
	I0503 15:27:09.455383    9866 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0503 15:27:09.481796    9866 logs.go:123] Gathering logs for coredns [04bbb9b6629a] ...
	I0503 15:27:09.481817    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 04bbb9b6629a"
	I0503 15:27:09.495148    9866 logs.go:123] Gathering logs for dmesg ...
	I0503 15:27:09.495159    9866 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0503 15:27:09.499662    9866 logs.go:123] Gathering logs for coredns [c9e00225a075] ...
	I0503 15:27:09.499673    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9e00225a075"
	I0503 15:27:09.511988    9866 logs.go:123] Gathering logs for kube-scheduler [369a0695bf69] ...
	I0503 15:27:09.511998    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 369a0695bf69"
	I0503 15:27:09.528637    9866 logs.go:123] Gathering logs for container status ...
	I0503 15:27:09.528648    9866 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0503 15:27:09.540326    9866 logs.go:123] Gathering logs for kubelet ...
	I0503 15:27:09.540336    9866 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0503 15:27:09.574843    9866 logs.go:123] Gathering logs for describe nodes ...
	I0503 15:27:09.574851    9866 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0503 15:27:09.612500    9866 logs.go:123] Gathering logs for etcd [1a0896cfe495] ...
	I0503 15:27:09.612512    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1a0896cfe495"
	I0503 15:27:09.627338    9866 logs.go:123] Gathering logs for coredns [c7b8230afccc] ...
	I0503 15:27:09.627345    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c7b8230afccc"
	I0503 15:27:09.639084    9866 logs.go:123] Gathering logs for kube-controller-manager [873609a8047c] ...
	I0503 15:27:09.639098    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 873609a8047c"
	I0503 15:27:09.656513    9866 logs.go:123] Gathering logs for storage-provisioner [12a0ca49b3ed] ...
	I0503 15:27:09.656523    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 12a0ca49b3ed"
	I0503 15:27:12.169710    9866 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0503 15:27:17.171727    9866 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0503 15:27:17.171856    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0503 15:27:17.188836    9866 logs.go:276] 1 containers: [c8657393748b]
	I0503 15:27:17.188924    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0503 15:27:17.202219    9866 logs.go:276] 1 containers: [1a0896cfe495]
	I0503 15:27:17.202285    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0503 15:27:17.213890    9866 logs.go:276] 4 containers: [c6e82b3c3afb c7b8230afccc 04bbb9b6629a c9e00225a075]
	I0503 15:27:17.213957    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0503 15:27:17.224247    9866 logs.go:276] 1 containers: [369a0695bf69]
	I0503 15:27:17.224317    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0503 15:27:17.234398    9866 logs.go:276] 1 containers: [828bdbf057fa]
	I0503 15:27:17.234460    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0503 15:27:17.244753    9866 logs.go:276] 1 containers: [873609a8047c]
	I0503 15:27:17.244826    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0503 15:27:17.254679    9866 logs.go:276] 0 containers: []
	W0503 15:27:17.254690    9866 logs.go:278] No container was found matching "kindnet"
	I0503 15:27:17.254735    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0503 15:27:17.264762    9866 logs.go:276] 1 containers: [12a0ca49b3ed]
	I0503 15:27:17.264784    9866 logs.go:123] Gathering logs for coredns [04bbb9b6629a] ...
	I0503 15:27:17.264789    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 04bbb9b6629a"
	I0503 15:27:17.276062    9866 logs.go:123] Gathering logs for coredns [c9e00225a075] ...
	I0503 15:27:17.276074    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9e00225a075"
	I0503 15:27:17.287338    9866 logs.go:123] Gathering logs for kube-scheduler [369a0695bf69] ...
	I0503 15:27:17.287350    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 369a0695bf69"
	I0503 15:27:17.305131    9866 logs.go:123] Gathering logs for container status ...
	I0503 15:27:17.305144    9866 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0503 15:27:17.316967    9866 logs.go:123] Gathering logs for coredns [c6e82b3c3afb] ...
	I0503 15:27:17.316981    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c6e82b3c3afb"
	I0503 15:27:17.332172    9866 logs.go:123] Gathering logs for coredns [c7b8230afccc] ...
	I0503 15:27:17.332181    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c7b8230afccc"
	I0503 15:27:17.343400    9866 logs.go:123] Gathering logs for dmesg ...
	I0503 15:27:17.343411    9866 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0503 15:27:17.347577    9866 logs.go:123] Gathering logs for Docker ...
	I0503 15:27:17.347585    9866 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0503 15:27:17.370816    9866 logs.go:123] Gathering logs for kube-controller-manager [873609a8047c] ...
	I0503 15:27:17.370831    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 873609a8047c"
	I0503 15:27:17.389223    9866 logs.go:123] Gathering logs for kubelet ...
	I0503 15:27:17.389232    9866 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0503 15:27:17.423577    9866 logs.go:123] Gathering logs for etcd [1a0896cfe495] ...
	I0503 15:27:17.423586    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1a0896cfe495"
	I0503 15:27:17.437004    9866 logs.go:123] Gathering logs for kube-proxy [828bdbf057fa] ...
	I0503 15:27:17.437012    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 828bdbf057fa"
	I0503 15:27:17.448540    9866 logs.go:123] Gathering logs for storage-provisioner [12a0ca49b3ed] ...
	I0503 15:27:17.448552    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 12a0ca49b3ed"
	I0503 15:27:17.460211    9866 logs.go:123] Gathering logs for describe nodes ...
	I0503 15:27:17.460222    9866 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0503 15:27:17.495213    9866 logs.go:123] Gathering logs for kube-apiserver [c8657393748b] ...
	I0503 15:27:17.495225    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c8657393748b"
	I0503 15:27:20.012151    9866 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0503 15:27:25.014086    9866 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0503 15:27:25.014531    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0503 15:27:25.055376    9866 logs.go:276] 1 containers: [c8657393748b]
	I0503 15:27:25.055536    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0503 15:27:25.078055    9866 logs.go:276] 1 containers: [1a0896cfe495]
	I0503 15:27:25.078165    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0503 15:27:25.093549    9866 logs.go:276] 4 containers: [c6e82b3c3afb c7b8230afccc 04bbb9b6629a c9e00225a075]
	I0503 15:27:25.093613    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0503 15:27:25.111073    9866 logs.go:276] 1 containers: [369a0695bf69]
	I0503 15:27:25.111149    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0503 15:27:25.124285    9866 logs.go:276] 1 containers: [828bdbf057fa]
	I0503 15:27:25.124344    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0503 15:27:25.135740    9866 logs.go:276] 1 containers: [873609a8047c]
	I0503 15:27:25.135806    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0503 15:27:25.146003    9866 logs.go:276] 0 containers: []
	W0503 15:27:25.146019    9866 logs.go:278] No container was found matching "kindnet"
	I0503 15:27:25.146076    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0503 15:27:25.157300    9866 logs.go:276] 1 containers: [12a0ca49b3ed]
	I0503 15:27:25.157319    9866 logs.go:123] Gathering logs for coredns [c6e82b3c3afb] ...
	I0503 15:27:25.157324    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c6e82b3c3afb"
	I0503 15:27:25.169117    9866 logs.go:123] Gathering logs for coredns [04bbb9b6629a] ...
	I0503 15:27:25.169130    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 04bbb9b6629a"
	I0503 15:27:25.187804    9866 logs.go:123] Gathering logs for kube-scheduler [369a0695bf69] ...
	I0503 15:27:25.187816    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 369a0695bf69"
	I0503 15:27:25.202985    9866 logs.go:123] Gathering logs for kube-proxy [828bdbf057fa] ...
	I0503 15:27:25.202996    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 828bdbf057fa"
	I0503 15:27:25.214929    9866 logs.go:123] Gathering logs for dmesg ...
	I0503 15:27:25.214937    9866 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0503 15:27:25.219058    9866 logs.go:123] Gathering logs for etcd [1a0896cfe495] ...
	I0503 15:27:25.219064    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1a0896cfe495"
	I0503 15:27:25.233604    9866 logs.go:123] Gathering logs for kube-controller-manager [873609a8047c] ...
	I0503 15:27:25.233614    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 873609a8047c"
	I0503 15:27:25.251393    9866 logs.go:123] Gathering logs for Docker ...
	I0503 15:27:25.251401    9866 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0503 15:27:25.275609    9866 logs.go:123] Gathering logs for container status ...
	I0503 15:27:25.275616    9866 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0503 15:27:25.286721    9866 logs.go:123] Gathering logs for kubelet ...
	I0503 15:27:25.286733    9866 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0503 15:27:25.320477    9866 logs.go:123] Gathering logs for describe nodes ...
	I0503 15:27:25.320483    9866 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0503 15:27:25.356725    9866 logs.go:123] Gathering logs for coredns [c9e00225a075] ...
	I0503 15:27:25.356738    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9e00225a075"
	I0503 15:27:25.368987    9866 logs.go:123] Gathering logs for storage-provisioner [12a0ca49b3ed] ...
	I0503 15:27:25.368999    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 12a0ca49b3ed"
	I0503 15:27:25.380615    9866 logs.go:123] Gathering logs for kube-apiserver [c8657393748b] ...
	I0503 15:27:25.380627    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c8657393748b"
	I0503 15:27:25.394787    9866 logs.go:123] Gathering logs for coredns [c7b8230afccc] ...
	I0503 15:27:25.394799    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c7b8230afccc"
	I0503 15:27:27.908275    9866 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0503 15:27:32.910454    9866 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0503 15:27:32.910557    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0503 15:27:32.922411    9866 logs.go:276] 1 containers: [c8657393748b]
	I0503 15:27:32.922474    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0503 15:27:32.934273    9866 logs.go:276] 1 containers: [1a0896cfe495]
	I0503 15:27:32.934365    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0503 15:27:32.946733    9866 logs.go:276] 4 containers: [c6e82b3c3afb c7b8230afccc 04bbb9b6629a c9e00225a075]
	I0503 15:27:32.946873    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0503 15:27:32.968306    9866 logs.go:276] 1 containers: [369a0695bf69]
	I0503 15:27:32.968388    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0503 15:27:32.979071    9866 logs.go:276] 1 containers: [828bdbf057fa]
	I0503 15:27:32.979132    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0503 15:27:32.991073    9866 logs.go:276] 1 containers: [873609a8047c]
	I0503 15:27:32.991136    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0503 15:27:33.003008    9866 logs.go:276] 0 containers: []
	W0503 15:27:33.003023    9866 logs.go:278] No container was found matching "kindnet"
	I0503 15:27:33.003088    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0503 15:27:33.014582    9866 logs.go:276] 1 containers: [12a0ca49b3ed]
	I0503 15:27:33.014597    9866 logs.go:123] Gathering logs for coredns [04bbb9b6629a] ...
	I0503 15:27:33.014602    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 04bbb9b6629a"
	I0503 15:27:33.026622    9866 logs.go:123] Gathering logs for coredns [c9e00225a075] ...
	I0503 15:27:33.026633    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9e00225a075"
	I0503 15:27:33.039848    9866 logs.go:123] Gathering logs for kube-scheduler [369a0695bf69] ...
	I0503 15:27:33.039858    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 369a0695bf69"
	I0503 15:27:33.056617    9866 logs.go:123] Gathering logs for container status ...
	I0503 15:27:33.056631    9866 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0503 15:27:33.069072    9866 logs.go:123] Gathering logs for dmesg ...
	I0503 15:27:33.069082    9866 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0503 15:27:33.073733    9866 logs.go:123] Gathering logs for describe nodes ...
	I0503 15:27:33.073743    9866 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0503 15:27:33.112014    9866 logs.go:123] Gathering logs for kube-apiserver [c8657393748b] ...
	I0503 15:27:33.112025    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c8657393748b"
	I0503 15:27:33.127033    9866 logs.go:123] Gathering logs for kube-controller-manager [873609a8047c] ...
	I0503 15:27:33.127047    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 873609a8047c"
	I0503 15:27:33.153390    9866 logs.go:123] Gathering logs for coredns [c6e82b3c3afb] ...
	I0503 15:27:33.153401    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c6e82b3c3afb"
	I0503 15:27:33.164857    9866 logs.go:123] Gathering logs for kube-proxy [828bdbf057fa] ...
	I0503 15:27:33.164868    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 828bdbf057fa"
	I0503 15:27:33.178885    9866 logs.go:123] Gathering logs for kubelet ...
	I0503 15:27:33.178898    9866 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0503 15:27:33.213949    9866 logs.go:123] Gathering logs for etcd [1a0896cfe495] ...
	I0503 15:27:33.213961    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1a0896cfe495"
	I0503 15:27:33.229303    9866 logs.go:123] Gathering logs for coredns [c7b8230afccc] ...
	I0503 15:27:33.229315    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c7b8230afccc"
	I0503 15:27:33.248634    9866 logs.go:123] Gathering logs for storage-provisioner [12a0ca49b3ed] ...
	I0503 15:27:33.248642    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 12a0ca49b3ed"
	I0503 15:27:33.259899    9866 logs.go:123] Gathering logs for Docker ...
	I0503 15:27:33.259907    9866 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0503 15:27:35.785722    9866 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0503 15:27:40.786684    9866 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0503 15:27:40.787110    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0503 15:27:40.823895    9866 logs.go:276] 1 containers: [c8657393748b]
	I0503 15:27:40.824026    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0503 15:27:40.845969    9866 logs.go:276] 1 containers: [1a0896cfe495]
	I0503 15:27:40.846073    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0503 15:27:40.861885    9866 logs.go:276] 4 containers: [c6e82b3c3afb c7b8230afccc 04bbb9b6629a c9e00225a075]
	I0503 15:27:40.861959    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0503 15:27:40.874435    9866 logs.go:276] 1 containers: [369a0695bf69]
	I0503 15:27:40.874491    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0503 15:27:40.898915    9866 logs.go:276] 1 containers: [828bdbf057fa]
	I0503 15:27:40.898986    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0503 15:27:40.909729    9866 logs.go:276] 1 containers: [873609a8047c]
	I0503 15:27:40.909796    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0503 15:27:40.920614    9866 logs.go:276] 0 containers: []
	W0503 15:27:40.920626    9866 logs.go:278] No container was found matching "kindnet"
	I0503 15:27:40.920680    9866 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0503 15:27:40.931258    9866 logs.go:276] 1 containers: [12a0ca49b3ed]
	I0503 15:27:40.931275    9866 logs.go:123] Gathering logs for etcd [1a0896cfe495] ...
	I0503 15:27:40.931281    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1a0896cfe495"
	I0503 15:27:40.945126    9866 logs.go:123] Gathering logs for coredns [c7b8230afccc] ...
	I0503 15:27:40.945139    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c7b8230afccc"
	I0503 15:27:40.957072    9866 logs.go:123] Gathering logs for storage-provisioner [12a0ca49b3ed] ...
	I0503 15:27:40.957085    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 12a0ca49b3ed"
	I0503 15:27:40.971895    9866 logs.go:123] Gathering logs for kube-apiserver [c8657393748b] ...
	I0503 15:27:40.971909    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c8657393748b"
	I0503 15:27:40.987302    9866 logs.go:123] Gathering logs for Docker ...
	I0503 15:27:40.987311    9866 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0503 15:27:41.009934    9866 logs.go:123] Gathering logs for kubelet ...
	I0503 15:27:41.009940    9866 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0503 15:27:41.041906    9866 logs.go:123] Gathering logs for describe nodes ...
	I0503 15:27:41.041912    9866 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0503 15:27:41.077542    9866 logs.go:123] Gathering logs for coredns [c9e00225a075] ...
	I0503 15:27:41.077551    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9e00225a075"
	I0503 15:27:41.089150    9866 logs.go:123] Gathering logs for kube-scheduler [369a0695bf69] ...
	I0503 15:27:41.089163    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 369a0695bf69"
	I0503 15:27:41.114719    9866 logs.go:123] Gathering logs for kube-proxy [828bdbf057fa] ...
	I0503 15:27:41.114729    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 828bdbf057fa"
	I0503 15:27:41.126304    9866 logs.go:123] Gathering logs for kube-controller-manager [873609a8047c] ...
	I0503 15:27:41.126316    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 873609a8047c"
	I0503 15:27:41.143930    9866 logs.go:123] Gathering logs for dmesg ...
	I0503 15:27:41.143942    9866 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0503 15:27:41.148250    9866 logs.go:123] Gathering logs for coredns [c6e82b3c3afb] ...
	I0503 15:27:41.148258    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c6e82b3c3afb"
	I0503 15:27:41.159654    9866 logs.go:123] Gathering logs for coredns [04bbb9b6629a] ...
	I0503 15:27:41.164939    9866 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 04bbb9b6629a"
	I0503 15:27:41.176434    9866 logs.go:123] Gathering logs for container status ...
	I0503 15:27:41.176446    9866 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0503 15:27:43.688703    9866 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0503 15:27:48.690932    9866 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0503 15:27:48.696487    9866 out.go:177] 
	W0503 15:27:48.700308    9866 out.go:239] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	W0503 15:27:48.700328    9866 out.go:239] * 
	W0503 15:27:48.701868    9866 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0503 15:27:48.711312    9866 out.go:177] 

** /stderr **
version_upgrade_test.go:200: upgrade from v1.26.0 to HEAD failed: out/minikube-darwin-arm64 start -p stopped-upgrade-139000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80
--- FAIL: TestStoppedBinaryUpgrade/Upgrade (576.51s)
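
The repeating block above is minikube's apiserver wait loop: roughly every eight seconds it probes https://10.0.2.15:8443/healthz with a 5s client timeout (api_server.go:253/269), gathering component and journal logs between attempts, until the 6m0s node-wait budget expires and the run exits with GUEST_START. As a diagnostic sketch, the same probe can be repeated by hand; the IP and port are taken from the log, and this assumes the guest is still up and reachable over SSH, which may not hold after the failure:

    # Re-run the health check minikube was timing out on; -k skips TLS
    # verification and --max-time 5 mirrors the 5s Client.Timeout in the log.
    minikube ssh -p stopped-upgrade-139000 "curl -k --max-time 5 https://10.0.2.15:8443/healthz"
    # A healthy apiserver answers "ok"; a hang or reset here matches the
    # "context deadline exceeded" lines above.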

TestPause/serial/Start (9.85s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-darwin-arm64 start -p pause-422000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 
pause_test.go:80: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p pause-422000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 : exit status 80 (9.814100708s)

-- stdout --
	* [pause-422000] minikube v1.33.0 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18793
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18793-7269/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18793-7269/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "pause-422000" primary control-plane node in "pause-422000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "pause-422000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p pause-422000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
pause_test.go:82: failed to start minikube with args: "out/minikube-darwin-arm64 start -p pause-422000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p pause-422000 -n pause-422000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p pause-422000 -n pause-422000: exit status 7 (36.823916ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "pause-422000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestPause/serial/Start (9.85s)
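
Every qemu2 start failure from here on shares the same proximate cause: nothing is accepting connections on /var/run/socket_vmnet, so each "Creating qemu2 VM" or "Restarting existing qemu2 VM" attempt dies before a guest ever boots. Host-side triage is quick; a sketch using standard macOS tools, where the socket path is the one printed in the log and the daemon name assumes a stock socket_vmnet install:

    # Is the unix socket present, and is anything serving it?
    ls -l /var/run/socket_vmnet    # mode string should begin with 's' (a socket)
    pgrep -fl socket_vmnet         # no output means the daemon is down, which is
                                   # exactly what "Connection refused" implies
    # Poke the socket directly; an immediate refusal reproduces the error:
    nc -U /var/run/socket_vmnet </dev/null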

TestNoKubernetes/serial/StartWithK8s (9.86s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-626000 --driver=qemu2 
no_kubernetes_test.go:95: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-626000 --driver=qemu2 : exit status 80 (9.81931275s)

-- stdout --
	* [NoKubernetes-626000] minikube v1.33.0 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18793
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18793-7269/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18793-7269/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "NoKubernetes-626000" primary control-plane node in "NoKubernetes-626000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "NoKubernetes-626000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-626000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
no_kubernetes_test.go:97: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-626000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-626000 -n NoKubernetes-626000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-626000 -n NoKubernetes-626000: exit status 7 (37.392375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-626000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartWithK8s (9.86s)
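
The post-mortem itself behaves as designed: minikube's status command encodes per-component health (host, cluster, Kubernetes) in the exit code's low bits, so with the host stopped it exits 7, and the harness notes "may be ok" and skips log retrieval rather than treating this as a second failure. A sketch confirming the mapping, reusing this test's profile name:

    # With the host "Stopped", all component bits are set and status exits 7.
    out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-626000
    echo "status exit code: $?"    # 7 here, matching the "Stopped" output above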

TestNoKubernetes/serial/StartWithStopK8s (5.32s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-626000 --no-kubernetes --driver=qemu2 
no_kubernetes_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-626000 --no-kubernetes --driver=qemu2 : exit status 80 (5.247217917s)

-- stdout --
	* [NoKubernetes-626000] minikube v1.33.0 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18793
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18793-7269/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18793-7269/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-626000
	* Restarting existing qemu2 VM for "NoKubernetes-626000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-626000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-626000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
no_kubernetes_test.go:114: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-626000 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-626000 -n NoKubernetes-626000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-626000 -n NoKubernetes-626000: exit status 7 (72.561333ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-626000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartWithStopK8s (5.32s)
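
The only difference from the StartWithK8s failure above is the phase in which the socket_vmnet refusal surfaces: this run reuses an existing profile, so the error is reported from "driver start" while restarting the VM rather than from "creating host". Before retrying, the cleanup the stderr itself recommends (shown here with the test's binary path) is:

    # Remove the half-provisioned profile, as the log suggests:
    out/minikube-darwin-arm64 delete -p NoKubernetes-626000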

TestNoKubernetes/serial/Start (5.32s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-626000 --no-kubernetes --driver=qemu2 
no_kubernetes_test.go:136: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-626000 --no-kubernetes --driver=qemu2 : exit status 80 (5.252991s)

-- stdout --
	* [NoKubernetes-626000] minikube v1.33.0 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18793
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18793-7269/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18793-7269/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-626000
	* Restarting existing qemu2 VM for "NoKubernetes-626000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-626000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-626000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
no_kubernetes_test.go:138: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-626000 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-626000 -n NoKubernetes-626000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-626000 -n NoKubernetes-626000: exit status 7 (62.659583ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-626000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/Start (5.32s)

TestNoKubernetes/serial/StartNoArgs (5.33s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-626000 --driver=qemu2 
no_kubernetes_test.go:191: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-626000 --driver=qemu2 : exit status 80 (5.262774208s)

-- stdout --
	* [NoKubernetes-626000] minikube v1.33.0 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18793
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18793-7269/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18793-7269/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-626000
	* Restarting existing qemu2 VM for "NoKubernetes-626000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-626000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-626000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
no_kubernetes_test.go:193: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-626000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-626000 -n NoKubernetes-626000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-626000 -n NoKubernetes-626000: exit status 7 (65.804375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-626000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartNoArgs (5.33s)

TestNetworkPlugins/group/auto/Start (9.79s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p auto-874000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p auto-874000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=qemu2 : exit status 80 (9.784084167s)

-- stdout --
	* [auto-874000] minikube v1.33.0 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18793
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18793-7269/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18793-7269/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "auto-874000" primary control-plane node in "auto-874000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "auto-874000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0503 15:25:56.260862   10543 out.go:291] Setting OutFile to fd 1 ...
	I0503 15:25:56.261002   10543 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0503 15:25:56.261006   10543 out.go:304] Setting ErrFile to fd 2...
	I0503 15:25:56.261008   10543 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0503 15:25:56.261138   10543 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18793-7269/.minikube/bin
	I0503 15:25:56.262209   10543 out.go:298] Setting JSON to false
	I0503 15:25:56.278402   10543 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":5127,"bootTime":1714770029,"procs":517,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0503 15:25:56.278477   10543 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0503 15:25:56.283911   10543 out.go:177] * [auto-874000] minikube v1.33.0 on Darwin 14.4.1 (arm64)
	I0503 15:25:56.291836   10543 out.go:177]   - MINIKUBE_LOCATION=18793
	I0503 15:25:56.291900   10543 notify.go:220] Checking for updates...
	I0503 15:25:56.299795   10543 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18793-7269/kubeconfig
	I0503 15:25:56.302874   10543 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0503 15:25:56.305859   10543 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0503 15:25:56.307365   10543 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18793-7269/.minikube
	I0503 15:25:56.310891   10543 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0503 15:25:56.314215   10543 config.go:182] Loaded profile config "multinode-952000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0503 15:25:56.314283   10543 config.go:182] Loaded profile config "stopped-upgrade-139000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0503 15:25:56.314328   10543 driver.go:392] Setting default libvirt URI to qemu:///system
	I0503 15:25:56.317897   10543 out.go:177] * Using the qemu2 driver based on user configuration
	I0503 15:25:56.324873   10543 start.go:297] selected driver: qemu2
	I0503 15:25:56.324878   10543 start.go:901] validating driver "qemu2" against <nil>
	I0503 15:25:56.324883   10543 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0503 15:25:56.327091   10543 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0503 15:25:56.328636   10543 out.go:177] * Automatically selected the socket_vmnet network
	I0503 15:25:56.331896   10543 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0503 15:25:56.331933   10543 cni.go:84] Creating CNI manager for ""
	I0503 15:25:56.331942   10543 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0503 15:25:56.331945   10543 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0503 15:25:56.331985   10543 start.go:340] cluster config:
	{Name:auto-874000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:auto-874000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0503 15:25:56.336346   10543 iso.go:125] acquiring lock: {Name:mk0ad6b95f1bb51c6112b561369aaae11c537b54 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0503 15:25:56.343809   10543 out.go:177] * Starting "auto-874000" primary control-plane node in "auto-874000" cluster
	I0503 15:25:56.347850   10543 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0503 15:25:56.347863   10543 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18793-7269/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-arm64.tar.lz4
	I0503 15:25:56.347869   10543 cache.go:56] Caching tarball of preloaded images
	I0503 15:25:56.347920   10543 preload.go:173] Found /Users/jenkins/minikube-integration/18793-7269/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0503 15:25:56.347924   10543 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
	I0503 15:25:56.347980   10543 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18793-7269/.minikube/profiles/auto-874000/config.json ...
	I0503 15:25:56.347990   10543 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18793-7269/.minikube/profiles/auto-874000/config.json: {Name:mkc8558319283c5812fa7f0416989b418bbc9ad9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0503 15:25:56.348355   10543 start.go:360] acquireMachinesLock for auto-874000: {Name:mk75a6a65f97ac1ce21c567594b284ec6b0a9ff6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0503 15:25:56.348387   10543 start.go:364] duration metric: took 25.583µs to acquireMachinesLock for "auto-874000"
	I0503 15:25:56.348397   10543 start.go:93] Provisioning new machine with config: &{Name:auto-874000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:auto-874000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0503 15:25:56.348429   10543 start.go:125] createHost starting for "" (driver="qemu2")
	I0503 15:25:56.352825   10543 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0503 15:25:56.368300   10543 start.go:159] libmachine.API.Create for "auto-874000" (driver="qemu2")
	I0503 15:25:56.368320   10543 client.go:168] LocalClient.Create starting
	I0503 15:25:56.368381   10543 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18793-7269/.minikube/certs/ca.pem
	I0503 15:25:56.368411   10543 main.go:141] libmachine: Decoding PEM data...
	I0503 15:25:56.368420   10543 main.go:141] libmachine: Parsing certificate...
	I0503 15:25:56.368455   10543 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18793-7269/.minikube/certs/cert.pem
	I0503 15:25:56.368478   10543 main.go:141] libmachine: Decoding PEM data...
	I0503 15:25:56.368483   10543 main.go:141] libmachine: Parsing certificate...
	I0503 15:25:56.368824   10543 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18793-7269/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18793-7269/.minikube/cache/iso/arm64/minikube-v1.33.0-1714498396-18779-arm64.iso...
	I0503 15:25:56.512361   10543 main.go:141] libmachine: Creating SSH key...
	I0503 15:25:56.621044   10543 main.go:141] libmachine: Creating Disk image...
	I0503 15:25:56.621051   10543 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0503 15:25:56.621227   10543 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/auto-874000/disk.qcow2.raw /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/auto-874000/disk.qcow2
	I0503 15:25:56.634524   10543 main.go:141] libmachine: STDOUT: 
	I0503 15:25:56.634550   10543 main.go:141] libmachine: STDERR: 
	I0503 15:25:56.634614   10543 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/auto-874000/disk.qcow2 +20000M
	I0503 15:25:56.645903   10543 main.go:141] libmachine: STDOUT: Image resized.
	
	I0503 15:25:56.645922   10543 main.go:141] libmachine: STDERR: 
	I0503 15:25:56.645935   10543 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/auto-874000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/auto-874000/disk.qcow2
	I0503 15:25:56.645939   10543 main.go:141] libmachine: Starting QEMU VM...
	I0503 15:25:56.645968   10543 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/auto-874000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18793-7269/.minikube/machines/auto-874000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/auto-874000/qemu.pid -device virtio-net-pci,netdev=net0,mac=82:3f:49:19:b2:6f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/auto-874000/disk.qcow2
	I0503 15:25:56.647720   10543 main.go:141] libmachine: STDOUT: 
	I0503 15:25:56.647735   10543 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0503 15:25:56.647754   10543 client.go:171] duration metric: took 279.437625ms to LocalClient.Create
	I0503 15:25:58.648472   10543 start.go:128] duration metric: took 2.300090875s to createHost
	I0503 15:25:58.648489   10543 start.go:83] releasing machines lock for "auto-874000", held for 2.300150208s
	W0503 15:25:58.648503   10543 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0503 15:25:58.653042   10543 out.go:177] * Deleting "auto-874000" in qemu2 ...
	W0503 15:25:58.662733   10543 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0503 15:25:58.662753   10543 start.go:728] Will try again in 5 seconds ...
	I0503 15:26:03.664922   10543 start.go:360] acquireMachinesLock for auto-874000: {Name:mk75a6a65f97ac1ce21c567594b284ec6b0a9ff6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0503 15:26:03.665476   10543 start.go:364] duration metric: took 434.375µs to acquireMachinesLock for "auto-874000"
	I0503 15:26:03.665632   10543 start.go:93] Provisioning new machine with config: &{Name:auto-874000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:auto-874000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0503 15:26:03.665939   10543 start.go:125] createHost starting for "" (driver="qemu2")
	I0503 15:26:03.671442   10543 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0503 15:26:03.714436   10543 start.go:159] libmachine.API.Create for "auto-874000" (driver="qemu2")
	I0503 15:26:03.714569   10543 client.go:168] LocalClient.Create starting
	I0503 15:26:03.714683   10543 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18793-7269/.minikube/certs/ca.pem
	I0503 15:26:03.714751   10543 main.go:141] libmachine: Decoding PEM data...
	I0503 15:26:03.714765   10543 main.go:141] libmachine: Parsing certificate...
	I0503 15:26:03.714817   10543 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18793-7269/.minikube/certs/cert.pem
	I0503 15:26:03.714861   10543 main.go:141] libmachine: Decoding PEM data...
	I0503 15:26:03.714872   10543 main.go:141] libmachine: Parsing certificate...
	I0503 15:26:03.715391   10543 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18793-7269/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18793-7269/.minikube/cache/iso/arm64/minikube-v1.33.0-1714498396-18779-arm64.iso...
	I0503 15:26:03.868295   10543 main.go:141] libmachine: Creating SSH key...
	I0503 15:26:03.942682   10543 main.go:141] libmachine: Creating Disk image...
	I0503 15:26:03.942693   10543 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0503 15:26:03.942890   10543 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/auto-874000/disk.qcow2.raw /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/auto-874000/disk.qcow2
	I0503 15:26:03.955802   10543 main.go:141] libmachine: STDOUT: 
	I0503 15:26:03.955820   10543 main.go:141] libmachine: STDERR: 
	I0503 15:26:03.955869   10543 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/auto-874000/disk.qcow2 +20000M
	I0503 15:26:03.967309   10543 main.go:141] libmachine: STDOUT: Image resized.
	
	I0503 15:26:03.967336   10543 main.go:141] libmachine: STDERR: 
	I0503 15:26:03.967368   10543 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/auto-874000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/auto-874000/disk.qcow2
	I0503 15:26:03.967375   10543 main.go:141] libmachine: Starting QEMU VM...
	I0503 15:26:03.967419   10543 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/auto-874000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18793-7269/.minikube/machines/auto-874000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/auto-874000/qemu.pid -device virtio-net-pci,netdev=net0,mac=42:47:bd:4b:29:63 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/auto-874000/disk.qcow2
	I0503 15:26:03.969276   10543 main.go:141] libmachine: STDOUT: 
	I0503 15:26:03.969294   10543 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0503 15:26:03.969316   10543 client.go:171] duration metric: took 254.74575ms to LocalClient.Create
	I0503 15:26:05.971366   10543 start.go:128] duration metric: took 2.305459917s to createHost
	I0503 15:26:05.971401   10543 start.go:83] releasing machines lock for "auto-874000", held for 2.30595925s
	W0503 15:26:05.971554   10543 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p auto-874000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p auto-874000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0503 15:26:05.987563   10543 out.go:177] 
	W0503 15:26:05.990494   10543 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0503 15:26:05.990504   10543 out.go:239] * 
	* 
	W0503 15:26:05.991373   10543 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0503 15:26:06.005478   10543 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/auto/Start (9.79s)
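The network-plugin starts below fail identically, before any CNI-specific logic runs. Since the "executing:" lines above show minikube launching QEMU through socket_vmnet_client, the client binary itself doubles as a probe: it connects to the socket first and only then execs the wrapped command with the connection passed as fd 3 (hence "-netdev socket,id=net0,fd=3" in the QEMU arguments). A sketch in Go, assuming the install paths from the log and wrapping /usr/bin/true so the exit status reflects only the connect:

// clientprobe.go -- a sketch reusing socket_vmnet_client as a liveness check.
// Paths are the ones logged above; adjust them if socket_vmnet lives elsewhere.
package main

import (
	"log"
	"os/exec"
)

func main() {
	cmd := exec.Command(
		"/opt/socket_vmnet/bin/socket_vmnet_client",
		"/var/run/socket_vmnet",
		"/usr/bin/true",
	)
	if out, err := cmd.CombinedOutput(); err != nil {
		// Matches the failing tests: exit status 1 with
		// Failed to connect to "/var/run/socket_vmnet": Connection refused
		log.Fatalf("probe failed: %v\n%s", err, out)
	}
	log.Println("socket_vmnet_client connected")
}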

TestNetworkPlugins/group/flannel/Start (9.98s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p flannel-874000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p flannel-874000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=qemu2 : exit status 80 (9.976640166s)

-- stdout --
	* [flannel-874000] minikube v1.33.0 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18793
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18793-7269/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18793-7269/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "flannel-874000" primary control-plane node in "flannel-874000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "flannel-874000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0503 15:26:08.339795   10673 out.go:291] Setting OutFile to fd 1 ...
	I0503 15:26:08.339923   10673 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0503 15:26:08.339927   10673 out.go:304] Setting ErrFile to fd 2...
	I0503 15:26:08.339929   10673 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0503 15:26:08.340052   10673 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18793-7269/.minikube/bin
	I0503 15:26:08.341130   10673 out.go:298] Setting JSON to false
	I0503 15:26:08.357693   10673 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":5139,"bootTime":1714770029,"procs":518,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0503 15:26:08.357763   10673 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0503 15:26:08.363824   10673 out.go:177] * [flannel-874000] minikube v1.33.0 on Darwin 14.4.1 (arm64)
	I0503 15:26:08.371971   10673 out.go:177]   - MINIKUBE_LOCATION=18793
	I0503 15:26:08.371988   10673 notify.go:220] Checking for updates...
	I0503 15:26:08.375973   10673 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18793-7269/kubeconfig
	I0503 15:26:08.378937   10673 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0503 15:26:08.381990   10673 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0503 15:26:08.384953   10673 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18793-7269/.minikube
	I0503 15:26:08.387909   10673 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0503 15:26:08.391266   10673 config.go:182] Loaded profile config "multinode-952000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0503 15:26:08.391332   10673 config.go:182] Loaded profile config "stopped-upgrade-139000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0503 15:26:08.391376   10673 driver.go:392] Setting default libvirt URI to qemu:///system
	I0503 15:26:08.394936   10673 out.go:177] * Using the qemu2 driver based on user configuration
	I0503 15:26:08.401948   10673 start.go:297] selected driver: qemu2
	I0503 15:26:08.401954   10673 start.go:901] validating driver "qemu2" against <nil>
	I0503 15:26:08.401960   10673 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0503 15:26:08.404156   10673 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0503 15:26:08.406868   10673 out.go:177] * Automatically selected the socket_vmnet network
	I0503 15:26:08.411130   10673 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0503 15:26:08.411168   10673 cni.go:84] Creating CNI manager for "flannel"
	I0503 15:26:08.411172   10673 start_flags.go:319] Found "Flannel" CNI - setting NetworkPlugin=cni
	I0503 15:26:08.411208   10673 start.go:340] cluster config:
	{Name:flannel-874000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:flannel-874000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0503 15:26:08.415567   10673 iso.go:125] acquiring lock: {Name:mk0ad6b95f1bb51c6112b561369aaae11c537b54 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0503 15:26:08.422947   10673 out.go:177] * Starting "flannel-874000" primary control-plane node in "flannel-874000" cluster
	I0503 15:26:08.426877   10673 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0503 15:26:08.426893   10673 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18793-7269/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-arm64.tar.lz4
	I0503 15:26:08.426897   10673 cache.go:56] Caching tarball of preloaded images
	I0503 15:26:08.426957   10673 preload.go:173] Found /Users/jenkins/minikube-integration/18793-7269/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0503 15:26:08.426962   10673 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
	I0503 15:26:08.427011   10673 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18793-7269/.minikube/profiles/flannel-874000/config.json ...
	I0503 15:26:08.427021   10673 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18793-7269/.minikube/profiles/flannel-874000/config.json: {Name:mkcb115849269979b272916a81fe01eb25a36876 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0503 15:26:08.427252   10673 start.go:360] acquireMachinesLock for flannel-874000: {Name:mk75a6a65f97ac1ce21c567594b284ec6b0a9ff6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0503 15:26:08.427283   10673 start.go:364] duration metric: took 25.875µs to acquireMachinesLock for "flannel-874000"
	I0503 15:26:08.427294   10673 start.go:93] Provisioning new machine with config: &{Name:flannel-874000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:flannel-874000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0503 15:26:08.427330   10673 start.go:125] createHost starting for "" (driver="qemu2")
	I0503 15:26:08.435983   10673 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0503 15:26:08.452707   10673 start.go:159] libmachine.API.Create for "flannel-874000" (driver="qemu2")
	I0503 15:26:08.452736   10673 client.go:168] LocalClient.Create starting
	I0503 15:26:08.452797   10673 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18793-7269/.minikube/certs/ca.pem
	I0503 15:26:08.452827   10673 main.go:141] libmachine: Decoding PEM data...
	I0503 15:26:08.452841   10673 main.go:141] libmachine: Parsing certificate...
	I0503 15:26:08.452880   10673 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18793-7269/.minikube/certs/cert.pem
	I0503 15:26:08.452912   10673 main.go:141] libmachine: Decoding PEM data...
	I0503 15:26:08.452918   10673 main.go:141] libmachine: Parsing certificate...
	I0503 15:26:08.453273   10673 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18793-7269/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18793-7269/.minikube/cache/iso/arm64/minikube-v1.33.0-1714498396-18779-arm64.iso...
	I0503 15:26:08.596938   10673 main.go:141] libmachine: Creating SSH key...
	I0503 15:26:08.839668   10673 main.go:141] libmachine: Creating Disk image...
	I0503 15:26:08.839680   10673 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0503 15:26:08.839867   10673 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/flannel-874000/disk.qcow2.raw /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/flannel-874000/disk.qcow2
	I0503 15:26:08.852759   10673 main.go:141] libmachine: STDOUT: 
	I0503 15:26:08.852789   10673 main.go:141] libmachine: STDERR: 
	I0503 15:26:08.852856   10673 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/flannel-874000/disk.qcow2 +20000M
	I0503 15:26:08.864378   10673 main.go:141] libmachine: STDOUT: Image resized.
	
	I0503 15:26:08.864399   10673 main.go:141] libmachine: STDERR: 
	I0503 15:26:08.864418   10673 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/flannel-874000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/flannel-874000/disk.qcow2
	I0503 15:26:08.864424   10673 main.go:141] libmachine: Starting QEMU VM...
	I0503 15:26:08.864465   10673 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/flannel-874000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18793-7269/.minikube/machines/flannel-874000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/flannel-874000/qemu.pid -device virtio-net-pci,netdev=net0,mac=5a:9a:81:cf:79:2f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/flannel-874000/disk.qcow2
	I0503 15:26:08.866182   10673 main.go:141] libmachine: STDOUT: 
	I0503 15:26:08.866199   10673 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0503 15:26:08.866223   10673 client.go:171] duration metric: took 413.491041ms to LocalClient.Create
	I0503 15:26:10.868384   10673 start.go:128] duration metric: took 2.441080292s to createHost
	I0503 15:26:10.868482   10673 start.go:83] releasing machines lock for "flannel-874000", held for 2.4412455s
	W0503 15:26:10.868594   10673 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0503 15:26:10.883835   10673 out.go:177] * Deleting "flannel-874000" in qemu2 ...
	W0503 15:26:10.908073   10673 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0503 15:26:10.908097   10673 start.go:728] Will try again in 5 seconds ...
	I0503 15:26:15.910161   10673 start.go:360] acquireMachinesLock for flannel-874000: {Name:mk75a6a65f97ac1ce21c567594b284ec6b0a9ff6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0503 15:26:15.910273   10673 start.go:364] duration metric: took 73.042µs to acquireMachinesLock for "flannel-874000"
	I0503 15:26:15.910288   10673 start.go:93] Provisioning new machine with config: &{Name:flannel-874000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:flannel-874000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0503 15:26:15.910337   10673 start.go:125] createHost starting for "" (driver="qemu2")
	I0503 15:26:15.918517   10673 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0503 15:26:15.933795   10673 start.go:159] libmachine.API.Create for "flannel-874000" (driver="qemu2")
	I0503 15:26:15.933816   10673 client.go:168] LocalClient.Create starting
	I0503 15:26:15.933871   10673 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18793-7269/.minikube/certs/ca.pem
	I0503 15:26:15.933902   10673 main.go:141] libmachine: Decoding PEM data...
	I0503 15:26:15.933910   10673 main.go:141] libmachine: Parsing certificate...
	I0503 15:26:15.933953   10673 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18793-7269/.minikube/certs/cert.pem
	I0503 15:26:15.933975   10673 main.go:141] libmachine: Decoding PEM data...
	I0503 15:26:15.933981   10673 main.go:141] libmachine: Parsing certificate...
	I0503 15:26:15.934266   10673 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18793-7269/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18793-7269/.minikube/cache/iso/arm64/minikube-v1.33.0-1714498396-18779-arm64.iso...
	I0503 15:26:16.077628   10673 main.go:141] libmachine: Creating SSH key...
	I0503 15:26:16.212659   10673 main.go:141] libmachine: Creating Disk image...
	I0503 15:26:16.212666   10673 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0503 15:26:16.213065   10673 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/flannel-874000/disk.qcow2.raw /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/flannel-874000/disk.qcow2
	I0503 15:26:16.225914   10673 main.go:141] libmachine: STDOUT: 
	I0503 15:26:16.225936   10673 main.go:141] libmachine: STDERR: 
	I0503 15:26:16.225998   10673 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/flannel-874000/disk.qcow2 +20000M
	I0503 15:26:16.237119   10673 main.go:141] libmachine: STDOUT: Image resized.
	
	I0503 15:26:16.237137   10673 main.go:141] libmachine: STDERR: 
	I0503 15:26:16.237152   10673 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/flannel-874000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/flannel-874000/disk.qcow2
	I0503 15:26:16.237156   10673 main.go:141] libmachine: Starting QEMU VM...
	I0503 15:26:16.237193   10673 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/flannel-874000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18793-7269/.minikube/machines/flannel-874000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/flannel-874000/qemu.pid -device virtio-net-pci,netdev=net0,mac=4a:76:9f:ac:e8:8a -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/flannel-874000/disk.qcow2
	I0503 15:26:16.238923   10673 main.go:141] libmachine: STDOUT: 
	I0503 15:26:16.238938   10673 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0503 15:26:16.238953   10673 client.go:171] duration metric: took 305.140541ms to LocalClient.Create
	I0503 15:26:18.241215   10673 start.go:128] duration metric: took 2.330865375s to createHost
	I0503 15:26:18.241345   10673 start.go:83] releasing machines lock for "flannel-874000", held for 2.331076458s
	W0503 15:26:18.241609   10673 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p flannel-874000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p flannel-874000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0503 15:26:18.255268   10673 out.go:177] 
	W0503 15:26:18.260198   10673 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0503 15:26:18.260255   10673 out.go:239] * 
	* 
	W0503 15:26:18.262801   10673 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0503 15:26:18.271223   10673 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/flannel/Start (9.98s)
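
Every failure in this group reduces to the same STDERR line: Failed to connect to "/var/run/socket_vmnet": Connection refused. socket_vmnet_client could not reach the socket_vmnet daemon's unix socket, so it never obtained the network file descriptor it hands to qemu-system-aarch64 (the -netdev socket,id=net0,fd=3 argument above). Below is a minimal Go sketch of a pre-flight probe for that socket; the path is taken from the logs above, and the program is illustrative, not part of minikube.

	// socketcheck: probe the socket_vmnet unix socket before starting QEMU VMs.
	package main

	import (
		"fmt"
		"net"
		"os"
		"time"
	)

	func main() {
		const sock = "/var/run/socket_vmnet" // path observed in the failing logs
		conn, err := net.DialTimeout("unix", sock, 2*time.Second)
		if err != nil {
			// "connection refused" here reproduces the failure mode above:
			// the socket file may exist, but no daemon is accepting on it.
			fmt.Fprintf(os.Stderr, "socket_vmnet not reachable at %s: %v\n", sock, err)
			os.Exit(1)
		}
		conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}

Running such a probe once on the CI host (and restarting the socket_vmnet daemon if it fails) would distinguish a broken environment from a driver regression before ~10 seconds are burned per test.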

TestNetworkPlugins/group/kindnet/Start (9.86s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p kindnet-874000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kindnet-874000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=qemu2 : exit status 80 (9.861921917s)

-- stdout --
	* [kindnet-874000] minikube v1.33.0 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18793
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18793-7269/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18793-7269/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "kindnet-874000" primary control-plane node in "kindnet-874000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kindnet-874000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0503 15:26:20.778190   10803 out.go:291] Setting OutFile to fd 1 ...
	I0503 15:26:20.778342   10803 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0503 15:26:20.778348   10803 out.go:304] Setting ErrFile to fd 2...
	I0503 15:26:20.778351   10803 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0503 15:26:20.778464   10803 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18793-7269/.minikube/bin
	I0503 15:26:20.779539   10803 out.go:298] Setting JSON to false
	I0503 15:26:20.795881   10803 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":5151,"bootTime":1714770029,"procs":509,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0503 15:26:20.795952   10803 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0503 15:26:20.800882   10803 out.go:177] * [kindnet-874000] minikube v1.33.0 on Darwin 14.4.1 (arm64)
	I0503 15:26:20.808945   10803 out.go:177]   - MINIKUBE_LOCATION=18793
	I0503 15:26:20.812956   10803 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18793-7269/kubeconfig
	I0503 15:26:20.808984   10803 notify.go:220] Checking for updates...
	I0503 15:26:20.815892   10803 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0503 15:26:20.818953   10803 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0503 15:26:20.821899   10803 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18793-7269/.minikube
	I0503 15:26:20.837885   10803 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0503 15:26:20.841363   10803 config.go:182] Loaded profile config "multinode-952000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0503 15:26:20.841434   10803 config.go:182] Loaded profile config "stopped-upgrade-139000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0503 15:26:20.841483   10803 driver.go:392] Setting default libvirt URI to qemu:///system
	I0503 15:26:20.845863   10803 out.go:177] * Using the qemu2 driver based on user configuration
	I0503 15:26:20.852898   10803 start.go:297] selected driver: qemu2
	I0503 15:26:20.852904   10803 start.go:901] validating driver "qemu2" against <nil>
	I0503 15:26:20.852908   10803 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0503 15:26:20.855122   10803 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0503 15:26:20.857812   10803 out.go:177] * Automatically selected the socket_vmnet network
	I0503 15:26:20.860930   10803 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0503 15:26:20.860957   10803 cni.go:84] Creating CNI manager for "kindnet"
	I0503 15:26:20.860960   10803 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0503 15:26:20.860987   10803 start.go:340] cluster config:
	{Name:kindnet-874000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:kindnet-874000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0503 15:26:20.865172   10803 iso.go:125] acquiring lock: {Name:mk0ad6b95f1bb51c6112b561369aaae11c537b54 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0503 15:26:20.871861   10803 out.go:177] * Starting "kindnet-874000" primary control-plane node in "kindnet-874000" cluster
	I0503 15:26:20.875900   10803 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0503 15:26:20.875914   10803 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18793-7269/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-arm64.tar.lz4
	I0503 15:26:20.875924   10803 cache.go:56] Caching tarball of preloaded images
	I0503 15:26:20.875986   10803 preload.go:173] Found /Users/jenkins/minikube-integration/18793-7269/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0503 15:26:20.875991   10803 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
	I0503 15:26:20.876049   10803 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18793-7269/.minikube/profiles/kindnet-874000/config.json ...
	I0503 15:26:20.876059   10803 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18793-7269/.minikube/profiles/kindnet-874000/config.json: {Name:mkd63de59b1389570002533eea5bd0271ad9339c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0503 15:26:20.876439   10803 start.go:360] acquireMachinesLock for kindnet-874000: {Name:mk75a6a65f97ac1ce21c567594b284ec6b0a9ff6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0503 15:26:20.876467   10803 start.go:364] duration metric: took 22.5µs to acquireMachinesLock for "kindnet-874000"
	I0503 15:26:20.876476   10803 start.go:93] Provisioning new machine with config: &{Name:kindnet-874000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:kindnet-874000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0503 15:26:20.876505   10803 start.go:125] createHost starting for "" (driver="qemu2")
	I0503 15:26:20.880907   10803 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0503 15:26:20.895422   10803 start.go:159] libmachine.API.Create for "kindnet-874000" (driver="qemu2")
	I0503 15:26:20.895443   10803 client.go:168] LocalClient.Create starting
	I0503 15:26:20.895505   10803 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18793-7269/.minikube/certs/ca.pem
	I0503 15:26:20.895538   10803 main.go:141] libmachine: Decoding PEM data...
	I0503 15:26:20.895552   10803 main.go:141] libmachine: Parsing certificate...
	I0503 15:26:20.895595   10803 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18793-7269/.minikube/certs/cert.pem
	I0503 15:26:20.895616   10803 main.go:141] libmachine: Decoding PEM data...
	I0503 15:26:20.895622   10803 main.go:141] libmachine: Parsing certificate...
	I0503 15:26:20.896127   10803 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18793-7269/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18793-7269/.minikube/cache/iso/arm64/minikube-v1.33.0-1714498396-18779-arm64.iso...
	I0503 15:26:21.039392   10803 main.go:141] libmachine: Creating SSH key...
	I0503 15:26:21.166497   10803 main.go:141] libmachine: Creating Disk image...
	I0503 15:26:21.166503   10803 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0503 15:26:21.166688   10803 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/kindnet-874000/disk.qcow2.raw /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/kindnet-874000/disk.qcow2
	I0503 15:26:21.179446   10803 main.go:141] libmachine: STDOUT: 
	I0503 15:26:21.179463   10803 main.go:141] libmachine: STDERR: 
	I0503 15:26:21.179513   10803 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/kindnet-874000/disk.qcow2 +20000M
	I0503 15:26:21.190993   10803 main.go:141] libmachine: STDOUT: Image resized.
	
	I0503 15:26:21.191023   10803 main.go:141] libmachine: STDERR: 
	I0503 15:26:21.191044   10803 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/kindnet-874000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/kindnet-874000/disk.qcow2
	I0503 15:26:21.191050   10803 main.go:141] libmachine: Starting QEMU VM...
	I0503 15:26:21.191095   10803 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/kindnet-874000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18793-7269/.minikube/machines/kindnet-874000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/kindnet-874000/qemu.pid -device virtio-net-pci,netdev=net0,mac=2e:0b:7d:c5:a8:4d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/kindnet-874000/disk.qcow2
	I0503 15:26:21.192908   10803 main.go:141] libmachine: STDOUT: 
	I0503 15:26:21.192925   10803 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0503 15:26:21.192944   10803 client.go:171] duration metric: took 297.5045ms to LocalClient.Create
	I0503 15:26:23.194145   10803 start.go:128] duration metric: took 2.318631583s to createHost
	I0503 15:26:23.194345   10803 start.go:83] releasing machines lock for "kindnet-874000", held for 2.318849208s
	W0503 15:26:23.194475   10803 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0503 15:26:23.206918   10803 out.go:177] * Deleting "kindnet-874000" in qemu2 ...
	W0503 15:26:23.234126   10803 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0503 15:26:23.234157   10803 start.go:728] Will try again in 5 seconds ...
	I0503 15:26:28.229788   10803 start.go:360] acquireMachinesLock for kindnet-874000: {Name:mk75a6a65f97ac1ce21c567594b284ec6b0a9ff6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0503 15:26:28.230309   10803 start.go:364] duration metric: took 415.791µs to acquireMachinesLock for "kindnet-874000"
	I0503 15:26:28.230433   10803 start.go:93] Provisioning new machine with config: &{Name:kindnet-874000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:kindnet-874000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0503 15:26:28.230622   10803 start.go:125] createHost starting for "" (driver="qemu2")
	I0503 15:26:28.239253   10803 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0503 15:26:28.283671   10803 start.go:159] libmachine.API.Create for "kindnet-874000" (driver="qemu2")
	I0503 15:26:28.283714   10803 client.go:168] LocalClient.Create starting
	I0503 15:26:28.283839   10803 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18793-7269/.minikube/certs/ca.pem
	I0503 15:26:28.283914   10803 main.go:141] libmachine: Decoding PEM data...
	I0503 15:26:28.283929   10803 main.go:141] libmachine: Parsing certificate...
	I0503 15:26:28.283992   10803 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18793-7269/.minikube/certs/cert.pem
	I0503 15:26:28.284032   10803 main.go:141] libmachine: Decoding PEM data...
	I0503 15:26:28.284044   10803 main.go:141] libmachine: Parsing certificate...
	I0503 15:26:28.284557   10803 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18793-7269/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18793-7269/.minikube/cache/iso/arm64/minikube-v1.33.0-1714498396-18779-arm64.iso...
	I0503 15:26:28.436932   10803 main.go:141] libmachine: Creating SSH key...
	I0503 15:26:28.529279   10803 main.go:141] libmachine: Creating Disk image...
	I0503 15:26:28.529297   10803 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0503 15:26:28.529501   10803 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/kindnet-874000/disk.qcow2.raw /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/kindnet-874000/disk.qcow2
	I0503 15:26:28.542327   10803 main.go:141] libmachine: STDOUT: 
	I0503 15:26:28.542348   10803 main.go:141] libmachine: STDERR: 
	I0503 15:26:28.542407   10803 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/kindnet-874000/disk.qcow2 +20000M
	I0503 15:26:28.553737   10803 main.go:141] libmachine: STDOUT: Image resized.
	
	I0503 15:26:28.553792   10803 main.go:141] libmachine: STDERR: 
	I0503 15:26:28.553803   10803 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/kindnet-874000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/kindnet-874000/disk.qcow2
	I0503 15:26:28.553806   10803 main.go:141] libmachine: Starting QEMU VM...
	I0503 15:26:28.553865   10803 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/kindnet-874000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18793-7269/.minikube/machines/kindnet-874000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/kindnet-874000/qemu.pid -device virtio-net-pci,netdev=net0,mac=92:8d:c4:34:2f:5d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/kindnet-874000/disk.qcow2
	I0503 15:26:28.555626   10803 main.go:141] libmachine: STDOUT: 
	I0503 15:26:28.555672   10803 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0503 15:26:28.555683   10803 client.go:171] duration metric: took 272.274958ms to LocalClient.Create
	I0503 15:26:30.553797   10803 start.go:128] duration metric: took 2.325580458s to createHost
	I0503 15:26:30.553839   10803 start.go:83] releasing machines lock for "kindnet-874000", held for 2.325956875s
	W0503 15:26:30.554022   10803 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p kindnet-874000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kindnet-874000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0503 15:26:30.571342   10803 out.go:177] 
	W0503 15:26:30.575289   10803 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0503 15:26:30.575298   10803 out.go:239] * 
	* 
	W0503 15:26:30.576244   10803 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0503 15:26:30.590247   10803 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/kindnet/Start (9.86s)
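
The kindnet run shows the same recovery path as flannel: StartHost fails, minikube deletes the half-created profile, waits 5 seconds ("Will try again in 5 seconds ..."), and retries exactly once before exiting with GUEST_PROVISION. Because the retry re-runs the identical create path while /var/run/socket_vmnet still has no listener, the second attempt is guaranteed to fail the same way. A sketch of that fixed-delay, single-retry shape, written against a hypothetical startHost helper rather than minikube's real API:

	// retrysketch: the one-retry flow visible in this log, in miniature.
	package main

	import (
		"errors"
		"fmt"
		"time"
	)

	// startHost stands in for libmachine's create path; both attempts in the
	// log fail identically because the socket never starts listening.
	func startHost() error {
		return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
	}

	func main() {
		if err := startHost(); err != nil {
			fmt.Println("! StartHost failed, but will try again:", err)
			time.Sleep(5 * time.Second)
			if err := startHost(); err != nil {
				fmt.Println("X Exiting due to GUEST_PROVISION:", err)
			}
		}
	}

A retry like this helps with transient races; for a hard precondition such as a missing daemon, a cheap listener check before the delay would fail fast instead.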

TestNetworkPlugins/group/enable-default-cni/Start (9.86s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p enable-default-cni-874000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p enable-default-cni-874000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=qemu2 : exit status 80 (9.861169292s)

-- stdout --
	* [enable-default-cni-874000] minikube v1.33.0 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18793
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18793-7269/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18793-7269/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "enable-default-cni-874000" primary control-plane node in "enable-default-cni-874000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "enable-default-cni-874000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0503 15:26:33.000928   10934 out.go:291] Setting OutFile to fd 1 ...
	I0503 15:26:33.001063   10934 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0503 15:26:33.001066   10934 out.go:304] Setting ErrFile to fd 2...
	I0503 15:26:33.001069   10934 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0503 15:26:33.001197   10934 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18793-7269/.minikube/bin
	I0503 15:26:33.002276   10934 out.go:298] Setting JSON to false
	I0503 15:26:33.018513   10934 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":5164,"bootTime":1714770029,"procs":506,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0503 15:26:33.018588   10934 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0503 15:26:33.024514   10934 out.go:177] * [enable-default-cni-874000] minikube v1.33.0 on Darwin 14.4.1 (arm64)
	I0503 15:26:33.031509   10934 out.go:177]   - MINIKUBE_LOCATION=18793
	I0503 15:26:33.031514   10934 notify.go:220] Checking for updates...
	I0503 15:26:33.039454   10934 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18793-7269/kubeconfig
	I0503 15:26:33.042461   10934 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0503 15:26:33.045416   10934 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0503 15:26:33.048473   10934 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18793-7269/.minikube
	I0503 15:26:33.049660   10934 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0503 15:26:33.052829   10934 config.go:182] Loaded profile config "multinode-952000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0503 15:26:33.052898   10934 config.go:182] Loaded profile config "stopped-upgrade-139000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0503 15:26:33.052952   10934 driver.go:392] Setting default libvirt URI to qemu:///system
	I0503 15:26:33.057438   10934 out.go:177] * Using the qemu2 driver based on user configuration
	I0503 15:26:33.063452   10934 start.go:297] selected driver: qemu2
	I0503 15:26:33.063461   10934 start.go:901] validating driver "qemu2" against <nil>
	I0503 15:26:33.063468   10934 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0503 15:26:33.065687   10934 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0503 15:26:33.069396   10934 out.go:177] * Automatically selected the socket_vmnet network
	E0503 15:26:33.072593   10934 start_flags.go:464] Found deprecated --enable-default-cni flag, setting --cni=bridge
	I0503 15:26:33.072604   10934 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0503 15:26:33.072644   10934 cni.go:84] Creating CNI manager for "bridge"
	I0503 15:26:33.072648   10934 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0503 15:26:33.072686   10934 start.go:340] cluster config:
	{Name:enable-default-cni-874000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:enable-default-cni-874000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0503 15:26:33.077139   10934 iso.go:125] acquiring lock: {Name:mk0ad6b95f1bb51c6112b561369aaae11c537b54 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0503 15:26:33.084458   10934 out.go:177] * Starting "enable-default-cni-874000" primary control-plane node in "enable-default-cni-874000" cluster
	I0503 15:26:33.088423   10934 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0503 15:26:33.088440   10934 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18793-7269/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-arm64.tar.lz4
	I0503 15:26:33.088450   10934 cache.go:56] Caching tarball of preloaded images
	I0503 15:26:33.088507   10934 preload.go:173] Found /Users/jenkins/minikube-integration/18793-7269/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0503 15:26:33.088512   10934 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
	I0503 15:26:33.088575   10934 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18793-7269/.minikube/profiles/enable-default-cni-874000/config.json ...
	I0503 15:26:33.088585   10934 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18793-7269/.minikube/profiles/enable-default-cni-874000/config.json: {Name:mkd04be79cb834d0665a6d012c0718aaee5e6ec7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0503 15:26:33.088968   10934 start.go:360] acquireMachinesLock for enable-default-cni-874000: {Name:mk75a6a65f97ac1ce21c567594b284ec6b0a9ff6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0503 15:26:33.088998   10934 start.go:364] duration metric: took 23.75µs to acquireMachinesLock for "enable-default-cni-874000"
	I0503 15:26:33.089009   10934 start.go:93] Provisioning new machine with config: &{Name:enable-default-cni-874000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:enable-default-cni-874000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0503 15:26:33.089045   10934 start.go:125] createHost starting for "" (driver="qemu2")
	I0503 15:26:33.093438   10934 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0503 15:26:33.109057   10934 start.go:159] libmachine.API.Create for "enable-default-cni-874000" (driver="qemu2")
	I0503 15:26:33.109078   10934 client.go:168] LocalClient.Create starting
	I0503 15:26:33.109140   10934 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18793-7269/.minikube/certs/ca.pem
	I0503 15:26:33.109169   10934 main.go:141] libmachine: Decoding PEM data...
	I0503 15:26:33.109178   10934 main.go:141] libmachine: Parsing certificate...
	I0503 15:26:33.109219   10934 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18793-7269/.minikube/certs/cert.pem
	I0503 15:26:33.109241   10934 main.go:141] libmachine: Decoding PEM data...
	I0503 15:26:33.109247   10934 main.go:141] libmachine: Parsing certificate...
	I0503 15:26:33.109587   10934 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18793-7269/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18793-7269/.minikube/cache/iso/arm64/minikube-v1.33.0-1714498396-18779-arm64.iso...
	I0503 15:26:33.251755   10934 main.go:141] libmachine: Creating SSH key...
	I0503 15:26:33.359269   10934 main.go:141] libmachine: Creating Disk image...
	I0503 15:26:33.359276   10934 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0503 15:26:33.359454   10934 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/enable-default-cni-874000/disk.qcow2.raw /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/enable-default-cni-874000/disk.qcow2
	I0503 15:26:33.372118   10934 main.go:141] libmachine: STDOUT: 
	I0503 15:26:33.372147   10934 main.go:141] libmachine: STDERR: 
	I0503 15:26:33.372210   10934 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/enable-default-cni-874000/disk.qcow2 +20000M
	I0503 15:26:33.383597   10934 main.go:141] libmachine: STDOUT: Image resized.
	
	I0503 15:26:33.383619   10934 main.go:141] libmachine: STDERR: 
	I0503 15:26:33.383639   10934 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/enable-default-cni-874000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/enable-default-cni-874000/disk.qcow2
	I0503 15:26:33.383644   10934 main.go:141] libmachine: Starting QEMU VM...
	I0503 15:26:33.383670   10934 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/enable-default-cni-874000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18793-7269/.minikube/machines/enable-default-cni-874000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/enable-default-cni-874000/qemu.pid -device virtio-net-pci,netdev=net0,mac=76:60:2e:d7:fa:3d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/enable-default-cni-874000/disk.qcow2
	I0503 15:26:33.385462   10934 main.go:141] libmachine: STDOUT: 
	I0503 15:26:33.385477   10934 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0503 15:26:33.385503   10934 client.go:171] duration metric: took 276.6505ms to LocalClient.Create
	I0503 15:26:35.386177   10934 start.go:128] duration metric: took 2.298890375s to createHost
	I0503 15:26:35.386252   10934 start.go:83] releasing machines lock for "enable-default-cni-874000", held for 2.299027333s
	W0503 15:26:35.386366   10934 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0503 15:26:35.401615   10934 out.go:177] * Deleting "enable-default-cni-874000" in qemu2 ...
	W0503 15:26:35.428889   10934 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0503 15:26:35.429076   10934 start.go:728] Will try again in 5 seconds ...
	I0503 15:26:40.428246   10934 start.go:360] acquireMachinesLock for enable-default-cni-874000: {Name:mk75a6a65f97ac1ce21c567594b284ec6b0a9ff6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0503 15:26:40.428764   10934 start.go:364] duration metric: took 402.542µs to acquireMachinesLock for "enable-default-cni-874000"
	I0503 15:26:40.428835   10934 start.go:93] Provisioning new machine with config: &{Name:enable-default-cni-874000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:enable-default-cni-874000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0503 15:26:40.429175   10934 start.go:125] createHost starting for "" (driver="qemu2")
	I0503 15:26:40.437853   10934 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0503 15:26:40.488598   10934 start.go:159] libmachine.API.Create for "enable-default-cni-874000" (driver="qemu2")
	I0503 15:26:40.488643   10934 client.go:168] LocalClient.Create starting
	I0503 15:26:40.488762   10934 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18793-7269/.minikube/certs/ca.pem
	I0503 15:26:40.488841   10934 main.go:141] libmachine: Decoding PEM data...
	I0503 15:26:40.488860   10934 main.go:141] libmachine: Parsing certificate...
	I0503 15:26:40.488928   10934 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18793-7269/.minikube/certs/cert.pem
	I0503 15:26:40.488980   10934 main.go:141] libmachine: Decoding PEM data...
	I0503 15:26:40.488998   10934 main.go:141] libmachine: Parsing certificate...
	I0503 15:26:40.489562   10934 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18793-7269/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18793-7269/.minikube/cache/iso/arm64/minikube-v1.33.0-1714498396-18779-arm64.iso...
	I0503 15:26:40.642635   10934 main.go:141] libmachine: Creating SSH key...
	I0503 15:26:40.758568   10934 main.go:141] libmachine: Creating Disk image...
	I0503 15:26:40.758579   10934 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0503 15:26:40.758755   10934 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/enable-default-cni-874000/disk.qcow2.raw /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/enable-default-cni-874000/disk.qcow2
	I0503 15:26:40.771274   10934 main.go:141] libmachine: STDOUT: 
	I0503 15:26:40.771301   10934 main.go:141] libmachine: STDERR: 
	I0503 15:26:40.771355   10934 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/enable-default-cni-874000/disk.qcow2 +20000M
	I0503 15:26:40.782682   10934 main.go:141] libmachine: STDOUT: Image resized.
	
	I0503 15:26:40.782699   10934 main.go:141] libmachine: STDERR: 
	I0503 15:26:40.782713   10934 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/enable-default-cni-874000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/enable-default-cni-874000/disk.qcow2
	I0503 15:26:40.782720   10934 main.go:141] libmachine: Starting QEMU VM...
	I0503 15:26:40.782765   10934 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/enable-default-cni-874000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18793-7269/.minikube/machines/enable-default-cni-874000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/enable-default-cni-874000/qemu.pid -device virtio-net-pci,netdev=net0,mac=12:13:b4:6c:38:5d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/enable-default-cni-874000/disk.qcow2
	I0503 15:26:40.784535   10934 main.go:141] libmachine: STDOUT: 
	I0503 15:26:40.784550   10934 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0503 15:26:40.784561   10934 client.go:171] duration metric: took 296.066333ms to LocalClient.Create
	I0503 15:26:42.785800   10934 start.go:128] duration metric: took 2.357743125s to createHost
	I0503 15:26:42.785881   10934 start.go:83] releasing machines lock for "enable-default-cni-874000", held for 2.358246083s
	W0503 15:26:42.786295   10934 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p enable-default-cni-874000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p enable-default-cni-874000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0503 15:26:42.799022   10934 out.go:177] 
	W0503 15:26:42.802091   10934 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0503 15:26:42.802121   10934 out.go:239] * 
	* 
	W0503 15:26:42.804782   10934 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0503 15:26:42.811991   10934 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/enable-default-cni/Start (9.86s)
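
Note that everything up to the VM launch succeeds on each attempt: the SSH key, the qcow2 disk conversion, and the resize all complete, and only the socket_vmnet connection fails. The disk preparation is a plain two-step qemu-img invocation (convert raw to qcow2, then grow it by +20000M), which can be reproduced outside minikube. A sketch with placeholder file names, assuming qemu-img is on PATH as it is on this host:

	// mkdisk: the two qemu-img steps libmachine logs before each VM start.
	package main

	import (
		"fmt"
		"os"
		"os/exec"
	)

	func run(name string, args ...string) error {
		cmd := exec.Command(name, args...)
		cmd.Stdout = os.Stdout // mirror the STDOUT/STDERR echoing in the log
		cmd.Stderr = os.Stderr
		return cmd.Run()
	}

	func main() {
		raw, qcow2 := "disk.qcow2.raw", "disk.qcow2" // placeholder paths
		if err := run("qemu-img", "convert", "-f", "raw", "-O", "qcow2", raw, qcow2); err != nil {
			fmt.Fprintln(os.Stderr, "convert failed:", err)
			os.Exit(1)
		}
		if err := run("qemu-img", "resize", qcow2, "+20000M"); err != nil {
			fmt.Fprintln(os.Stderr, "resize failed:", err)
			os.Exit(1)
		}
	}

That these steps pass in every failing test narrows the problem to the host networking daemon rather than QEMU or the disk tooling.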

TestNetworkPlugins/group/bridge/Start (9.78s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p bridge-874000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p bridge-874000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=qemu2 : exit status 80 (9.77386725s)

-- stdout --
	* [bridge-874000] minikube v1.33.0 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18793
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18793-7269/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18793-7269/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "bridge-874000" primary control-plane node in "bridge-874000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "bridge-874000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0503 15:26:45.115169   11060 out.go:291] Setting OutFile to fd 1 ...
	I0503 15:26:45.115278   11060 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0503 15:26:45.115281   11060 out.go:304] Setting ErrFile to fd 2...
	I0503 15:26:45.115283   11060 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0503 15:26:45.115415   11060 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18793-7269/.minikube/bin
	I0503 15:26:45.116521   11060 out.go:298] Setting JSON to false
	I0503 15:26:45.132781   11060 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":5176,"bootTime":1714770029,"procs":507,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0503 15:26:45.132854   11060 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0503 15:26:45.139834   11060 out.go:177] * [bridge-874000] minikube v1.33.0 on Darwin 14.4.1 (arm64)
	I0503 15:26:45.147764   11060 out.go:177]   - MINIKUBE_LOCATION=18793
	I0503 15:26:45.151794   11060 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18793-7269/kubeconfig
	I0503 15:26:45.147794   11060 notify.go:220] Checking for updates...
	I0503 15:26:45.157731   11060 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0503 15:26:45.160776   11060 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0503 15:26:45.163786   11060 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18793-7269/.minikube
	I0503 15:26:45.166759   11060 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0503 15:26:45.170032   11060 config.go:182] Loaded profile config "multinode-952000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0503 15:26:45.170095   11060 config.go:182] Loaded profile config "stopped-upgrade-139000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0503 15:26:45.170139   11060 driver.go:392] Setting default libvirt URI to qemu:///system
	I0503 15:26:45.174819   11060 out.go:177] * Using the qemu2 driver based on user configuration
	I0503 15:26:45.181756   11060 start.go:297] selected driver: qemu2
	I0503 15:26:45.181765   11060 start.go:901] validating driver "qemu2" against <nil>
	I0503 15:26:45.181772   11060 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0503 15:26:45.183917   11060 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0503 15:26:45.186745   11060 out.go:177] * Automatically selected the socket_vmnet network
	I0503 15:26:45.189807   11060 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0503 15:26:45.189831   11060 cni.go:84] Creating CNI manager for "bridge"
	I0503 15:26:45.189835   11060 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0503 15:26:45.189861   11060 start.go:340] cluster config:
	{Name:bridge-874000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:bridge-874000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0503 15:26:45.194148   11060 iso.go:125] acquiring lock: {Name:mk0ad6b95f1bb51c6112b561369aaae11c537b54 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0503 15:26:45.201734   11060 out.go:177] * Starting "bridge-874000" primary control-plane node in "bridge-874000" cluster
	I0503 15:26:45.205711   11060 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0503 15:26:45.205724   11060 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18793-7269/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-arm64.tar.lz4
	I0503 15:26:45.205731   11060 cache.go:56] Caching tarball of preloaded images
	I0503 15:26:45.205788   11060 preload.go:173] Found /Users/jenkins/minikube-integration/18793-7269/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0503 15:26:45.205793   11060 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
	I0503 15:26:45.205838   11060 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18793-7269/.minikube/profiles/bridge-874000/config.json ...
	I0503 15:26:45.205848   11060 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18793-7269/.minikube/profiles/bridge-874000/config.json: {Name:mk3229ea01967a96c505616a5bd4a2573d71130e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0503 15:26:45.206047   11060 start.go:360] acquireMachinesLock for bridge-874000: {Name:mk75a6a65f97ac1ce21c567594b284ec6b0a9ff6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0503 15:26:45.206078   11060 start.go:364] duration metric: took 25.708µs to acquireMachinesLock for "bridge-874000"
	I0503 15:26:45.206088   11060 start.go:93] Provisioning new machine with config: &{Name:bridge-874000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:bridge-874000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0503 15:26:45.206113   11060 start.go:125] createHost starting for "" (driver="qemu2")
	I0503 15:26:45.213685   11060 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0503 15:26:45.228361   11060 start.go:159] libmachine.API.Create for "bridge-874000" (driver="qemu2")
	I0503 15:26:45.228394   11060 client.go:168] LocalClient.Create starting
	I0503 15:26:45.228468   11060 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18793-7269/.minikube/certs/ca.pem
	I0503 15:26:45.228503   11060 main.go:141] libmachine: Decoding PEM data...
	I0503 15:26:45.228512   11060 main.go:141] libmachine: Parsing certificate...
	I0503 15:26:45.228557   11060 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18793-7269/.minikube/certs/cert.pem
	I0503 15:26:45.228580   11060 main.go:141] libmachine: Decoding PEM data...
	I0503 15:26:45.228586   11060 main.go:141] libmachine: Parsing certificate...
	I0503 15:26:45.228918   11060 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18793-7269/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18793-7269/.minikube/cache/iso/arm64/minikube-v1.33.0-1714498396-18779-arm64.iso...
	I0503 15:26:45.370551   11060 main.go:141] libmachine: Creating SSH key...
	I0503 15:26:45.483552   11060 main.go:141] libmachine: Creating Disk image...
	I0503 15:26:45.483560   11060 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0503 15:26:45.483719   11060 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/bridge-874000/disk.qcow2.raw /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/bridge-874000/disk.qcow2
	I0503 15:26:45.496507   11060 main.go:141] libmachine: STDOUT: 
	I0503 15:26:45.496529   11060 main.go:141] libmachine: STDERR: 
	I0503 15:26:45.496589   11060 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/bridge-874000/disk.qcow2 +20000M
	I0503 15:26:45.508129   11060 main.go:141] libmachine: STDOUT: Image resized.
	
	I0503 15:26:45.508146   11060 main.go:141] libmachine: STDERR: 
	I0503 15:26:45.508171   11060 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/bridge-874000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/bridge-874000/disk.qcow2
	I0503 15:26:45.508176   11060 main.go:141] libmachine: Starting QEMU VM...
	I0503 15:26:45.508206   11060 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/bridge-874000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18793-7269/.minikube/machines/bridge-874000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/bridge-874000/qemu.pid -device virtio-net-pci,netdev=net0,mac=5e:cf:8f:23:a9:18 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/bridge-874000/disk.qcow2
	I0503 15:26:45.509995   11060 main.go:141] libmachine: STDOUT: 
	I0503 15:26:45.510010   11060 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0503 15:26:45.510027   11060 client.go:171] duration metric: took 281.741375ms to LocalClient.Create
	I0503 15:26:47.511395   11060 start.go:128] duration metric: took 2.306124875s to createHost
	I0503 15:26:47.511459   11060 start.go:83] releasing machines lock for "bridge-874000", held for 2.306228625s
	W0503 15:26:47.511501   11060 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0503 15:26:47.522885   11060 out.go:177] * Deleting "bridge-874000" in qemu2 ...
	W0503 15:26:47.537756   11060 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0503 15:26:47.537772   11060 start.go:728] Will try again in 5 seconds ...
	I0503 15:26:52.538466   11060 start.go:360] acquireMachinesLock for bridge-874000: {Name:mk75a6a65f97ac1ce21c567594b284ec6b0a9ff6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0503 15:26:52.538983   11060 start.go:364] duration metric: took 381.625µs to acquireMachinesLock for "bridge-874000"
	I0503 15:26:52.539112   11060 start.go:93] Provisioning new machine with config: &{Name:bridge-874000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:bridge-874000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0503 15:26:52.539516   11060 start.go:125] createHost starting for "" (driver="qemu2")
	I0503 15:26:52.545146   11060 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0503 15:26:52.593806   11060 start.go:159] libmachine.API.Create for "bridge-874000" (driver="qemu2")
	I0503 15:26:52.593990   11060 client.go:168] LocalClient.Create starting
	I0503 15:26:52.594103   11060 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18793-7269/.minikube/certs/ca.pem
	I0503 15:26:52.594159   11060 main.go:141] libmachine: Decoding PEM data...
	I0503 15:26:52.594176   11060 main.go:141] libmachine: Parsing certificate...
	I0503 15:26:52.594246   11060 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18793-7269/.minikube/certs/cert.pem
	I0503 15:26:52.594291   11060 main.go:141] libmachine: Decoding PEM data...
	I0503 15:26:52.594305   11060 main.go:141] libmachine: Parsing certificate...
	I0503 15:26:52.594870   11060 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18793-7269/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18793-7269/.minikube/cache/iso/arm64/minikube-v1.33.0-1714498396-18779-arm64.iso...
	I0503 15:26:52.746620   11060 main.go:141] libmachine: Creating SSH key...
	I0503 15:26:52.788936   11060 main.go:141] libmachine: Creating Disk image...
	I0503 15:26:52.788944   11060 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0503 15:26:52.789126   11060 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/bridge-874000/disk.qcow2.raw /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/bridge-874000/disk.qcow2
	I0503 15:26:52.802034   11060 main.go:141] libmachine: STDOUT: 
	I0503 15:26:52.802063   11060 main.go:141] libmachine: STDERR: 
	I0503 15:26:52.802117   11060 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/bridge-874000/disk.qcow2 +20000M
	I0503 15:26:52.813560   11060 main.go:141] libmachine: STDOUT: Image resized.
	
	I0503 15:26:52.813581   11060 main.go:141] libmachine: STDERR: 
	I0503 15:26:52.813600   11060 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/bridge-874000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/bridge-874000/disk.qcow2
	I0503 15:26:52.813606   11060 main.go:141] libmachine: Starting QEMU VM...
	I0503 15:26:52.813658   11060 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/bridge-874000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18793-7269/.minikube/machines/bridge-874000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/bridge-874000/qemu.pid -device virtio-net-pci,netdev=net0,mac=92:de:9d:73:2a:ca -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/bridge-874000/disk.qcow2
	I0503 15:26:52.815362   11060 main.go:141] libmachine: STDOUT: 
	I0503 15:26:52.815388   11060 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0503 15:26:52.815401   11060 client.go:171] duration metric: took 221.460583ms to LocalClient.Create
	I0503 15:26:54.817127   11060 start.go:128] duration metric: took 2.278086709s to createHost
	I0503 15:26:54.817211   11060 start.go:83] releasing machines lock for "bridge-874000", held for 2.278753583s
	W0503 15:26:54.817601   11060 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p bridge-874000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p bridge-874000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0503 15:26:54.831478   11060 out.go:177] 
	W0503 15:26:54.834435   11060 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0503 15:26:54.834461   11060 out.go:239] * 
	* 
	W0503 15:26:54.835933   11060 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0503 15:26:54.844436   11060 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/bridge/Start (9.78s)
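
The "libmachine: executing:" lines above show the actual failing step: minikube launches qemu-system-aarch64 through /opt/socket_vmnet/bin/socket_vmnet_client, which first connects to the Unix socket and only then hands the connected descriptor (fd=3 in the -netdev argument) to the wrapped command. A hedged smoke test that takes qemu and minikube out of the picture entirely; socket_vmnet_client's documented invocation is `socket_vmnet_client SOCKET COMMAND [ARGS...]`, and the use of /usr/bin/true as the wrapped command here is an illustrative assumption:

	/opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet /usr/bin/true

On this agent that command should reproduce the same `Failed to connect to "/var/run/socket_vmnet": Connection refused` error directly, confirming the fault lies with the host networking daemon rather than with the bridge CNI configuration under test.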

TestNetworkPlugins/group/kubenet/Start (9.93s)

=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p kubenet-874000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubenet-874000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=qemu2 : exit status 80 (9.9256645s)

-- stdout --
	* [kubenet-874000] minikube v1.33.0 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18793
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18793-7269/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18793-7269/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "kubenet-874000" primary control-plane node in "kubenet-874000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kubenet-874000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0503 15:26:57.090195   11185 out.go:291] Setting OutFile to fd 1 ...
	I0503 15:26:57.090327   11185 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0503 15:26:57.090330   11185 out.go:304] Setting ErrFile to fd 2...
	I0503 15:26:57.090332   11185 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0503 15:26:57.090480   11185 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18793-7269/.minikube/bin
	I0503 15:26:57.091565   11185 out.go:298] Setting JSON to false
	I0503 15:26:57.108636   11185 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":5188,"bootTime":1714770029,"procs":510,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0503 15:26:57.108710   11185 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0503 15:26:57.114964   11185 out.go:177] * [kubenet-874000] minikube v1.33.0 on Darwin 14.4.1 (arm64)
	I0503 15:26:57.122823   11185 out.go:177]   - MINIKUBE_LOCATION=18793
	I0503 15:26:57.127848   11185 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18793-7269/kubeconfig
	I0503 15:26:57.122892   11185 notify.go:220] Checking for updates...
	I0503 15:26:57.130838   11185 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0503 15:26:57.133804   11185 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0503 15:26:57.136885   11185 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18793-7269/.minikube
	I0503 15:26:57.139853   11185 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0503 15:26:57.143227   11185 config.go:182] Loaded profile config "multinode-952000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0503 15:26:57.143301   11185 config.go:182] Loaded profile config "stopped-upgrade-139000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0503 15:26:57.143345   11185 driver.go:392] Setting default libvirt URI to qemu:///system
	I0503 15:26:57.147804   11185 out.go:177] * Using the qemu2 driver based on user configuration
	I0503 15:26:57.154780   11185 start.go:297] selected driver: qemu2
	I0503 15:26:57.154788   11185 start.go:901] validating driver "qemu2" against <nil>
	I0503 15:26:57.154794   11185 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0503 15:26:57.157103   11185 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0503 15:26:57.160848   11185 out.go:177] * Automatically selected the socket_vmnet network
	I0503 15:26:57.163921   11185 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0503 15:26:57.163955   11185 cni.go:80] network plugin configured as "kubenet", returning disabled
	I0503 15:26:57.163983   11185 start.go:340] cluster config:
	{Name:kubenet-874000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:kubenet-874000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0503 15:26:57.168698   11185 iso.go:125] acquiring lock: {Name:mk0ad6b95f1bb51c6112b561369aaae11c537b54 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0503 15:26:57.174822   11185 out.go:177] * Starting "kubenet-874000" primary control-plane node in "kubenet-874000" cluster
	I0503 15:26:57.178705   11185 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0503 15:26:57.178720   11185 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18793-7269/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-arm64.tar.lz4
	I0503 15:26:57.178730   11185 cache.go:56] Caching tarball of preloaded images
	I0503 15:26:57.178801   11185 preload.go:173] Found /Users/jenkins/minikube-integration/18793-7269/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0503 15:26:57.178806   11185 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
	I0503 15:26:57.178867   11185 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18793-7269/.minikube/profiles/kubenet-874000/config.json ...
	I0503 15:26:57.178878   11185 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18793-7269/.minikube/profiles/kubenet-874000/config.json: {Name:mkfa44a0ae9da193b341106b0cce5ce4c82d1502 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0503 15:26:57.179087   11185 start.go:360] acquireMachinesLock for kubenet-874000: {Name:mk75a6a65f97ac1ce21c567594b284ec6b0a9ff6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0503 15:26:57.179118   11185 start.go:364] duration metric: took 25.5µs to acquireMachinesLock for "kubenet-874000"
	I0503 15:26:57.179129   11185 start.go:93] Provisioning new machine with config: &{Name:kubenet-874000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:kubenet-874000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0503 15:26:57.179156   11185 start.go:125] createHost starting for "" (driver="qemu2")
	I0503 15:26:57.185301   11185 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0503 15:26:57.201845   11185 start.go:159] libmachine.API.Create for "kubenet-874000" (driver="qemu2")
	I0503 15:26:57.201879   11185 client.go:168] LocalClient.Create starting
	I0503 15:26:57.201947   11185 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18793-7269/.minikube/certs/ca.pem
	I0503 15:26:57.201977   11185 main.go:141] libmachine: Decoding PEM data...
	I0503 15:26:57.201986   11185 main.go:141] libmachine: Parsing certificate...
	I0503 15:26:57.202028   11185 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18793-7269/.minikube/certs/cert.pem
	I0503 15:26:57.202050   11185 main.go:141] libmachine: Decoding PEM data...
	I0503 15:26:57.202056   11185 main.go:141] libmachine: Parsing certificate...
	I0503 15:26:57.202386   11185 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18793-7269/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18793-7269/.minikube/cache/iso/arm64/minikube-v1.33.0-1714498396-18779-arm64.iso...
	I0503 15:26:57.344660   11185 main.go:141] libmachine: Creating SSH key...
	I0503 15:26:57.421116   11185 main.go:141] libmachine: Creating Disk image...
	I0503 15:26:57.421122   11185 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0503 15:26:57.421296   11185 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/kubenet-874000/disk.qcow2.raw /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/kubenet-874000/disk.qcow2
	I0503 15:26:57.433678   11185 main.go:141] libmachine: STDOUT: 
	I0503 15:26:57.433700   11185 main.go:141] libmachine: STDERR: 
	I0503 15:26:57.433752   11185 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/kubenet-874000/disk.qcow2 +20000M
	I0503 15:26:57.444959   11185 main.go:141] libmachine: STDOUT: Image resized.
	
	I0503 15:26:57.444978   11185 main.go:141] libmachine: STDERR: 
	I0503 15:26:57.445000   11185 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/kubenet-874000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/kubenet-874000/disk.qcow2
	I0503 15:26:57.445007   11185 main.go:141] libmachine: Starting QEMU VM...
	I0503 15:26:57.445040   11185 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/kubenet-874000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18793-7269/.minikube/machines/kubenet-874000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/kubenet-874000/qemu.pid -device virtio-net-pci,netdev=net0,mac=f6:b8:e5:03:b3:72 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/kubenet-874000/disk.qcow2
	I0503 15:26:57.446854   11185 main.go:141] libmachine: STDOUT: 
	I0503 15:26:57.446869   11185 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0503 15:26:57.446894   11185 client.go:171] duration metric: took 245.052708ms to LocalClient.Create
	I0503 15:26:59.448838   11185 start.go:128] duration metric: took 2.270042792s to createHost
	I0503 15:26:59.448940   11185 start.go:83] releasing machines lock for "kubenet-874000", held for 2.270235083s
	W0503 15:26:59.449018   11185 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0503 15:26:59.463444   11185 out.go:177] * Deleting "kubenet-874000" in qemu2 ...
	W0503 15:26:59.487563   11185 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0503 15:26:59.487591   11185 start.go:728] Will try again in 5 seconds ...
	I0503 15:27:04.489072   11185 start.go:360] acquireMachinesLock for kubenet-874000: {Name:mk75a6a65f97ac1ce21c567594b284ec6b0a9ff6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0503 15:27:04.489595   11185 start.go:364] duration metric: took 429.375µs to acquireMachinesLock for "kubenet-874000"
	I0503 15:27:04.489686   11185 start.go:93] Provisioning new machine with config: &{Name:kubenet-874000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:kubenet-874000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0503 15:27:04.489971   11185 start.go:125] createHost starting for "" (driver="qemu2")
	I0503 15:27:04.499552   11185 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0503 15:27:04.547170   11185 start.go:159] libmachine.API.Create for "kubenet-874000" (driver="qemu2")
	I0503 15:27:04.547237   11185 client.go:168] LocalClient.Create starting
	I0503 15:27:04.547392   11185 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18793-7269/.minikube/certs/ca.pem
	I0503 15:27:04.547461   11185 main.go:141] libmachine: Decoding PEM data...
	I0503 15:27:04.547475   11185 main.go:141] libmachine: Parsing certificate...
	I0503 15:27:04.547541   11185 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18793-7269/.minikube/certs/cert.pem
	I0503 15:27:04.547589   11185 main.go:141] libmachine: Decoding PEM data...
	I0503 15:27:04.547604   11185 main.go:141] libmachine: Parsing certificate...
	I0503 15:27:04.548269   11185 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18793-7269/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18793-7269/.minikube/cache/iso/arm64/minikube-v1.33.0-1714498396-18779-arm64.iso...
	I0503 15:27:04.701040   11185 main.go:141] libmachine: Creating SSH key...
	I0503 15:27:04.916585   11185 main.go:141] libmachine: Creating Disk image...
	I0503 15:27:04.916595   11185 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0503 15:27:04.916811   11185 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/kubenet-874000/disk.qcow2.raw /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/kubenet-874000/disk.qcow2
	I0503 15:27:04.929939   11185 main.go:141] libmachine: STDOUT: 
	I0503 15:27:04.929965   11185 main.go:141] libmachine: STDERR: 
	I0503 15:27:04.930030   11185 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/kubenet-874000/disk.qcow2 +20000M
	I0503 15:27:04.941399   11185 main.go:141] libmachine: STDOUT: Image resized.
	
	I0503 15:27:04.941419   11185 main.go:141] libmachine: STDERR: 
	I0503 15:27:04.941431   11185 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/kubenet-874000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/kubenet-874000/disk.qcow2
	I0503 15:27:04.941436   11185 main.go:141] libmachine: Starting QEMU VM...
	I0503 15:27:04.941468   11185 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/kubenet-874000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18793-7269/.minikube/machines/kubenet-874000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/kubenet-874000/qemu.pid -device virtio-net-pci,netdev=net0,mac=56:06:78:08:42:09 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/kubenet-874000/disk.qcow2
	I0503 15:27:04.943251   11185 main.go:141] libmachine: STDOUT: 
	I0503 15:27:04.943274   11185 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0503 15:27:04.943291   11185 client.go:171] duration metric: took 396.099917ms to LocalClient.Create
	I0503 15:27:06.945349   11185 start.go:128] duration metric: took 2.455631041s to createHost
	I0503 15:27:06.945462   11185 start.go:83] releasing machines lock for "kubenet-874000", held for 2.456127417s
	W0503 15:27:06.945804   11185 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p kubenet-874000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kubenet-874000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0503 15:27:06.955016   11185 out.go:177] 
	W0503 15:27:06.959011   11185 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0503 15:27:06.959061   11185 out.go:239] * 
	* 
	W0503 15:27:06.961255   11185 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0503 15:27:06.969081   11185 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/kubenet/Start (9.93s)
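
Once the daemon is back, the half-created profile can be removed and the test command re-run by hand. The delete command is the one the log itself suggests, and the start command is copied verbatim from net_test.go:112 above; the foreground daemon invocation is only a sketch of the usual socket_vmnet startup, with an assumed gateway address:

	sudo /opt/socket_vmnet/bin/socket_vmnet --vmnet-gateway=192.168.105.1 /var/run/socket_vmnet &
	out/minikube-darwin-arm64 delete -p kubenet-874000
	out/minikube-darwin-arm64 start -p kubenet-874000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=qemu2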

TestNetworkPlugins/group/custom-flannel/Start (9.87s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p custom-flannel-874000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p custom-flannel-874000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=qemu2 : exit status 80 (9.872174291s)

-- stdout --
	* [custom-flannel-874000] minikube v1.33.0 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18793
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18793-7269/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18793-7269/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "custom-flannel-874000" primary control-plane node in "custom-flannel-874000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "custom-flannel-874000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0503 15:27:09.223075   11308 out.go:291] Setting OutFile to fd 1 ...
	I0503 15:27:09.223207   11308 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0503 15:27:09.223210   11308 out.go:304] Setting ErrFile to fd 2...
	I0503 15:27:09.223213   11308 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0503 15:27:09.223334   11308 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18793-7269/.minikube/bin
	I0503 15:27:09.224434   11308 out.go:298] Setting JSON to false
	I0503 15:27:09.240600   11308 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":5200,"bootTime":1714770029,"procs":511,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0503 15:27:09.240693   11308 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0503 15:27:09.246831   11308 out.go:177] * [custom-flannel-874000] minikube v1.33.0 on Darwin 14.4.1 (arm64)
	I0503 15:27:09.254995   11308 out.go:177]   - MINIKUBE_LOCATION=18793
	I0503 15:27:09.258958   11308 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18793-7269/kubeconfig
	I0503 15:27:09.255049   11308 notify.go:220] Checking for updates...
	I0503 15:27:09.264950   11308 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0503 15:27:09.267846   11308 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0503 15:27:09.270955   11308 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18793-7269/.minikube
	I0503 15:27:09.274006   11308 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0503 15:27:09.275906   11308 config.go:182] Loaded profile config "multinode-952000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0503 15:27:09.275970   11308 config.go:182] Loaded profile config "stopped-upgrade-139000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0503 15:27:09.276013   11308 driver.go:392] Setting default libvirt URI to qemu:///system
	I0503 15:27:09.279951   11308 out.go:177] * Using the qemu2 driver based on user configuration
	I0503 15:27:09.286796   11308 start.go:297] selected driver: qemu2
	I0503 15:27:09.286802   11308 start.go:901] validating driver "qemu2" against <nil>
	I0503 15:27:09.286808   11308 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0503 15:27:09.289001   11308 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0503 15:27:09.291878   11308 out.go:177] * Automatically selected the socket_vmnet network
	I0503 15:27:09.295113   11308 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0503 15:27:09.295162   11308 cni.go:84] Creating CNI manager for "testdata/kube-flannel.yaml"
	I0503 15:27:09.295175   11308 start_flags.go:319] Found "testdata/kube-flannel.yaml" CNI - setting NetworkPlugin=cni
	I0503 15:27:09.295199   11308 start.go:340] cluster config:
	{Name:custom-flannel-874000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:custom-flannel-874000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0503 15:27:09.299437   11308 iso.go:125] acquiring lock: {Name:mk0ad6b95f1bb51c6112b561369aaae11c537b54 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0503 15:27:09.313980   11308 out.go:177] * Starting "custom-flannel-874000" primary control-plane node in "custom-flannel-874000" cluster
	I0503 15:27:09.317992   11308 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0503 15:27:09.318019   11308 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18793-7269/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-arm64.tar.lz4
	I0503 15:27:09.318031   11308 cache.go:56] Caching tarball of preloaded images
	I0503 15:27:09.318113   11308 preload.go:173] Found /Users/jenkins/minikube-integration/18793-7269/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0503 15:27:09.318124   11308 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
	I0503 15:27:09.318170   11308 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18793-7269/.minikube/profiles/custom-flannel-874000/config.json ...
	I0503 15:27:09.318182   11308 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18793-7269/.minikube/profiles/custom-flannel-874000/config.json: {Name:mkbd072c996de96ba1b3eb076cd25237604866ab Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0503 15:27:09.318433   11308 start.go:360] acquireMachinesLock for custom-flannel-874000: {Name:mk75a6a65f97ac1ce21c567594b284ec6b0a9ff6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0503 15:27:09.318465   11308 start.go:364] duration metric: took 25.709µs to acquireMachinesLock for "custom-flannel-874000"
	I0503 15:27:09.318476   11308 start.go:93] Provisioning new machine with config: &{Name:custom-flannel-874000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:custom-flannel-874000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0503 15:27:09.318505   11308 start.go:125] createHost starting for "" (driver="qemu2")
	I0503 15:27:09.325937   11308 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0503 15:27:09.342314   11308 start.go:159] libmachine.API.Create for "custom-flannel-874000" (driver="qemu2")
	I0503 15:27:09.342354   11308 client.go:168] LocalClient.Create starting
	I0503 15:27:09.342430   11308 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18793-7269/.minikube/certs/ca.pem
	I0503 15:27:09.342466   11308 main.go:141] libmachine: Decoding PEM data...
	I0503 15:27:09.342473   11308 main.go:141] libmachine: Parsing certificate...
	I0503 15:27:09.342515   11308 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18793-7269/.minikube/certs/cert.pem
	I0503 15:27:09.342539   11308 main.go:141] libmachine: Decoding PEM data...
	I0503 15:27:09.342544   11308 main.go:141] libmachine: Parsing certificate...
	I0503 15:27:09.342899   11308 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18793-7269/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18793-7269/.minikube/cache/iso/arm64/minikube-v1.33.0-1714498396-18779-arm64.iso...
	I0503 15:27:09.488057   11308 main.go:141] libmachine: Creating SSH key...
	I0503 15:27:09.595187   11308 main.go:141] libmachine: Creating Disk image...
	I0503 15:27:09.595206   11308 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0503 15:27:09.595405   11308 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/custom-flannel-874000/disk.qcow2.raw /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/custom-flannel-874000/disk.qcow2
	I0503 15:27:09.608716   11308 main.go:141] libmachine: STDOUT: 
	I0503 15:27:09.608742   11308 main.go:141] libmachine: STDERR: 
	I0503 15:27:09.608818   11308 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/custom-flannel-874000/disk.qcow2 +20000M
	I0503 15:27:09.621363   11308 main.go:141] libmachine: STDOUT: Image resized.
	
	I0503 15:27:09.621388   11308 main.go:141] libmachine: STDERR: 
	I0503 15:27:09.621418   11308 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/custom-flannel-874000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/custom-flannel-874000/disk.qcow2
	I0503 15:27:09.621424   11308 main.go:141] libmachine: Starting QEMU VM...
	I0503 15:27:09.621454   11308 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/custom-flannel-874000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18793-7269/.minikube/machines/custom-flannel-874000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/custom-flannel-874000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ba:6a:af:a1:3b:a1 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/custom-flannel-874000/disk.qcow2
	I0503 15:27:09.623553   11308 main.go:141] libmachine: STDOUT: 
	I0503 15:27:09.623576   11308 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0503 15:27:09.623595   11308 client.go:171] duration metric: took 281.26425ms to LocalClient.Create
	I0503 15:27:11.625634   11308 start.go:128] duration metric: took 2.307325667s to createHost
	I0503 15:27:11.625720   11308 start.go:83] releasing machines lock for "custom-flannel-874000", held for 2.307478416s
	W0503 15:27:11.625783   11308 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0503 15:27:11.637367   11308 out.go:177] * Deleting "custom-flannel-874000" in qemu2 ...
	W0503 15:27:11.667845   11308 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0503 15:27:11.667870   11308 start.go:728] Will try again in 5 seconds ...
	I0503 15:27:16.669645   11308 start.go:360] acquireMachinesLock for custom-flannel-874000: {Name:mk75a6a65f97ac1ce21c567594b284ec6b0a9ff6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0503 15:27:16.670087   11308 start.go:364] duration metric: took 307.666µs to acquireMachinesLock for "custom-flannel-874000"
	I0503 15:27:16.670175   11308 start.go:93] Provisioning new machine with config: &{Name:custom-flannel-874000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:custom-flannel-874000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0503 15:27:16.670389   11308 start.go:125] createHost starting for "" (driver="qemu2")
	I0503 15:27:16.675969   11308 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0503 15:27:16.719672   11308 start.go:159] libmachine.API.Create for "custom-flannel-874000" (driver="qemu2")
	I0503 15:27:16.719726   11308 client.go:168] LocalClient.Create starting
	I0503 15:27:16.719885   11308 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18793-7269/.minikube/certs/ca.pem
	I0503 15:27:16.719959   11308 main.go:141] libmachine: Decoding PEM data...
	I0503 15:27:16.719980   11308 main.go:141] libmachine: Parsing certificate...
	I0503 15:27:16.720054   11308 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18793-7269/.minikube/certs/cert.pem
	I0503 15:27:16.720105   11308 main.go:141] libmachine: Decoding PEM data...
	I0503 15:27:16.720119   11308 main.go:141] libmachine: Parsing certificate...
	I0503 15:27:16.720904   11308 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18793-7269/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18793-7269/.minikube/cache/iso/arm64/minikube-v1.33.0-1714498396-18779-arm64.iso...
	I0503 15:27:16.874854   11308 main.go:141] libmachine: Creating SSH key...
	I0503 15:27:16.993047   11308 main.go:141] libmachine: Creating Disk image...
	I0503 15:27:16.993055   11308 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0503 15:27:16.993222   11308 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/custom-flannel-874000/disk.qcow2.raw /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/custom-flannel-874000/disk.qcow2
	I0503 15:27:17.005665   11308 main.go:141] libmachine: STDOUT: 
	I0503 15:27:17.005685   11308 main.go:141] libmachine: STDERR: 
	I0503 15:27:17.005746   11308 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/custom-flannel-874000/disk.qcow2 +20000M
	I0503 15:27:17.017045   11308 main.go:141] libmachine: STDOUT: Image resized.
	
	I0503 15:27:17.017064   11308 main.go:141] libmachine: STDERR: 
	I0503 15:27:17.017078   11308 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/custom-flannel-874000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/custom-flannel-874000/disk.qcow2
	I0503 15:27:17.017082   11308 main.go:141] libmachine: Starting QEMU VM...
	I0503 15:27:17.017113   11308 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/custom-flannel-874000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18793-7269/.minikube/machines/custom-flannel-874000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/custom-flannel-874000/qemu.pid -device virtio-net-pci,netdev=net0,mac=7e:21:65:99:fc:f3 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/custom-flannel-874000/disk.qcow2
	I0503 15:27:17.018847   11308 main.go:141] libmachine: STDOUT: 
	I0503 15:27:17.018861   11308 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0503 15:27:17.018873   11308 client.go:171] duration metric: took 299.164583ms to LocalClient.Create
	I0503 15:27:19.020916   11308 start.go:128] duration metric: took 2.350667584s to createHost
	I0503 15:27:19.021052   11308 start.go:83] releasing machines lock for "custom-flannel-874000", held for 2.351062792s
	W0503 15:27:19.021375   11308 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p custom-flannel-874000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p custom-flannel-874000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0503 15:27:19.030859   11308 out.go:177] 
	W0503 15:27:19.034993   11308 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0503 15:27:19.035018   11308 out.go:239] * 
	* 
	W0503 15:27:19.036495   11308 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0503 15:27:19.048858   11308 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/custom-flannel/Start (9.87s)
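
The failure mode above repeats across this whole group: socket_vmnet_client cannot reach the Unix socket at /var/run/socket_vmnet ("Connection refused"), so QEMU is never launched and minikube gives up with GUEST_PROVISION (exit status 80) after one retry. A quick triage step on the build agent is to check whether anything is actually serving that socket. This is a diagnostic sketch, not part of the test run; the launchd label assumes socket_vmnet was installed with the plist shipped in the lima-vm/socket_vmnet repo:

	# Does the socket exist, and is a daemon holding it?
	ls -l /var/run/socket_vmnet
	ps aux | grep '[s]ocket_vmnet'
	# If installed via the project's launchd plist, inspect the service state:
	sudo launchctl print system/io.github.lima-vm.socket_vmnet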

TestNetworkPlugins/group/calico/Start (9.93s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p calico-874000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p calico-874000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=qemu2 : exit status 80 (9.929331334s)

-- stdout --
	* [calico-874000] minikube v1.33.0 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18793
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18793-7269/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18793-7269/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "calico-874000" primary control-plane node in "calico-874000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "calico-874000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0503 15:27:21.516244   11446 out.go:291] Setting OutFile to fd 1 ...
	I0503 15:27:21.516390   11446 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0503 15:27:21.516394   11446 out.go:304] Setting ErrFile to fd 2...
	I0503 15:27:21.516396   11446 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0503 15:27:21.516540   11446 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18793-7269/.minikube/bin
	I0503 15:27:21.517590   11446 out.go:298] Setting JSON to false
	I0503 15:27:21.534935   11446 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":5212,"bootTime":1714770029,"procs":512,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0503 15:27:21.535000   11446 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0503 15:27:21.540273   11446 out.go:177] * [calico-874000] minikube v1.33.0 on Darwin 14.4.1 (arm64)
	I0503 15:27:21.552233   11446 out.go:177]   - MINIKUBE_LOCATION=18793
	I0503 15:27:21.548186   11446 notify.go:220] Checking for updates...
	I0503 15:27:21.558218   11446 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18793-7269/kubeconfig
	I0503 15:27:21.562045   11446 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0503 15:27:21.566188   11446 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0503 15:27:21.569259   11446 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18793-7269/.minikube
	I0503 15:27:21.570687   11446 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0503 15:27:21.574590   11446 config.go:182] Loaded profile config "multinode-952000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0503 15:27:21.574654   11446 config.go:182] Loaded profile config "stopped-upgrade-139000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0503 15:27:21.574693   11446 driver.go:392] Setting default libvirt URI to qemu:///system
	I0503 15:27:21.579195   11446 out.go:177] * Using the qemu2 driver based on user configuration
	I0503 15:27:21.584223   11446 start.go:297] selected driver: qemu2
	I0503 15:27:21.584228   11446 start.go:901] validating driver "qemu2" against <nil>
	I0503 15:27:21.584233   11446 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0503 15:27:21.586423   11446 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0503 15:27:21.589204   11446 out.go:177] * Automatically selected the socket_vmnet network
	I0503 15:27:21.592314   11446 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0503 15:27:21.592348   11446 cni.go:84] Creating CNI manager for "calico"
	I0503 15:27:21.592352   11446 start_flags.go:319] Found "Calico" CNI - setting NetworkPlugin=cni
	I0503 15:27:21.592389   11446 start.go:340] cluster config:
	{Name:calico-874000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:calico-874000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0503 15:27:21.596745   11446 iso.go:125] acquiring lock: {Name:mk0ad6b95f1bb51c6112b561369aaae11c537b54 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0503 15:27:21.604254   11446 out.go:177] * Starting "calico-874000" primary control-plane node in "calico-874000" cluster
	I0503 15:27:21.608103   11446 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0503 15:27:21.608120   11446 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18793-7269/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-arm64.tar.lz4
	I0503 15:27:21.608127   11446 cache.go:56] Caching tarball of preloaded images
	I0503 15:27:21.608185   11446 preload.go:173] Found /Users/jenkins/minikube-integration/18793-7269/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0503 15:27:21.608189   11446 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
	I0503 15:27:21.608231   11446 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18793-7269/.minikube/profiles/calico-874000/config.json ...
	I0503 15:27:21.608241   11446 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18793-7269/.minikube/profiles/calico-874000/config.json: {Name:mkde7a1369c28b220a148312c6c691c6ab68fbfa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0503 15:27:21.608633   11446 start.go:360] acquireMachinesLock for calico-874000: {Name:mk75a6a65f97ac1ce21c567594b284ec6b0a9ff6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0503 15:27:21.608664   11446 start.go:364] duration metric: took 24.875µs to acquireMachinesLock for "calico-874000"
	I0503 15:27:21.608677   11446 start.go:93] Provisioning new machine with config: &{Name:calico-874000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:calico-874000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0503 15:27:21.608707   11446 start.go:125] createHost starting for "" (driver="qemu2")
	I0503 15:27:21.616237   11446 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0503 15:27:21.631033   11446 start.go:159] libmachine.API.Create for "calico-874000" (driver="qemu2")
	I0503 15:27:21.631053   11446 client.go:168] LocalClient.Create starting
	I0503 15:27:21.631112   11446 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18793-7269/.minikube/certs/ca.pem
	I0503 15:27:21.631143   11446 main.go:141] libmachine: Decoding PEM data...
	I0503 15:27:21.631153   11446 main.go:141] libmachine: Parsing certificate...
	I0503 15:27:21.631189   11446 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18793-7269/.minikube/certs/cert.pem
	I0503 15:27:21.631210   11446 main.go:141] libmachine: Decoding PEM data...
	I0503 15:27:21.631216   11446 main.go:141] libmachine: Parsing certificate...
	I0503 15:27:21.631558   11446 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18793-7269/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18793-7269/.minikube/cache/iso/arm64/minikube-v1.33.0-1714498396-18779-arm64.iso...
	I0503 15:27:21.775137   11446 main.go:141] libmachine: Creating SSH key...
	I0503 15:27:21.970274   11446 main.go:141] libmachine: Creating Disk image...
	I0503 15:27:21.970284   11446 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0503 15:27:21.970835   11446 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/calico-874000/disk.qcow2.raw /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/calico-874000/disk.qcow2
	I0503 15:27:21.983661   11446 main.go:141] libmachine: STDOUT: 
	I0503 15:27:21.983684   11446 main.go:141] libmachine: STDERR: 
	I0503 15:27:21.983733   11446 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/calico-874000/disk.qcow2 +20000M
	I0503 15:27:21.994736   11446 main.go:141] libmachine: STDOUT: Image resized.
	
	I0503 15:27:21.994760   11446 main.go:141] libmachine: STDERR: 
	I0503 15:27:21.994776   11446 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/calico-874000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/calico-874000/disk.qcow2
	I0503 15:27:21.994786   11446 main.go:141] libmachine: Starting QEMU VM...
	I0503 15:27:21.994818   11446 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/calico-874000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18793-7269/.minikube/machines/calico-874000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/calico-874000/qemu.pid -device virtio-net-pci,netdev=net0,mac=7a:67:7b:aa:89:d7 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/calico-874000/disk.qcow2
	I0503 15:27:21.996690   11446 main.go:141] libmachine: STDOUT: 
	I0503 15:27:21.996705   11446 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0503 15:27:21.996724   11446 client.go:171] duration metric: took 365.688666ms to LocalClient.Create
	I0503 15:27:23.998834   11446 start.go:128] duration metric: took 2.390238833s to createHost
	I0503 15:27:23.998916   11446 start.go:83] releasing machines lock for "calico-874000", held for 2.390385959s
	W0503 15:27:23.999050   11446 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0503 15:27:24.010341   11446 out.go:177] * Deleting "calico-874000" in qemu2 ...
	W0503 15:27:24.041436   11446 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0503 15:27:24.041473   11446 start.go:728] Will try again in 5 seconds ...
	I0503 15:27:29.043481   11446 start.go:360] acquireMachinesLock for calico-874000: {Name:mk75a6a65f97ac1ce21c567594b284ec6b0a9ff6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0503 15:27:29.043899   11446 start.go:364] duration metric: took 295.667µs to acquireMachinesLock for "calico-874000"
	I0503 15:27:29.044022   11446 start.go:93] Provisioning new machine with config: &{Name:calico-874000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:calico-874000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0503 15:27:29.044245   11446 start.go:125] createHost starting for "" (driver="qemu2")
	I0503 15:27:29.051703   11446 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0503 15:27:29.091352   11446 start.go:159] libmachine.API.Create for "calico-874000" (driver="qemu2")
	I0503 15:27:29.091412   11446 client.go:168] LocalClient.Create starting
	I0503 15:27:29.091524   11446 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18793-7269/.minikube/certs/ca.pem
	I0503 15:27:29.091588   11446 main.go:141] libmachine: Decoding PEM data...
	I0503 15:27:29.091604   11446 main.go:141] libmachine: Parsing certificate...
	I0503 15:27:29.091661   11446 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18793-7269/.minikube/certs/cert.pem
	I0503 15:27:29.091698   11446 main.go:141] libmachine: Decoding PEM data...
	I0503 15:27:29.091708   11446 main.go:141] libmachine: Parsing certificate...
	I0503 15:27:29.092197   11446 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18793-7269/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18793-7269/.minikube/cache/iso/arm64/minikube-v1.33.0-1714498396-18779-arm64.iso...
	I0503 15:27:29.243883   11446 main.go:141] libmachine: Creating SSH key...
	I0503 15:27:29.346061   11446 main.go:141] libmachine: Creating Disk image...
	I0503 15:27:29.346072   11446 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0503 15:27:29.346245   11446 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/calico-874000/disk.qcow2.raw /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/calico-874000/disk.qcow2
	I0503 15:27:29.358717   11446 main.go:141] libmachine: STDOUT: 
	I0503 15:27:29.358742   11446 main.go:141] libmachine: STDERR: 
	I0503 15:27:29.358798   11446 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/calico-874000/disk.qcow2 +20000M
	I0503 15:27:29.370049   11446 main.go:141] libmachine: STDOUT: Image resized.
	
	I0503 15:27:29.370072   11446 main.go:141] libmachine: STDERR: 
	I0503 15:27:29.370085   11446 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/calico-874000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/calico-874000/disk.qcow2
	I0503 15:27:29.370089   11446 main.go:141] libmachine: Starting QEMU VM...
	I0503 15:27:29.370124   11446 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/calico-874000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18793-7269/.minikube/machines/calico-874000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/calico-874000/qemu.pid -device virtio-net-pci,netdev=net0,mac=aa:a4:0a:65:23:39 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/calico-874000/disk.qcow2
	I0503 15:27:29.371923   11446 main.go:141] libmachine: STDOUT: 
	I0503 15:27:29.371946   11446 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0503 15:27:29.371958   11446 client.go:171] duration metric: took 280.546833ms to LocalClient.Create
	I0503 15:27:31.374072   11446 start.go:128] duration metric: took 2.329904125s to createHost
	I0503 15:27:31.374224   11446 start.go:83] releasing machines lock for "calico-874000", held for 2.330337125s
	W0503 15:27:31.374530   11446 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p calico-874000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p calico-874000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0503 15:27:31.383368   11446 out.go:177] 
	W0503 15:27:31.389373   11446 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0503 15:27:31.389406   11446 out.go:239] * 
	* 
	W0503 15:27:31.392125   11446 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0503 15:27:31.400386   11446 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/calico/Start (9.93s)
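
The retry five seconds later fails identically, which points at a dead daemon on the host rather than a transient race. Assuming socket_vmnet lives under /opt/socket_vmnet (the client path minikube invokes above), running the daemon in the foreground is a simple way to confirm the socket comes back; the gateway address below is the example value from the socket_vmnet README, not something recorded in this run:

	sudo /opt/socket_vmnet/bin/socket_vmnet --vmnet-gateway=192.168.105.1 /var/run/socket_vmnet
	# In another shell, re-run the earlier checks; the socket should now accept connections.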

TestNetworkPlugins/group/false/Start (9.73s)

=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p false-874000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p false-874000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=qemu2 : exit status 80 (9.732733167s)

-- stdout --
	* [false-874000] minikube v1.33.0 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18793
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18793-7269/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18793-7269/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "false-874000" primary control-plane node in "false-874000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "false-874000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0503 15:27:33.943817   11579 out.go:291] Setting OutFile to fd 1 ...
	I0503 15:27:33.943941   11579 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0503 15:27:33.943945   11579 out.go:304] Setting ErrFile to fd 2...
	I0503 15:27:33.943947   11579 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0503 15:27:33.944074   11579 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18793-7269/.minikube/bin
	I0503 15:27:33.945164   11579 out.go:298] Setting JSON to false
	I0503 15:27:33.961497   11579 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":5224,"bootTime":1714770029,"procs":510,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0503 15:27:33.961562   11579 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0503 15:27:33.968171   11579 out.go:177] * [false-874000] minikube v1.33.0 on Darwin 14.4.1 (arm64)
	I0503 15:27:33.976121   11579 out.go:177]   - MINIKUBE_LOCATION=18793
	I0503 15:27:33.976170   11579 notify.go:220] Checking for updates...
	I0503 15:27:33.983158   11579 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18793-7269/kubeconfig
	I0503 15:27:33.986109   11579 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0503 15:27:33.989190   11579 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0503 15:27:33.992199   11579 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18793-7269/.minikube
	I0503 15:27:33.995188   11579 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0503 15:27:33.998563   11579 config.go:182] Loaded profile config "multinode-952000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0503 15:27:33.998629   11579 config.go:182] Loaded profile config "stopped-upgrade-139000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0503 15:27:33.998676   11579 driver.go:392] Setting default libvirt URI to qemu:///system
	I0503 15:27:34.003185   11579 out.go:177] * Using the qemu2 driver based on user configuration
	I0503 15:27:34.010152   11579 start.go:297] selected driver: qemu2
	I0503 15:27:34.010161   11579 start.go:901] validating driver "qemu2" against <nil>
	I0503 15:27:34.010168   11579 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0503 15:27:34.012435   11579 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0503 15:27:34.016179   11579 out.go:177] * Automatically selected the socket_vmnet network
	I0503 15:27:34.017606   11579 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0503 15:27:34.017636   11579 cni.go:84] Creating CNI manager for "false"
	I0503 15:27:34.017665   11579 start.go:340] cluster config:
	{Name:false-874000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:false-874000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0503 15:27:34.022048   11579 iso.go:125] acquiring lock: {Name:mk0ad6b95f1bb51c6112b561369aaae11c537b54 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0503 15:27:34.029153   11579 out.go:177] * Starting "false-874000" primary control-plane node in "false-874000" cluster
	I0503 15:27:34.033160   11579 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0503 15:27:34.033175   11579 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18793-7269/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-arm64.tar.lz4
	I0503 15:27:34.033185   11579 cache.go:56] Caching tarball of preloaded images
	I0503 15:27:34.033249   11579 preload.go:173] Found /Users/jenkins/minikube-integration/18793-7269/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0503 15:27:34.033255   11579 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
	I0503 15:27:34.033311   11579 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18793-7269/.minikube/profiles/false-874000/config.json ...
	I0503 15:27:34.033324   11579 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18793-7269/.minikube/profiles/false-874000/config.json: {Name:mk4f6634ae7f99b2d684e2e0116dfc4ed9b44a80 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0503 15:27:34.033546   11579 start.go:360] acquireMachinesLock for false-874000: {Name:mk75a6a65f97ac1ce21c567594b284ec6b0a9ff6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0503 15:27:34.033577   11579 start.go:364] duration metric: took 25.5µs to acquireMachinesLock for "false-874000"
	I0503 15:27:34.033588   11579 start.go:93] Provisioning new machine with config: &{Name:false-874000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:false-874000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0503 15:27:34.033630   11579 start.go:125] createHost starting for "" (driver="qemu2")
	I0503 15:27:34.042138   11579 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0503 15:27:34.058615   11579 start.go:159] libmachine.API.Create for "false-874000" (driver="qemu2")
	I0503 15:27:34.058639   11579 client.go:168] LocalClient.Create starting
	I0503 15:27:34.058699   11579 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18793-7269/.minikube/certs/ca.pem
	I0503 15:27:34.058729   11579 main.go:141] libmachine: Decoding PEM data...
	I0503 15:27:34.058740   11579 main.go:141] libmachine: Parsing certificate...
	I0503 15:27:34.058775   11579 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18793-7269/.minikube/certs/cert.pem
	I0503 15:27:34.058797   11579 main.go:141] libmachine: Decoding PEM data...
	I0503 15:27:34.058806   11579 main.go:141] libmachine: Parsing certificate...
	I0503 15:27:34.059213   11579 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18793-7269/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18793-7269/.minikube/cache/iso/arm64/minikube-v1.33.0-1714498396-18779-arm64.iso...
	I0503 15:27:34.201282   11579 main.go:141] libmachine: Creating SSH key...
	I0503 15:27:34.300161   11579 main.go:141] libmachine: Creating Disk image...
	I0503 15:27:34.300167   11579 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0503 15:27:34.300327   11579 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/false-874000/disk.qcow2.raw /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/false-874000/disk.qcow2
	I0503 15:27:34.313110   11579 main.go:141] libmachine: STDOUT: 
	I0503 15:27:34.313127   11579 main.go:141] libmachine: STDERR: 
	I0503 15:27:34.313202   11579 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/false-874000/disk.qcow2 +20000M
	I0503 15:27:34.324264   11579 main.go:141] libmachine: STDOUT: Image resized.
	
	I0503 15:27:34.324284   11579 main.go:141] libmachine: STDERR: 
	I0503 15:27:34.324306   11579 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/false-874000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/false-874000/disk.qcow2
	I0503 15:27:34.324311   11579 main.go:141] libmachine: Starting QEMU VM...
	I0503 15:27:34.324344   11579 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/false-874000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18793-7269/.minikube/machines/false-874000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/false-874000/qemu.pid -device virtio-net-pci,netdev=net0,mac=7e:d2:b4:d1:7f:94 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/false-874000/disk.qcow2
	I0503 15:27:34.326080   11579 main.go:141] libmachine: STDOUT: 
	I0503 15:27:34.326095   11579 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0503 15:27:34.326119   11579 client.go:171] duration metric: took 267.485334ms to LocalClient.Create
	I0503 15:27:36.328123   11579 start.go:128] duration metric: took 2.294582333s to createHost
	I0503 15:27:36.328160   11579 start.go:83] releasing machines lock for "false-874000", held for 2.294677875s
	W0503 15:27:36.328190   11579 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0503 15:27:36.332881   11579 out.go:177] * Deleting "false-874000" in qemu2 ...
	W0503 15:27:36.344529   11579 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0503 15:27:36.344537   11579 start.go:728] Will try again in 5 seconds ...
	I0503 15:27:41.346471   11579 start.go:360] acquireMachinesLock for false-874000: {Name:mk75a6a65f97ac1ce21c567594b284ec6b0a9ff6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0503 15:27:41.346692   11579 start.go:364] duration metric: took 176.708µs to acquireMachinesLock for "false-874000"
	I0503 15:27:41.346716   11579 start.go:93] Provisioning new machine with config: &{Name:false-874000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:false-874000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0503 15:27:41.346782   11579 start.go:125] createHost starting for "" (driver="qemu2")
	I0503 15:27:41.356082   11579 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0503 15:27:41.376462   11579 start.go:159] libmachine.API.Create for "false-874000" (driver="qemu2")
	I0503 15:27:41.376492   11579 client.go:168] LocalClient.Create starting
	I0503 15:27:41.376562   11579 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18793-7269/.minikube/certs/ca.pem
	I0503 15:27:41.376603   11579 main.go:141] libmachine: Decoding PEM data...
	I0503 15:27:41.376612   11579 main.go:141] libmachine: Parsing certificate...
	I0503 15:27:41.376651   11579 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18793-7269/.minikube/certs/cert.pem
	I0503 15:27:41.376675   11579 main.go:141] libmachine: Decoding PEM data...
	I0503 15:27:41.376684   11579 main.go:141] libmachine: Parsing certificate...
	I0503 15:27:41.376989   11579 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18793-7269/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18793-7269/.minikube/cache/iso/arm64/minikube-v1.33.0-1714498396-18779-arm64.iso...
	I0503 15:27:41.521334   11579 main.go:141] libmachine: Creating SSH key...
	I0503 15:27:41.574131   11579 main.go:141] libmachine: Creating Disk image...
	I0503 15:27:41.574139   11579 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0503 15:27:41.574343   11579 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/false-874000/disk.qcow2.raw /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/false-874000/disk.qcow2
	I0503 15:27:41.587395   11579 main.go:141] libmachine: STDOUT: 
	I0503 15:27:41.587475   11579 main.go:141] libmachine: STDERR: 
	I0503 15:27:41.587525   11579 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/false-874000/disk.qcow2 +20000M
	I0503 15:27:41.599020   11579 main.go:141] libmachine: STDOUT: Image resized.
	
	I0503 15:27:41.599048   11579 main.go:141] libmachine: STDERR: 
	I0503 15:27:41.599070   11579 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/false-874000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/false-874000/disk.qcow2
	I0503 15:27:41.599073   11579 main.go:141] libmachine: Starting QEMU VM...
	I0503 15:27:41.599112   11579 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/false-874000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18793-7269/.minikube/machines/false-874000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/false-874000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ea:ba:02:8d:1b:6f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/false-874000/disk.qcow2
	I0503 15:27:41.600904   11579 main.go:141] libmachine: STDOUT: 
	I0503 15:27:41.600920   11579 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0503 15:27:41.600933   11579 client.go:171] duration metric: took 224.446084ms to LocalClient.Create
	I0503 15:27:43.603047   11579 start.go:128] duration metric: took 2.256316125s to createHost
	I0503 15:27:43.603123   11579 start.go:83] releasing machines lock for "false-874000", held for 2.256506667s
	W0503 15:27:43.603463   11579 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p false-874000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p false-874000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0503 15:27:43.612967   11579 out.go:177] 
	W0503 15:27:43.619128   11579 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0503 15:27:43.619149   11579 out.go:239] * 
	* 
	W0503 15:27:43.620784   11579 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0503 15:27:43.630995   11579 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/false/Start (9.73s)
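Note: every attempt in this group bottoms out at the same step: socket_vmnet_client cannot reach the unix socket at /var/run/socket_vmnet, so the qemu-system-aarch64 process is never launched. Below is a minimal standalone sketch (not minikube code; the socket path is taken from the log above) that dials the socket the way a client would, to confirm the daemon is down before re-running the suite.

// probe_socket_vmnet.go - standalone diagnostic sketch, not part of the test suite.
package main

import (
	"fmt"
	"net"
	"os"
	"time"
)

func main() {
	// SocketVMnetPath from the cluster config logged above.
	const sock = "/var/run/socket_vmnet"
	conn, err := net.DialTimeout("unix", sock, 2*time.Second)
	if err != nil {
		// Reproduces the failure mode seen throughout this report:
		// dial unix /var/run/socket_vmnet: connect: connection refused
		fmt.Fprintf(os.Stderr, "socket_vmnet unreachable: %v\n", err)
		os.Exit(1)
	}
	conn.Close()
	fmt.Println("socket_vmnet is accepting connections")
}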

TestStartStop/group/old-k8s-version/serial/FirstStart (10.13s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p old-k8s-version-698000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p old-k8s-version-698000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0: exit status 80 (10.059738042s)

-- stdout --
	* [old-k8s-version-698000] minikube v1.33.0 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18793
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18793-7269/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18793-7269/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "old-k8s-version-698000" primary control-plane node in "old-k8s-version-698000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "old-k8s-version-698000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0503 15:27:45.979707   11704 out.go:291] Setting OutFile to fd 1 ...
	I0503 15:27:45.979832   11704 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0503 15:27:45.979835   11704 out.go:304] Setting ErrFile to fd 2...
	I0503 15:27:45.979837   11704 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0503 15:27:45.979968   11704 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18793-7269/.minikube/bin
	I0503 15:27:45.981292   11704 out.go:298] Setting JSON to false
	I0503 15:27:45.998087   11704 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":5236,"bootTime":1714770029,"procs":508,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0503 15:27:45.998157   11704 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0503 15:27:46.003784   11704 out.go:177] * [old-k8s-version-698000] minikube v1.33.0 on Darwin 14.4.1 (arm64)
	I0503 15:27:46.012819   11704 out.go:177]   - MINIKUBE_LOCATION=18793
	I0503 15:27:46.015667   11704 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18793-7269/kubeconfig
	I0503 15:27:46.012881   11704 notify.go:220] Checking for updates...
	I0503 15:27:46.018652   11704 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0503 15:27:46.021589   11704 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0503 15:27:46.024552   11704 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18793-7269/.minikube
	I0503 15:27:46.027608   11704 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0503 15:27:46.030971   11704 config.go:182] Loaded profile config "multinode-952000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0503 15:27:46.031043   11704 config.go:182] Loaded profile config "stopped-upgrade-139000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0503 15:27:46.031087   11704 driver.go:392] Setting default libvirt URI to qemu:///system
	I0503 15:27:46.035601   11704 out.go:177] * Using the qemu2 driver based on user configuration
	I0503 15:27:46.042534   11704 start.go:297] selected driver: qemu2
	I0503 15:27:46.042540   11704 start.go:901] validating driver "qemu2" against <nil>
	I0503 15:27:46.042555   11704 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0503 15:27:46.044914   11704 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0503 15:27:46.047539   11704 out.go:177] * Automatically selected the socket_vmnet network
	I0503 15:27:46.050708   11704 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0503 15:27:46.050738   11704 cni.go:84] Creating CNI manager for ""
	I0503 15:27:46.050746   11704 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0503 15:27:46.050776   11704 start.go:340] cluster config:
	{Name:old-k8s-version-698000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-698000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0503 15:27:46.055474   11704 iso.go:125] acquiring lock: {Name:mk0ad6b95f1bb51c6112b561369aaae11c537b54 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0503 15:27:46.062563   11704 out.go:177] * Starting "old-k8s-version-698000" primary control-plane node in "old-k8s-version-698000" cluster
	I0503 15:27:46.066424   11704 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0503 15:27:46.066439   11704 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18793-7269/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0503 15:27:46.066446   11704 cache.go:56] Caching tarball of preloaded images
	I0503 15:27:46.066502   11704 preload.go:173] Found /Users/jenkins/minikube-integration/18793-7269/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0503 15:27:46.066507   11704 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0503 15:27:46.066553   11704 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18793-7269/.minikube/profiles/old-k8s-version-698000/config.json ...
	I0503 15:27:46.066563   11704 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18793-7269/.minikube/profiles/old-k8s-version-698000/config.json: {Name:mk22ad553095a001397cedc806dc9c6851416889 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0503 15:27:46.066943   11704 start.go:360] acquireMachinesLock for old-k8s-version-698000: {Name:mk75a6a65f97ac1ce21c567594b284ec6b0a9ff6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0503 15:27:46.066974   11704 start.go:364] duration metric: took 25.625µs to acquireMachinesLock for "old-k8s-version-698000"
	I0503 15:27:46.066985   11704 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-698000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-698000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0503 15:27:46.067019   11704 start.go:125] createHost starting for "" (driver="qemu2")
	I0503 15:27:46.075386   11704 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0503 15:27:46.091853   11704 start.go:159] libmachine.API.Create for "old-k8s-version-698000" (driver="qemu2")
	I0503 15:27:46.091884   11704 client.go:168] LocalClient.Create starting
	I0503 15:27:46.091960   11704 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18793-7269/.minikube/certs/ca.pem
	I0503 15:27:46.091992   11704 main.go:141] libmachine: Decoding PEM data...
	I0503 15:27:46.092009   11704 main.go:141] libmachine: Parsing certificate...
	I0503 15:27:46.092046   11704 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18793-7269/.minikube/certs/cert.pem
	I0503 15:27:46.092069   11704 main.go:141] libmachine: Decoding PEM data...
	I0503 15:27:46.092076   11704 main.go:141] libmachine: Parsing certificate...
	I0503 15:27:46.092444   11704 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18793-7269/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18793-7269/.minikube/cache/iso/arm64/minikube-v1.33.0-1714498396-18779-arm64.iso...
	I0503 15:27:46.247106   11704 main.go:141] libmachine: Creating SSH key...
	I0503 15:27:46.460863   11704 main.go:141] libmachine: Creating Disk image...
	I0503 15:27:46.460873   11704 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0503 15:27:46.461078   11704 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/old-k8s-version-698000/disk.qcow2.raw /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/old-k8s-version-698000/disk.qcow2
	I0503 15:27:46.474138   11704 main.go:141] libmachine: STDOUT: 
	I0503 15:27:46.474166   11704 main.go:141] libmachine: STDERR: 
	I0503 15:27:46.474220   11704 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/old-k8s-version-698000/disk.qcow2 +20000M
	I0503 15:27:46.485479   11704 main.go:141] libmachine: STDOUT: Image resized.
	
	I0503 15:27:46.485498   11704 main.go:141] libmachine: STDERR: 
	I0503 15:27:46.485514   11704 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/old-k8s-version-698000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/old-k8s-version-698000/disk.qcow2
	I0503 15:27:46.485520   11704 main.go:141] libmachine: Starting QEMU VM...
	I0503 15:27:46.485557   11704 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/old-k8s-version-698000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18793-7269/.minikube/machines/old-k8s-version-698000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/old-k8s-version-698000/qemu.pid -device virtio-net-pci,netdev=net0,mac=2e:16:b0:40:36:87 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/old-k8s-version-698000/disk.qcow2
	I0503 15:27:46.487392   11704 main.go:141] libmachine: STDOUT: 
	I0503 15:27:46.487407   11704 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0503 15:27:46.487435   11704 client.go:171] duration metric: took 395.55ms to LocalClient.Create
	I0503 15:27:48.489585   11704 start.go:128] duration metric: took 2.422617542s to createHost
	I0503 15:27:48.489689   11704 start.go:83] releasing machines lock for "old-k8s-version-698000", held for 2.42279s
	W0503 15:27:48.489800   11704 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0503 15:27:48.497142   11704 out.go:177] * Deleting "old-k8s-version-698000" in qemu2 ...
	W0503 15:27:48.525795   11704 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0503 15:27:48.525833   11704 start.go:728] Will try again in 5 seconds ...
	I0503 15:27:53.527940   11704 start.go:360] acquireMachinesLock for old-k8s-version-698000: {Name:mk75a6a65f97ac1ce21c567594b284ec6b0a9ff6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0503 15:27:53.528463   11704 start.go:364] duration metric: took 378.917µs to acquireMachinesLock for "old-k8s-version-698000"
	I0503 15:27:53.528600   11704 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-698000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-698000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0503 15:27:53.528838   11704 start.go:125] createHost starting for "" (driver="qemu2")
	I0503 15:27:53.539458   11704 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0503 15:27:53.579043   11704 start.go:159] libmachine.API.Create for "old-k8s-version-698000" (driver="qemu2")
	I0503 15:27:53.579095   11704 client.go:168] LocalClient.Create starting
	I0503 15:27:53.579212   11704 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18793-7269/.minikube/certs/ca.pem
	I0503 15:27:53.579272   11704 main.go:141] libmachine: Decoding PEM data...
	I0503 15:27:53.579290   11704 main.go:141] libmachine: Parsing certificate...
	I0503 15:27:53.579355   11704 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18793-7269/.minikube/certs/cert.pem
	I0503 15:27:53.579393   11704 main.go:141] libmachine: Decoding PEM data...
	I0503 15:27:53.579406   11704 main.go:141] libmachine: Parsing certificate...
	I0503 15:27:53.579957   11704 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18793-7269/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18793-7269/.minikube/cache/iso/arm64/minikube-v1.33.0-1714498396-18779-arm64.iso...
	I0503 15:27:53.730267   11704 main.go:141] libmachine: Creating SSH key...
	I0503 15:27:53.940819   11704 main.go:141] libmachine: Creating Disk image...
	I0503 15:27:53.940827   11704 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0503 15:27:53.941036   11704 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/old-k8s-version-698000/disk.qcow2.raw /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/old-k8s-version-698000/disk.qcow2
	I0503 15:27:53.954858   11704 main.go:141] libmachine: STDOUT: 
	I0503 15:27:53.954877   11704 main.go:141] libmachine: STDERR: 
	I0503 15:27:53.954949   11704 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/old-k8s-version-698000/disk.qcow2 +20000M
	I0503 15:27:53.966037   11704 main.go:141] libmachine: STDOUT: Image resized.
	
	I0503 15:27:53.966059   11704 main.go:141] libmachine: STDERR: 
	I0503 15:27:53.966073   11704 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/old-k8s-version-698000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/old-k8s-version-698000/disk.qcow2
	I0503 15:27:53.966078   11704 main.go:141] libmachine: Starting QEMU VM...
	I0503 15:27:53.966104   11704 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/old-k8s-version-698000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18793-7269/.minikube/machines/old-k8s-version-698000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/old-k8s-version-698000/qemu.pid -device virtio-net-pci,netdev=net0,mac=3e:57:7e:61:4b:ee -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/old-k8s-version-698000/disk.qcow2
	I0503 15:27:53.967959   11704 main.go:141] libmachine: STDOUT: 
	I0503 15:27:53.967980   11704 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0503 15:27:53.967992   11704 client.go:171] duration metric: took 388.904583ms to LocalClient.Create
	I0503 15:27:55.968996   11704 start.go:128] duration metric: took 2.440203292s to createHost
	I0503 15:27:55.969075   11704 start.go:83] releasing machines lock for "old-k8s-version-698000", held for 2.440670041s
	W0503 15:27:55.969366   11704 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-698000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-698000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0503 15:27:55.978775   11704 out.go:177] 
	W0503 15:27:55.983765   11704 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0503 15:27:55.983809   11704 out.go:239] * 
	* 
	W0503 15:27:55.985620   11704 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0503 15:27:55.993749   11704 out.go:177] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p old-k8s-version-698000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-698000 -n old-k8s-version-698000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-698000 -n old-k8s-version-698000: exit status 7 (65.008209ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-698000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/FirstStart (10.13s)
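Note: the log above shows minikube's single-retry behavior: createHost fails, the half-created profile is deleted, it waits a fixed five seconds ("Will try again in 5 seconds ..."), retries once, and only then exits with GUEST_PROVISION (exit status 80). A simplified illustration of that flow follows; createHost here is a stand-in, and this is not minikube's actual start.go.

package main

import (
	"errors"
	"fmt"
	"time"
)

// createHost stands in for the real provisioning step; in the log above it
// fails because socket_vmnet_client cannot reach the vmnet socket.
func createHost() error {
	return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
}

func main() {
	if err := createHost(); err != nil {
		fmt.Println("! StartHost failed, but will try again:", err)
		time.Sleep(5 * time.Second) // fixed delay between the two attempts
		if err := createHost(); err != nil {
			// The second failure is terminal: GUEST_PROVISION / exit status 80.
			fmt.Println("X Exiting due to GUEST_PROVISION:", err)
		}
	}
}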

TestStartStop/group/old-k8s-version/serial/DeployApp (0.09s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-698000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context old-k8s-version-698000 create -f testdata/busybox.yaml: exit status 1 (29.005084ms)

** stderr ** 
	error: context "old-k8s-version-698000" does not exist

** /stderr **
start_stop_delete_test.go:196: kubectl --context old-k8s-version-698000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-698000 -n old-k8s-version-698000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-698000 -n old-k8s-version-698000: exit status 7 (32.093916ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-698000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-698000 -n old-k8s-version-698000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-698000 -n old-k8s-version-698000: exit status 7 (31.490833ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-698000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/DeployApp (0.09s)
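Note: DeployApp never reaches a cluster: because FirstStart failed, no kubeconfig context named old-k8s-version-698000 was ever written, so kubectl create fails immediately. A small sketch (a hypothetical helper, not part of the test suite) that checks for the context up front using the standard `kubectl config get-contexts -o name` listing:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// contextExists lists kubeconfig contexts by name and reports whether the
// requested one is present.
func contextExists(name string) (bool, error) {
	out, err := exec.Command("kubectl", "config", "get-contexts", "-o", "name").Output()
	if err != nil {
		return false, err
	}
	for _, ctx := range strings.Split(strings.TrimSpace(string(out)), "\n") {
		if ctx == name {
			return true, nil
		}
	}
	return false, nil
}

func main() {
	ok, err := contextExists("old-k8s-version-698000")
	fmt.Println(ok, err) // false on this host: FirstStart never wrote the context
}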

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.12s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p old-k8s-version-698000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-698000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context old-k8s-version-698000 describe deploy/metrics-server -n kube-system: exit status 1 (28.196917ms)

** stderr ** 
	error: context "old-k8s-version-698000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-698000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-698000 -n old-k8s-version-698000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-698000 -n old-k8s-version-698000: exit status 7 (29.940333ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-698000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.12s)
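Note: this test enables the metrics-server addon with an overridden image and registry, then expects the deployment's image to contain fake.domain/registry.k8s.io/echoserver:1.4; here it fails at the same missing-context step as DeployApp. Below is a sketch of the underlying image assertion via a kubectl JSONPath query (the real test parses `kubectl describe` output instead; the context and names are taken from the invocation above):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// JSONPath to the first container image of the metrics-server deployment.
	out, err := exec.Command("kubectl", "--context", "old-k8s-version-698000",
		"-n", "kube-system", "get", "deploy", "metrics-server",
		"-o", "jsonpath={.spec.template.spec.containers[0].image}").Output()
	if err != nil {
		// Expected here: the context was never created, so the query fails.
		fmt.Println("lookup failed:", err)
		return
	}
	const want = "fake.domain/registry.k8s.io/echoserver:1.4"
	fmt.Println("override applied:", strings.Contains(string(out), want))
}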

TestStartStop/group/old-k8s-version/serial/SecondStart (5.24s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p old-k8s-version-698000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p old-k8s-version-698000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0: exit status 80 (5.189137625s)

-- stdout --
	* [old-k8s-version-698000] minikube v1.33.0 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18793
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18793-7269/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18793-7269/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.30.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.0
	* Using the qemu2 driver based on existing profile
	* Starting "old-k8s-version-698000" primary control-plane node in "old-k8s-version-698000" cluster
	* Restarting existing qemu2 VM for "old-k8s-version-698000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "old-k8s-version-698000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0503 15:27:59.979803   11777 out.go:291] Setting OutFile to fd 1 ...
	I0503 15:27:59.979918   11777 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0503 15:27:59.979921   11777 out.go:304] Setting ErrFile to fd 2...
	I0503 15:27:59.979923   11777 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0503 15:27:59.980042   11777 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18793-7269/.minikube/bin
	I0503 15:27:59.981104   11777 out.go:298] Setting JSON to false
	I0503 15:27:59.997269   11777 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":5250,"bootTime":1714770029,"procs":512,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0503 15:27:59.997339   11777 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0503 15:28:00.001074   11777 out.go:177] * [old-k8s-version-698000] minikube v1.33.0 on Darwin 14.4.1 (arm64)
	I0503 15:28:00.009087   11777 out.go:177]   - MINIKUBE_LOCATION=18793
	I0503 15:28:00.012051   11777 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18793-7269/kubeconfig
	I0503 15:28:00.009130   11777 notify.go:220] Checking for updates...
	I0503 15:28:00.018009   11777 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0503 15:28:00.019517   11777 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0503 15:28:00.022997   11777 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18793-7269/.minikube
	I0503 15:28:00.026050   11777 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0503 15:28:00.029396   11777 config.go:182] Loaded profile config "old-k8s-version-698000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I0503 15:28:00.033041   11777 out.go:177] * Kubernetes 1.30.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.0
	I0503 15:28:00.036052   11777 driver.go:392] Setting default libvirt URI to qemu:///system
	I0503 15:28:00.040074   11777 out.go:177] * Using the qemu2 driver based on existing profile
	I0503 15:28:00.047009   11777 start.go:297] selected driver: qemu2
	I0503 15:28:00.047017   11777 start.go:901] validating driver "qemu2" against &{Name:old-k8s-version-698000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-698000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0503 15:28:00.047093   11777 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0503 15:28:00.049381   11777 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0503 15:28:00.049430   11777 cni.go:84] Creating CNI manager for ""
	I0503 15:28:00.049437   11777 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0503 15:28:00.049467   11777 start.go:340] cluster config:
	{Name:old-k8s-version-698000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-698000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0503 15:28:00.053829   11777 iso.go:125] acquiring lock: {Name:mk0ad6b95f1bb51c6112b561369aaae11c537b54 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0503 15:28:00.060977   11777 out.go:177] * Starting "old-k8s-version-698000" primary control-plane node in "old-k8s-version-698000" cluster
	I0503 15:28:00.065076   11777 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0503 15:28:00.065090   11777 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18793-7269/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0503 15:28:00.065098   11777 cache.go:56] Caching tarball of preloaded images
	I0503 15:28:00.065174   11777 preload.go:173] Found /Users/jenkins/minikube-integration/18793-7269/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0503 15:28:00.065179   11777 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0503 15:28:00.065236   11777 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18793-7269/.minikube/profiles/old-k8s-version-698000/config.json ...
	I0503 15:28:00.065618   11777 start.go:360] acquireMachinesLock for old-k8s-version-698000: {Name:mk75a6a65f97ac1ce21c567594b284ec6b0a9ff6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0503 15:28:00.065643   11777 start.go:364] duration metric: took 19.959µs to acquireMachinesLock for "old-k8s-version-698000"
	I0503 15:28:00.065652   11777 start.go:96] Skipping create...Using existing machine configuration
	I0503 15:28:00.065658   11777 fix.go:54] fixHost starting: 
	I0503 15:28:00.065761   11777 fix.go:112] recreateIfNeeded on old-k8s-version-698000: state=Stopped err=<nil>
	W0503 15:28:00.065771   11777 fix.go:138] unexpected machine state, will restart: <nil>
	I0503 15:28:00.070016   11777 out.go:177] * Restarting existing qemu2 VM for "old-k8s-version-698000" ...
	I0503 15:28:00.078039   11777 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/old-k8s-version-698000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18793-7269/.minikube/machines/old-k8s-version-698000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/old-k8s-version-698000/qemu.pid -device virtio-net-pci,netdev=net0,mac=3e:57:7e:61:4b:ee -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/old-k8s-version-698000/disk.qcow2
	I0503 15:28:00.080032   11777 main.go:141] libmachine: STDOUT: 
	I0503 15:28:00.080052   11777 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0503 15:28:00.080079   11777 fix.go:56] duration metric: took 14.422083ms for fixHost
	I0503 15:28:00.080083   11777 start.go:83] releasing machines lock for "old-k8s-version-698000", held for 14.4365ms
	W0503 15:28:00.080091   11777 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0503 15:28:00.080126   11777 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0503 15:28:00.080130   11777 start.go:728] Will try again in 5 seconds ...
	I0503 15:28:05.082175   11777 start.go:360] acquireMachinesLock for old-k8s-version-698000: {Name:mk75a6a65f97ac1ce21c567594b284ec6b0a9ff6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0503 15:28:05.082766   11777 start.go:364] duration metric: took 357µs to acquireMachinesLock for "old-k8s-version-698000"
	I0503 15:28:05.082953   11777 start.go:96] Skipping create...Using existing machine configuration
	I0503 15:28:05.082976   11777 fix.go:54] fixHost starting: 
	I0503 15:28:05.083710   11777 fix.go:112] recreateIfNeeded on old-k8s-version-698000: state=Stopped err=<nil>
	W0503 15:28:05.083735   11777 fix.go:138] unexpected machine state, will restart: <nil>
	I0503 15:28:05.093241   11777 out.go:177] * Restarting existing qemu2 VM for "old-k8s-version-698000" ...
	I0503 15:28:05.096487   11777 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/old-k8s-version-698000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18793-7269/.minikube/machines/old-k8s-version-698000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/old-k8s-version-698000/qemu.pid -device virtio-net-pci,netdev=net0,mac=3e:57:7e:61:4b:ee -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/old-k8s-version-698000/disk.qcow2
	I0503 15:28:05.103086   11777 main.go:141] libmachine: STDOUT: 
	I0503 15:28:05.103145   11777 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0503 15:28:05.103223   11777 fix.go:56] duration metric: took 20.252625ms for fixHost
	I0503 15:28:05.103235   11777 start.go:83] releasing machines lock for "old-k8s-version-698000", held for 20.422833ms
	W0503 15:28:05.103389   11777 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-698000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0503 15:28:05.114803   11777 out.go:177] 
	W0503 15:28:05.118382   11777 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0503 15:28:05.118401   11777 out.go:239] * 
	W0503 15:28:05.119649   11777 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0503 15:28:05.129209   11777 out.go:177] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p old-k8s-version-698000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-698000 -n old-k8s-version-698000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-698000 -n old-k8s-version-698000: exit status 7 (46.2885ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-698000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (5.24s)
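
All of the start failures in this group share one root cause: the qemu2 driver launches each VM through socket_vmnet_client, which must reach the socket_vmnet daemon listening on /var/run/socket_vmnet, and every attempt is refused. A minimal probe that reproduces the failure mode, as a sketch (hypothetical file, not part of the test suite; the socket path is the SocketVMnetPath from the cluster config logged in this report):

	// probe_socket_vmnet.go — dial the unix socket the qemu2 driver depends on.
	package main

	import (
		"fmt"
		"net"
	)

	func main() {
		// SocketVMnetPath from the cluster config logged in this report.
		conn, err := net.Dial("unix", "/var/run/socket_vmnet")
		if err != nil {
			// Expected failure mode here: "connect: connection refused" when the
			// socket exists but no socket_vmnet daemon is accepting on it.
			fmt.Println("socket_vmnet not reachable:", err)
			return
		}
		defer conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}

If this probe reports "connection refused", the daemon is down on the CI host; "minikube delete -p old-k8s-version-698000" alone will not recover, because the refusal happens before the VM ever boots.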

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (0.03s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "old-k8s-version-698000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-698000 -n old-k8s-version-698000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-698000 -n old-k8s-version-698000: exit status 7 (31.835ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-698000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (0.03s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (0.06s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "old-k8s-version-698000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-698000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context old-k8s-version-698000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (28.476041ms)

** stderr ** 
	error: context "old-k8s-version-698000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context old-k8s-version-698000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-698000 -n old-k8s-version-698000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-698000 -n old-k8s-version-698000: exit status 7 (34.786084ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-698000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (0.06s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.08s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p old-k8s-version-698000 image list --format=json
start_stop_delete_test.go:304: v1.20.0 images missing (-want +got):
  []string{
- 	"k8s.gcr.io/coredns:1.7.0",
- 	"k8s.gcr.io/etcd:3.4.13-0",
- 	"k8s.gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"k8s.gcr.io/kube-apiserver:v1.20.0",
- 	"k8s.gcr.io/kube-controller-manager:v1.20.0",
- 	"k8s.gcr.io/kube-proxy:v1.20.0",
- 	"k8s.gcr.io/kube-scheduler:v1.20.0",
- 	"k8s.gcr.io/pause:3.2",
  }
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-698000 -n old-k8s-version-698000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-698000 -n old-k8s-version-698000: exit status 7 (38.747917ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-698000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.08s)

TestStartStop/group/old-k8s-version/serial/Pause (0.13s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p old-k8s-version-698000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p old-k8s-version-698000 --alsologtostderr -v=1: exit status 83 (51.870875ms)

-- stdout --
	* The control-plane node old-k8s-version-698000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p old-k8s-version-698000"

-- /stdout --
** stderr ** 
	I0503 15:28:05.395648   11807 out.go:291] Setting OutFile to fd 1 ...
	I0503 15:28:05.395979   11807 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0503 15:28:05.395984   11807 out.go:304] Setting ErrFile to fd 2...
	I0503 15:28:05.395987   11807 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0503 15:28:05.396129   11807 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18793-7269/.minikube/bin
	I0503 15:28:05.396355   11807 out.go:298] Setting JSON to false
	I0503 15:28:05.396365   11807 mustload.go:65] Loading cluster: old-k8s-version-698000
	I0503 15:28:05.396565   11807 config.go:182] Loaded profile config "old-k8s-version-698000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I0503 15:28:05.400267   11807 out.go:177] * The control-plane node old-k8s-version-698000 host is not running: state=Stopped
	I0503 15:28:05.408255   11807 out.go:177]   To start a cluster, run: "minikube start -p old-k8s-version-698000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p old-k8s-version-698000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-698000 -n old-k8s-version-698000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-698000 -n old-k8s-version-698000: exit status 7 (39.7465ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-698000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-698000 -n old-k8s-version-698000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-698000 -n old-k8s-version-698000: exit status 7 (36.85925ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-698000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/Pause (0.13s)
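
The post-mortem helper repeats the same probe after every subtest: it runs "minikube status --format={{.Host}}" and treats a non-zero exit as a possibly-acceptable degraded state. A sketch of that check (assumes a minikube binary on PATH; the harness itself invokes out/minikube-darwin-arm64):

	// status_check.go — mirror the harness's host-state probe.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		cmd := exec.Command("minikube", "status", "--format={{.Host}}", "-p", "old-k8s-version-698000")
		out, err := cmd.Output() // stdout carries the host state, e.g. "Stopped"
		fmt.Printf("host state: %s\n", out)
		if ee, ok := err.(*exec.ExitError); ok {
			// In this report a stopped host surfaces as exit status 7, which
			// the harness accepts as "may be ok" rather than a hard failure.
			fmt.Println("exit code:", ee.ExitCode())
		}
	}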

TestStartStop/group/no-preload/serial/FirstStart (9.86s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p no-preload-315000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.30.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p no-preload-315000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.30.0: exit status 80 (9.803660542s)

-- stdout --
	* [no-preload-315000] minikube v1.33.0 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18793
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18793-7269/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18793-7269/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "no-preload-315000" primary control-plane node in "no-preload-315000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "no-preload-315000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0503 15:28:05.944129   11832 out.go:291] Setting OutFile to fd 1 ...
	I0503 15:28:05.944255   11832 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0503 15:28:05.944258   11832 out.go:304] Setting ErrFile to fd 2...
	I0503 15:28:05.944261   11832 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0503 15:28:05.944390   11832 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18793-7269/.minikube/bin
	I0503 15:28:05.945558   11832 out.go:298] Setting JSON to false
	I0503 15:28:05.963152   11832 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":5256,"bootTime":1714770029,"procs":511,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0503 15:28:05.963226   11832 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0503 15:28:05.966260   11832 out.go:177] * [no-preload-315000] minikube v1.33.0 on Darwin 14.4.1 (arm64)
	I0503 15:28:05.973139   11832 out.go:177]   - MINIKUBE_LOCATION=18793
	I0503 15:28:05.973174   11832 notify.go:220] Checking for updates...
	I0503 15:28:05.980036   11832 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18793-7269/kubeconfig
	I0503 15:28:05.983239   11832 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0503 15:28:05.986236   11832 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0503 15:28:05.987685   11832 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18793-7269/.minikube
	I0503 15:28:05.991168   11832 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0503 15:28:05.994614   11832 config.go:182] Loaded profile config "multinode-952000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0503 15:28:05.994678   11832 config.go:182] Loaded profile config "stopped-upgrade-139000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0503 15:28:05.994721   11832 driver.go:392] Setting default libvirt URI to qemu:///system
	I0503 15:28:05.999013   11832 out.go:177] * Using the qemu2 driver based on user configuration
	I0503 15:28:06.006163   11832 start.go:297] selected driver: qemu2
	I0503 15:28:06.006170   11832 start.go:901] validating driver "qemu2" against <nil>
	I0503 15:28:06.006176   11832 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0503 15:28:06.008434   11832 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0503 15:28:06.011246   11832 out.go:177] * Automatically selected the socket_vmnet network
	I0503 15:28:06.014362   11832 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0503 15:28:06.014391   11832 cni.go:84] Creating CNI manager for ""
	I0503 15:28:06.014399   11832 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0503 15:28:06.014403   11832 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0503 15:28:06.014435   11832 start.go:340] cluster config:
	{Name:no-preload-315000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:no-preload-315000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0503 15:28:06.018928   11832 iso.go:125] acquiring lock: {Name:mk0ad6b95f1bb51c6112b561369aaae11c537b54 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0503 15:28:06.025104   11832 out.go:177] * Starting "no-preload-315000" primary control-plane node in "no-preload-315000" cluster
	I0503 15:28:06.029223   11832 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0503 15:28:06.029301   11832 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18793-7269/.minikube/profiles/no-preload-315000/config.json ...
	I0503 15:28:06.029320   11832 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18793-7269/.minikube/profiles/no-preload-315000/config.json: {Name:mk82216a0c0ecef5ba43e8a8ef8735126fc3f1aa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0503 15:28:06.029352   11832 cache.go:107] acquiring lock: {Name:mke48e50e1b163c1693d62c6d4b46294eaaa0554 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0503 15:28:06.029350   11832 cache.go:107] acquiring lock: {Name:mk59b0ddfab93486e4257ae7d3522e99cb1ecff2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0503 15:28:06.029412   11832 cache.go:107] acquiring lock: {Name:mka891bb6046612ac161d6844e307f94c3f19486 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0503 15:28:06.029423   11832 cache.go:107] acquiring lock: {Name:mk1788d0cc29eaa093d22f1caddd7bdb0a641d03 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0503 15:28:06.029457   11832 cache.go:107] acquiring lock: {Name:mkcd8c0d2ae47710eb50f4ba3a012be8fb6c6215 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0503 15:28:06.029483   11832 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.30.0
	I0503 15:28:06.029414   11832 cache.go:115] /Users/jenkins/minikube-integration/18793-7269/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0503 15:28:06.029537   11832 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.30.0
	I0503 15:28:06.029510   11832 cache.go:107] acquiring lock: {Name:mk4f3cdfbdd5042aff2863105ebef0814ce0bc2d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0503 15:28:06.029591   11832 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I0503 15:28:06.029626   11832 cache.go:107] acquiring lock: {Name:mk6a3e50b42106c6015a895e82180e6eab836442 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0503 15:28:06.029629   11832 start.go:360] acquireMachinesLock for no-preload-315000: {Name:mk75a6a65f97ac1ce21c567594b284ec6b0a9ff6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0503 15:28:06.029647   11832 cache.go:107] acquiring lock: {Name:mkcb7524d6695a6e0ccb7d40be659cec25f4639d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0503 15:28:06.029576   11832 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/18793-7269/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 187.667µs
	I0503 15:28:06.029680   11832 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/18793-7269/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0503 15:28:06.029683   11832 start.go:364] duration metric: took 45.458µs to acquireMachinesLock for "no-preload-315000"
	I0503 15:28:06.029696   11832 start.go:93] Provisioning new machine with config: &{Name:no-preload-315000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:no-preload-315000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0503 15:28:06.029742   11832 start.go:125] createHost starting for "" (driver="qemu2")
	I0503 15:28:06.029748   11832 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.30.0
	I0503 15:28:06.034256   11832 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0503 15:28:06.029787   11832 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.12-0
	I0503 15:28:06.029792   11832 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.30.0
	I0503 15:28:06.029830   11832 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0503 15:28:06.040781   11832 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.30.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.30.0
	I0503 15:28:06.040924   11832 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.30.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.30.0
	I0503 15:28:06.041573   11832 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I0503 15:28:06.051846   11832 start.go:159] libmachine.API.Create for "no-preload-315000" (driver="qemu2")
	I0503 15:28:06.051869   11832 client.go:168] LocalClient.Create starting
	I0503 15:28:06.051933   11832 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18793-7269/.minikube/certs/ca.pem
	I0503 15:28:06.051962   11832 main.go:141] libmachine: Decoding PEM data...
	I0503 15:28:06.051971   11832 main.go:141] libmachine: Parsing certificate...
	I0503 15:28:06.052014   11832 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18793-7269/.minikube/certs/cert.pem
	I0503 15:28:06.052038   11832 main.go:141] libmachine: Decoding PEM data...
	I0503 15:28:06.052044   11832 main.go:141] libmachine: Parsing certificate...
	I0503 15:28:06.052338   11832 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18793-7269/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18793-7269/.minikube/cache/iso/arm64/minikube-v1.33.0-1714498396-18779-arm64.iso...
	I0503 15:28:06.054172   11832 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.30.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.30.0
	I0503 15:28:06.054198   11832 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.12-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.12-0
	I0503 15:28:06.054583   11832 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.30.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.30.0
	I0503 15:28:06.056914   11832 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0503 15:28:06.199987   11832 main.go:141] libmachine: Creating SSH key...
	I0503 15:28:06.337843   11832 main.go:141] libmachine: Creating Disk image...
	I0503 15:28:06.337866   11832 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0503 15:28:06.338014   11832 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/no-preload-315000/disk.qcow2.raw /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/no-preload-315000/disk.qcow2
	I0503 15:28:06.351414   11832 main.go:141] libmachine: STDOUT: 
	I0503 15:28:06.351442   11832 main.go:141] libmachine: STDERR: 
	I0503 15:28:06.351493   11832 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/no-preload-315000/disk.qcow2 +20000M
	I0503 15:28:06.362766   11832 main.go:141] libmachine: STDOUT: Image resized.
	
	I0503 15:28:06.362786   11832 main.go:141] libmachine: STDERR: 
	I0503 15:28:06.362800   11832 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/no-preload-315000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/no-preload-315000/disk.qcow2
	I0503 15:28:06.362805   11832 main.go:141] libmachine: Starting QEMU VM...
	I0503 15:28:06.362831   11832 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/no-preload-315000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18793-7269/.minikube/machines/no-preload-315000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/no-preload-315000/qemu.pid -device virtio-net-pci,netdev=net0,mac=c6:a2:df:cb:14:73 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/no-preload-315000/disk.qcow2
	I0503 15:28:06.364672   11832 main.go:141] libmachine: STDOUT: 
	I0503 15:28:06.364710   11832 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0503 15:28:06.364729   11832 client.go:171] duration metric: took 312.864625ms to LocalClient.Create
	I0503 15:28:06.961584   11832 cache.go:162] opening:  /Users/jenkins/minikube-integration/18793-7269/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.12-0
	I0503 15:28:06.989775   11832 cache.go:162] opening:  /Users/jenkins/minikube-integration/18793-7269/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.30.0
	I0503 15:28:06.997524   11832 cache.go:162] opening:  /Users/jenkins/minikube-integration/18793-7269/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.30.0
	I0503 15:28:07.001623   11832 cache.go:162] opening:  /Users/jenkins/minikube-integration/18793-7269/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9
	I0503 15:28:07.122575   11832 cache.go:157] /Users/jenkins/minikube-integration/18793-7269/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9 exists
	I0503 15:28:07.122667   11832 cache.go:96] cache image "registry.k8s.io/pause:3.9" -> "/Users/jenkins/minikube-integration/18793-7269/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9" took 1.093233917s
	I0503 15:28:07.122721   11832 cache.go:80] save to tar file registry.k8s.io/pause:3.9 -> /Users/jenkins/minikube-integration/18793-7269/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9 succeeded
	I0503 15:28:07.146064   11832 cache.go:162] opening:  /Users/jenkins/minikube-integration/18793-7269/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.30.0
	I0503 15:28:07.155971   11832 cache.go:162] opening:  /Users/jenkins/minikube-integration/18793-7269/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.30.0
	I0503 15:28:07.170321   11832 cache.go:162] opening:  /Users/jenkins/minikube-integration/18793-7269/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1
	I0503 15:28:08.364799   11832 start.go:128] duration metric: took 2.335116542s to createHost
	I0503 15:28:08.364827   11832 start.go:83] releasing machines lock for "no-preload-315000", held for 2.335209417s
	W0503 15:28:08.364854   11832 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0503 15:28:08.374765   11832 out.go:177] * Deleting "no-preload-315000" in qemu2 ...
	W0503 15:28:08.396804   11832 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0503 15:28:08.396820   11832 start.go:728] Will try again in 5 seconds ...
	I0503 15:28:09.412249   11832 cache.go:157] /Users/jenkins/minikube-integration/18793-7269/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1 exists
	I0503 15:28:09.412279   11832 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.11.1" -> "/Users/jenkins/minikube-integration/18793-7269/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1" took 3.382781s
	I0503 15:28:09.412294   11832 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.11.1 -> /Users/jenkins/minikube-integration/18793-7269/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1 succeeded
	I0503 15:28:09.709142   11832 cache.go:157] /Users/jenkins/minikube-integration/18793-7269/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.30.0 exists
	I0503 15:28:09.709155   11832 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.30.0" -> "/Users/jenkins/minikube-integration/18793-7269/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.30.0" took 3.679620875s
	I0503 15:28:09.709161   11832 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.30.0 -> /Users/jenkins/minikube-integration/18793-7269/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.30.0 succeeded
	I0503 15:28:09.739248   11832 cache.go:157] /Users/jenkins/minikube-integration/18793-7269/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.30.0 exists
	I0503 15:28:09.739257   11832 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.30.0" -> "/Users/jenkins/minikube-integration/18793-7269/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.30.0" took 3.709947875s
	I0503 15:28:09.739262   11832 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.30.0 -> /Users/jenkins/minikube-integration/18793-7269/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.30.0 succeeded
	I0503 15:28:11.395294   11832 cache.go:157] /Users/jenkins/minikube-integration/18793-7269/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.30.0 exists
	I0503 15:28:11.395340   11832 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.30.0" -> "/Users/jenkins/minikube-integration/18793-7269/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.30.0" took 5.366108875s
	I0503 15:28:11.395384   11832 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.30.0 -> /Users/jenkins/minikube-integration/18793-7269/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.30.0 succeeded
	I0503 15:28:11.828212   11832 cache.go:157] /Users/jenkins/minikube-integration/18793-7269/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.30.0 exists
	I0503 15:28:11.828240   11832 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.30.0" -> "/Users/jenkins/minikube-integration/18793-7269/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.30.0" took 5.79906625s
	I0503 15:28:11.828253   11832 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.30.0 -> /Users/jenkins/minikube-integration/18793-7269/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.30.0 succeeded
	I0503 15:28:13.396847   11832 start.go:360] acquireMachinesLock for no-preload-315000: {Name:mk75a6a65f97ac1ce21c567594b284ec6b0a9ff6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0503 15:28:13.397086   11832 start.go:364] duration metric: took 198.375µs to acquireMachinesLock for "no-preload-315000"
	I0503 15:28:13.397154   11832 start.go:93] Provisioning new machine with config: &{Name:no-preload-315000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:no-preload-315000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0503 15:28:13.397243   11832 start.go:125] createHost starting for "" (driver="qemu2")
	I0503 15:28:13.409120   11832 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0503 15:28:13.442515   11832 start.go:159] libmachine.API.Create for "no-preload-315000" (driver="qemu2")
	I0503 15:28:13.442567   11832 client.go:168] LocalClient.Create starting
	I0503 15:28:13.442676   11832 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18793-7269/.minikube/certs/ca.pem
	I0503 15:28:13.442759   11832 main.go:141] libmachine: Decoding PEM data...
	I0503 15:28:13.442778   11832 main.go:141] libmachine: Parsing certificate...
	I0503 15:28:13.442844   11832 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18793-7269/.minikube/certs/cert.pem
	I0503 15:28:13.442883   11832 main.go:141] libmachine: Decoding PEM data...
	I0503 15:28:13.442899   11832 main.go:141] libmachine: Parsing certificate...
	I0503 15:28:13.443384   11832 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18793-7269/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18793-7269/.minikube/cache/iso/arm64/minikube-v1.33.0-1714498396-18779-arm64.iso...
	I0503 15:28:13.589695   11832 main.go:141] libmachine: Creating SSH key...
	I0503 15:28:13.642588   11832 main.go:141] libmachine: Creating Disk image...
	I0503 15:28:13.642594   11832 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0503 15:28:13.642754   11832 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/no-preload-315000/disk.qcow2.raw /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/no-preload-315000/disk.qcow2
	I0503 15:28:13.655753   11832 main.go:141] libmachine: STDOUT: 
	I0503 15:28:13.655773   11832 main.go:141] libmachine: STDERR: 
	I0503 15:28:13.655829   11832 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/no-preload-315000/disk.qcow2 +20000M
	I0503 15:28:13.667465   11832 main.go:141] libmachine: STDOUT: Image resized.
	
	I0503 15:28:13.667486   11832 main.go:141] libmachine: STDERR: 
	I0503 15:28:13.667506   11832 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/no-preload-315000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/no-preload-315000/disk.qcow2
	I0503 15:28:13.667509   11832 main.go:141] libmachine: Starting QEMU VM...
	I0503 15:28:13.667552   11832 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/no-preload-315000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18793-7269/.minikube/machines/no-preload-315000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/no-preload-315000/qemu.pid -device virtio-net-pci,netdev=net0,mac=36:e6:d7:20:bc:be -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/no-preload-315000/disk.qcow2
	I0503 15:28:13.669397   11832 main.go:141] libmachine: STDOUT: 
	I0503 15:28:13.669413   11832 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0503 15:28:13.669448   11832 client.go:171] duration metric: took 226.878875ms to LocalClient.Create
	I0503 15:28:14.672052   11832 cache.go:157] /Users/jenkins/minikube-integration/18793-7269/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.12-0 exists
	I0503 15:28:14.672116   11832 cache.go:96] cache image "registry.k8s.io/etcd:3.5.12-0" -> "/Users/jenkins/minikube-integration/18793-7269/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.12-0" took 8.642882125s
	I0503 15:28:14.672138   11832 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.12-0 -> /Users/jenkins/minikube-integration/18793-7269/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.12-0 succeeded
	I0503 15:28:14.672175   11832 cache.go:87] Successfully saved all images to host disk.
	I0503 15:28:15.671577   11832 start.go:128] duration metric: took 2.274368375s to createHost
	I0503 15:28:15.671649   11832 start.go:83] releasing machines lock for "no-preload-315000", held for 2.274613667s
	W0503 15:28:15.671945   11832 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p no-preload-315000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0503 15:28:15.687671   11832 out.go:177] 
	W0503 15:28:15.691701   11832 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0503 15:28:15.691755   11832 out.go:239] * 
	W0503 15:28:15.694242   11832 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0503 15:28:15.702584   11832 out.go:177] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p no-preload-315000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.30.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-315000 -n no-preload-315000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-315000 -n no-preload-315000: exit status 7 (52.565875ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-315000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/FirstStart (9.86s)
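
Note that in this run every image for v1.30.0 was cached to disk successfully (the cache.go lines above) while both createHost attempts failed, so the --preload=false path is unaffected by the outage; the failure is isolated to VM networking rather than image handling. A sketch of verifying the cache on disk (assumes the MINIKUBE_HOME layout shown in the logs, with MINIKUBE_HOME exported as in this CI environment):

	// cache_check.go — confirm the saved image tarballs exist.
	package main

	import (
		"fmt"
		"os"
		"path/filepath"
	)

	func main() {
		base := os.ExpandEnv("$MINIKUBE_HOME/cache/images/arm64/registry.k8s.io")
		for _, name := range []string{"kube-apiserver_v1.30.0", "etcd_3.5.12-0", "pause_3.9"} {
			if info, err := os.Stat(filepath.Join(base, name)); err == nil {
				fmt.Printf("%s: %d bytes\n", name, info.Size())
			} else {
				fmt.Println(err)
			}
		}
	}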

TestStartStop/group/no-preload/serial/DeployApp (0.09s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-315000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context no-preload-315000 create -f testdata/busybox.yaml: exit status 1 (28.772166ms)

** stderr ** 
	error: context "no-preload-315000" does not exist

** /stderr **
start_stop_delete_test.go:196: kubectl --context no-preload-315000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-315000 -n no-preload-315000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-315000 -n no-preload-315000: exit status 7 (32.293625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-315000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-315000 -n no-preload-315000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-315000 -n no-preload-315000: exit status 7 (31.93775ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-315000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/DeployApp (0.09s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.12s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p no-preload-315000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-315000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context no-preload-315000 describe deploy/metrics-server -n kube-system: exit status 1 (27.722375ms)

** stderr ** 
	error: context "no-preload-315000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context no-preload-315000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-315000 -n no-preload-315000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-315000 -n no-preload-315000: exit status 7 (33.33725ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-315000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.12s)

TestStartStop/group/no-preload/serial/SecondStart (5.26s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p no-preload-315000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.30.0
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p no-preload-315000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.30.0: exit status 80 (5.188370083s)

-- stdout --
	* [no-preload-315000] minikube v1.33.0 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18793
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18793-7269/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18793-7269/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "no-preload-315000" primary control-plane node in "no-preload-315000" cluster
	* Restarting existing qemu2 VM for "no-preload-315000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "no-preload-315000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0503 15:28:19.480843   11928 out.go:291] Setting OutFile to fd 1 ...
	I0503 15:28:19.480960   11928 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0503 15:28:19.480963   11928 out.go:304] Setting ErrFile to fd 2...
	I0503 15:28:19.480966   11928 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0503 15:28:19.481087   11928 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18793-7269/.minikube/bin
	I0503 15:28:19.482202   11928 out.go:298] Setting JSON to false
	I0503 15:28:19.498249   11928 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":5270,"bootTime":1714770029,"procs":511,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0503 15:28:19.498322   11928 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0503 15:28:19.502542   11928 out.go:177] * [no-preload-315000] minikube v1.33.0 on Darwin 14.4.1 (arm64)
	I0503 15:28:19.509594   11928 out.go:177]   - MINIKUBE_LOCATION=18793
	I0503 15:28:19.512614   11928 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18793-7269/kubeconfig
	I0503 15:28:19.509656   11928 notify.go:220] Checking for updates...
	I0503 15:28:19.518590   11928 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0503 15:28:19.521620   11928 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0503 15:28:19.524612   11928 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18793-7269/.minikube
	I0503 15:28:19.527568   11928 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0503 15:28:19.530861   11928 config.go:182] Loaded profile config "no-preload-315000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0503 15:28:19.531100   11928 driver.go:392] Setting default libvirt URI to qemu:///system
	I0503 15:28:19.534534   11928 out.go:177] * Using the qemu2 driver based on existing profile
	I0503 15:28:19.541548   11928 start.go:297] selected driver: qemu2
	I0503 15:28:19.541555   11928 start.go:901] validating driver "qemu2" against &{Name:no-preload-315000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:no-preload-315000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0503 15:28:19.541614   11928 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0503 15:28:19.543852   11928 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0503 15:28:19.543886   11928 cni.go:84] Creating CNI manager for ""
	I0503 15:28:19.543893   11928 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0503 15:28:19.543908   11928 start.go:340] cluster config:
	{Name:no-preload-315000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:no-preload-315000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0503 15:28:19.548041   11928 iso.go:125] acquiring lock: {Name:mk0ad6b95f1bb51c6112b561369aaae11c537b54 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0503 15:28:19.553596   11928 out.go:177] * Starting "no-preload-315000" primary control-plane node in "no-preload-315000" cluster
	I0503 15:28:19.557580   11928 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0503 15:28:19.557666   11928 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18793-7269/.minikube/profiles/no-preload-315000/config.json ...
	I0503 15:28:19.557727   11928 cache.go:107] acquiring lock: {Name:mke48e50e1b163c1693d62c6d4b46294eaaa0554 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0503 15:28:19.557777   11928 cache.go:107] acquiring lock: {Name:mkcd8c0d2ae47710eb50f4ba3a012be8fb6c6215 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0503 15:28:19.557795   11928 cache.go:107] acquiring lock: {Name:mk4f3cdfbdd5042aff2863105ebef0814ce0bc2d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0503 15:28:19.557801   11928 cache.go:115] /Users/jenkins/minikube-integration/18793-7269/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0503 15:28:19.557762   11928 cache.go:107] acquiring lock: {Name:mka891bb6046612ac161d6844e307f94c3f19486 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0503 15:28:19.557807   11928 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/18793-7269/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 82.459µs
	I0503 15:28:19.557813   11928 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/18793-7269/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0503 15:28:19.557837   11928 cache.go:115] /Users/jenkins/minikube-integration/18793-7269/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.12-0 exists
	I0503 15:28:19.557836   11928 cache.go:107] acquiring lock: {Name:mk59b0ddfab93486e4257ae7d3522e99cb1ecff2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0503 15:28:19.557849   11928 cache.go:115] /Users/jenkins/minikube-integration/18793-7269/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9 exists
	I0503 15:28:19.557849   11928 cache.go:115] /Users/jenkins/minikube-integration/18793-7269/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.30.0 exists
	I0503 15:28:19.557852   11928 cache.go:96] cache image "registry.k8s.io/pause:3.9" -> "/Users/jenkins/minikube-integration/18793-7269/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9" took 76.583µs
	I0503 15:28:19.557847   11928 cache.go:107] acquiring lock: {Name:mk6a3e50b42106c6015a895e82180e6eab836442 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0503 15:28:19.557855   11928 cache.go:80] save to tar file registry.k8s.io/pause:3.9 -> /Users/jenkins/minikube-integration/18793-7269/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9 succeeded
	I0503 15:28:19.557854   11928 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.30.0" -> "/Users/jenkins/minikube-integration/18793-7269/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.30.0" took 112.958µs
	I0503 15:28:19.557859   11928 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.30.0 -> /Users/jenkins/minikube-integration/18793-7269/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.30.0 succeeded
	I0503 15:28:19.557841   11928 cache.go:96] cache image "registry.k8s.io/etcd:3.5.12-0" -> "/Users/jenkins/minikube-integration/18793-7269/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.12-0" took 46.542µs
	I0503 15:28:19.557862   11928 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.12-0 -> /Users/jenkins/minikube-integration/18793-7269/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.12-0 succeeded
	I0503 15:28:19.557862   11928 cache.go:107] acquiring lock: {Name:mk1788d0cc29eaa093d22f1caddd7bdb0a641d03 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0503 15:28:19.557887   11928 cache.go:115] /Users/jenkins/minikube-integration/18793-7269/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1 exists
	I0503 15:28:19.557894   11928 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.11.1" -> "/Users/jenkins/minikube-integration/18793-7269/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1" took 47.958µs
	I0503 15:28:19.557896   11928 cache.go:115] /Users/jenkins/minikube-integration/18793-7269/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.30.0 exists
	I0503 15:28:19.557893   11928 cache.go:115] /Users/jenkins/minikube-integration/18793-7269/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.30.0 exists
	I0503 15:28:19.557897   11928 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.11.1 -> /Users/jenkins/minikube-integration/18793-7269/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1 succeeded
	I0503 15:28:19.557899   11928 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.30.0" -> "/Users/jenkins/minikube-integration/18793-7269/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.30.0" took 37.459µs
	I0503 15:28:19.557902   11928 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.30.0 -> /Users/jenkins/minikube-integration/18793-7269/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.30.0 succeeded
	I0503 15:28:19.557901   11928 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.30.0" -> "/Users/jenkins/minikube-integration/18793-7269/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.30.0" took 93.584µs
	I0503 15:28:19.557906   11928 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.30.0 -> /Users/jenkins/minikube-integration/18793-7269/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.30.0 succeeded
	I0503 15:28:19.557904   11928 cache.go:107] acquiring lock: {Name:mkcb7524d6695a6e0ccb7d40be659cec25f4639d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0503 15:28:19.557989   11928 cache.go:115] /Users/jenkins/minikube-integration/18793-7269/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.30.0 exists
	I0503 15:28:19.557994   11928 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.30.0" -> "/Users/jenkins/minikube-integration/18793-7269/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.30.0" took 138.75µs
	I0503 15:28:19.558000   11928 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.30.0 -> /Users/jenkins/minikube-integration/18793-7269/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.30.0 succeeded
	I0503 15:28:19.558005   11928 cache.go:87] Successfully saved all images to host disk.
	I0503 15:28:19.558062   11928 start.go:360] acquireMachinesLock for no-preload-315000: {Name:mk75a6a65f97ac1ce21c567594b284ec6b0a9ff6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0503 15:28:19.558090   11928 start.go:364] duration metric: took 22.042µs to acquireMachinesLock for "no-preload-315000"
	I0503 15:28:19.558099   11928 start.go:96] Skipping create...Using existing machine configuration
	I0503 15:28:19.558104   11928 fix.go:54] fixHost starting: 
	I0503 15:28:19.558207   11928 fix.go:112] recreateIfNeeded on no-preload-315000: state=Stopped err=<nil>
	W0503 15:28:19.558215   11928 fix.go:138] unexpected machine state, will restart: <nil>
	I0503 15:28:19.565575   11928 out.go:177] * Restarting existing qemu2 VM for "no-preload-315000" ...
	I0503 15:28:19.569506   11928 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/no-preload-315000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18793-7269/.minikube/machines/no-preload-315000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/no-preload-315000/qemu.pid -device virtio-net-pci,netdev=net0,mac=36:e6:d7:20:bc:be -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/no-preload-315000/disk.qcow2
	I0503 15:28:19.571462   11928 main.go:141] libmachine: STDOUT: 
	I0503 15:28:19.571479   11928 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0503 15:28:19.571502   11928 fix.go:56] duration metric: took 13.398ms for fixHost
	I0503 15:28:19.571506   11928 start.go:83] releasing machines lock for "no-preload-315000", held for 13.412542ms
	W0503 15:28:19.571513   11928 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0503 15:28:19.571536   11928 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0503 15:28:19.571540   11928 start.go:728] Will try again in 5 seconds ...
	I0503 15:28:24.573611   11928 start.go:360] acquireMachinesLock for no-preload-315000: {Name:mk75a6a65f97ac1ce21c567594b284ec6b0a9ff6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0503 15:28:24.574052   11928 start.go:364] duration metric: took 344.708µs to acquireMachinesLock for "no-preload-315000"
	I0503 15:28:24.574163   11928 start.go:96] Skipping create...Using existing machine configuration
	I0503 15:28:24.574184   11928 fix.go:54] fixHost starting: 
	I0503 15:28:24.574980   11928 fix.go:112] recreateIfNeeded on no-preload-315000: state=Stopped err=<nil>
	W0503 15:28:24.575005   11928 fix.go:138] unexpected machine state, will restart: <nil>
	I0503 15:28:24.590594   11928 out.go:177] * Restarting existing qemu2 VM for "no-preload-315000" ...
	I0503 15:28:24.594829   11928 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/no-preload-315000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18793-7269/.minikube/machines/no-preload-315000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/no-preload-315000/qemu.pid -device virtio-net-pci,netdev=net0,mac=36:e6:d7:20:bc:be -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/no-preload-315000/disk.qcow2
	I0503 15:28:24.604316   11928 main.go:141] libmachine: STDOUT: 
	I0503 15:28:24.604394   11928 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0503 15:28:24.604499   11928 fix.go:56] duration metric: took 30.319083ms for fixHost
	I0503 15:28:24.604522   11928 start.go:83] releasing machines lock for "no-preload-315000", held for 30.447375ms
	W0503 15:28:24.604791   11928 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p no-preload-315000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p no-preload-315000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0503 15:28:24.612547   11928 out.go:177] 
	W0503 15:28:24.615677   11928 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0503 15:28:24.615725   11928 out.go:239] * 
	* 
	W0503 15:28:24.618182   11928 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0503 15:28:24.625593   11928 out.go:177] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p no-preload-315000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.30.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-315000 -n no-preload-315000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-315000 -n no-preload-315000: exit status 7 (68.252041ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-315000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/SecondStart (5.26s)
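
Every no-preload failure above reduces to the same root cause visible in the stderr log: socket_vmnet_client cannot dial the unix socket at /var/run/socket_vmnet ("Connection refused"), so QEMU never receives its network file descriptor and the VM never boots. The sketch below is a minimal standalone probe of that first dial step; it is not part of the test suite, the file name probe_socket_vmnet.go is invented for illustration, and the socket path is the SocketVMnetPath value from the cluster config logged above.

// probe_socket_vmnet.go: hypothetical diagnostic, not minikube code.
// It attempts the same unix-socket dial that socket_vmnet_client performs
// before handing the connected fd to qemu-system-aarch64.
package main

import (
	"fmt"
	"net"
	"os"
	"time"
)

func main() {
	const path = "/var/run/socket_vmnet" // SocketVMnetPath from the config above
	conn, err := net.DialTimeout("unix", path, 2*time.Second)
	if err != nil {
		// A "connection refused" here reproduces the driver error in this report:
		// the path exists but no socket_vmnet daemon is accepting on it.
		fmt.Fprintf(os.Stderr, "dial %s: %v\n", path, err)
		os.Exit(1)
	}
	conn.Close()
	fmt.Printf("socket_vmnet is accepting connections on %s\n", path)
}

If this probe fails on the CI host, the socket_vmnet daemon (which must run as root to serve /var/run/socket_vmnet) likely needs to be restarted; that would address the whole block of GUEST_PROVISION failures in this group rather than any single test.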

TestStartStop/group/embed-certs/serial/FirstStart (9.97s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p embed-certs-347000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.30.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p embed-certs-347000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.30.0: exit status 80 (9.902057s)

-- stdout --
	* [embed-certs-347000] minikube v1.33.0 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18793
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18793-7269/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18793-7269/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "embed-certs-347000" primary control-plane node in "embed-certs-347000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "embed-certs-347000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0503 15:28:21.746385   11940 out.go:291] Setting OutFile to fd 1 ...
	I0503 15:28:21.746519   11940 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0503 15:28:21.746522   11940 out.go:304] Setting ErrFile to fd 2...
	I0503 15:28:21.746525   11940 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0503 15:28:21.746657   11940 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18793-7269/.minikube/bin
	I0503 15:28:21.747737   11940 out.go:298] Setting JSON to false
	I0503 15:28:21.763922   11940 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":5272,"bootTime":1714770029,"procs":509,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0503 15:28:21.764001   11940 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0503 15:28:21.768276   11940 out.go:177] * [embed-certs-347000] minikube v1.33.0 on Darwin 14.4.1 (arm64)
	I0503 15:28:21.775244   11940 out.go:177]   - MINIKUBE_LOCATION=18793
	I0503 15:28:21.778229   11940 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18793-7269/kubeconfig
	I0503 15:28:21.775296   11940 notify.go:220] Checking for updates...
	I0503 15:28:21.785144   11940 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0503 15:28:21.788257   11940 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0503 15:28:21.791191   11940 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18793-7269/.minikube
	I0503 15:28:21.794178   11940 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0503 15:28:21.797575   11940 config.go:182] Loaded profile config "multinode-952000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0503 15:28:21.797651   11940 config.go:182] Loaded profile config "no-preload-315000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0503 15:28:21.797700   11940 driver.go:392] Setting default libvirt URI to qemu:///system
	I0503 15:28:21.801091   11940 out.go:177] * Using the qemu2 driver based on user configuration
	I0503 15:28:21.808150   11940 start.go:297] selected driver: qemu2
	I0503 15:28:21.808158   11940 start.go:901] validating driver "qemu2" against <nil>
	I0503 15:28:21.808165   11940 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0503 15:28:21.810432   11940 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0503 15:28:21.811711   11940 out.go:177] * Automatically selected the socket_vmnet network
	I0503 15:28:21.814232   11940 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0503 15:28:21.814269   11940 cni.go:84] Creating CNI manager for ""
	I0503 15:28:21.814278   11940 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0503 15:28:21.814282   11940 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0503 15:28:21.814312   11940 start.go:340] cluster config:
	{Name:embed-certs-347000 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:embed-certs-347000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0503 15:28:21.818944   11940 iso.go:125] acquiring lock: {Name:mk0ad6b95f1bb51c6112b561369aaae11c537b54 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0503 15:28:21.826207   11940 out.go:177] * Starting "embed-certs-347000" primary control-plane node in "embed-certs-347000" cluster
	I0503 15:28:21.830145   11940 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0503 15:28:21.830161   11940 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18793-7269/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-arm64.tar.lz4
	I0503 15:28:21.830170   11940 cache.go:56] Caching tarball of preloaded images
	I0503 15:28:21.830234   11940 preload.go:173] Found /Users/jenkins/minikube-integration/18793-7269/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0503 15:28:21.830240   11940 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
	I0503 15:28:21.830302   11940 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18793-7269/.minikube/profiles/embed-certs-347000/config.json ...
	I0503 15:28:21.830316   11940 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18793-7269/.minikube/profiles/embed-certs-347000/config.json: {Name:mke08e1a95fa78fdbe8f54bb5c45af5c367ec872 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0503 15:28:21.830731   11940 start.go:360] acquireMachinesLock for embed-certs-347000: {Name:mk75a6a65f97ac1ce21c567594b284ec6b0a9ff6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0503 15:28:21.830763   11940 start.go:364] duration metric: took 27.208µs to acquireMachinesLock for "embed-certs-347000"
	I0503 15:28:21.830775   11940 start.go:93] Provisioning new machine with config: &{Name:embed-certs-347000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:embed-certs-347000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0503 15:28:21.830802   11940 start.go:125] createHost starting for "" (driver="qemu2")
	I0503 15:28:21.838164   11940 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0503 15:28:21.855771   11940 start.go:159] libmachine.API.Create for "embed-certs-347000" (driver="qemu2")
	I0503 15:28:21.855793   11940 client.go:168] LocalClient.Create starting
	I0503 15:28:21.855854   11940 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18793-7269/.minikube/certs/ca.pem
	I0503 15:28:21.855886   11940 main.go:141] libmachine: Decoding PEM data...
	I0503 15:28:21.855896   11940 main.go:141] libmachine: Parsing certificate...
	I0503 15:28:21.855934   11940 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18793-7269/.minikube/certs/cert.pem
	I0503 15:28:21.855956   11940 main.go:141] libmachine: Decoding PEM data...
	I0503 15:28:21.855976   11940 main.go:141] libmachine: Parsing certificate...
	I0503 15:28:21.856381   11940 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18793-7269/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18793-7269/.minikube/cache/iso/arm64/minikube-v1.33.0-1714498396-18779-arm64.iso...
	I0503 15:28:22.020835   11940 main.go:141] libmachine: Creating SSH key...
	I0503 15:28:22.131814   11940 main.go:141] libmachine: Creating Disk image...
	I0503 15:28:22.131821   11940 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0503 15:28:22.132000   11940 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/embed-certs-347000/disk.qcow2.raw /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/embed-certs-347000/disk.qcow2
	I0503 15:28:22.144494   11940 main.go:141] libmachine: STDOUT: 
	I0503 15:28:22.144515   11940 main.go:141] libmachine: STDERR: 
	I0503 15:28:22.144555   11940 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/embed-certs-347000/disk.qcow2 +20000M
	I0503 15:28:22.155630   11940 main.go:141] libmachine: STDOUT: Image resized.
	
	I0503 15:28:22.155655   11940 main.go:141] libmachine: STDERR: 
	I0503 15:28:22.155670   11940 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/embed-certs-347000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/embed-certs-347000/disk.qcow2
	I0503 15:28:22.155674   11940 main.go:141] libmachine: Starting QEMU VM...
	I0503 15:28:22.155705   11940 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/embed-certs-347000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18793-7269/.minikube/machines/embed-certs-347000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/embed-certs-347000/qemu.pid -device virtio-net-pci,netdev=net0,mac=86:5f:16:93:f5:e9 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/embed-certs-347000/disk.qcow2
	I0503 15:28:22.157508   11940 main.go:141] libmachine: STDOUT: 
	I0503 15:28:22.157525   11940 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0503 15:28:22.157548   11940 client.go:171] duration metric: took 301.7595ms to LocalClient.Create
	I0503 15:28:24.159750   11940 start.go:128] duration metric: took 2.32896375s to createHost
	I0503 15:28:24.159871   11940 start.go:83] releasing machines lock for "embed-certs-347000", held for 2.32914225s
	W0503 15:28:24.159937   11940 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0503 15:28:24.176183   11940 out.go:177] * Deleting "embed-certs-347000" in qemu2 ...
	W0503 15:28:24.202825   11940 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0503 15:28:24.202855   11940 start.go:728] Will try again in 5 seconds ...
	I0503 15:28:29.203943   11940 start.go:360] acquireMachinesLock for embed-certs-347000: {Name:mk75a6a65f97ac1ce21c567594b284ec6b0a9ff6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0503 15:28:29.204351   11940 start.go:364] duration metric: took 310.083µs to acquireMachinesLock for "embed-certs-347000"
	I0503 15:28:29.204479   11940 start.go:93] Provisioning new machine with config: &{Name:embed-certs-347000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:embed-certs-347000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0503 15:28:29.204736   11940 start.go:125] createHost starting for "" (driver="qemu2")
	I0503 15:28:29.214402   11940 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0503 15:28:29.265014   11940 start.go:159] libmachine.API.Create for "embed-certs-347000" (driver="qemu2")
	I0503 15:28:29.265066   11940 client.go:168] LocalClient.Create starting
	I0503 15:28:29.265178   11940 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18793-7269/.minikube/certs/ca.pem
	I0503 15:28:29.265245   11940 main.go:141] libmachine: Decoding PEM data...
	I0503 15:28:29.265265   11940 main.go:141] libmachine: Parsing certificate...
	I0503 15:28:29.265332   11940 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18793-7269/.minikube/certs/cert.pem
	I0503 15:28:29.265376   11940 main.go:141] libmachine: Decoding PEM data...
	I0503 15:28:29.265387   11940 main.go:141] libmachine: Parsing certificate...
	I0503 15:28:29.266219   11940 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18793-7269/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18793-7269/.minikube/cache/iso/arm64/minikube-v1.33.0-1714498396-18779-arm64.iso...
	I0503 15:28:29.427113   11940 main.go:141] libmachine: Creating SSH key...
	I0503 15:28:29.547462   11940 main.go:141] libmachine: Creating Disk image...
	I0503 15:28:29.547469   11940 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0503 15:28:29.547635   11940 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/embed-certs-347000/disk.qcow2.raw /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/embed-certs-347000/disk.qcow2
	I0503 15:28:29.560230   11940 main.go:141] libmachine: STDOUT: 
	I0503 15:28:29.560255   11940 main.go:141] libmachine: STDERR: 
	I0503 15:28:29.560312   11940 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/embed-certs-347000/disk.qcow2 +20000M
	I0503 15:28:29.571346   11940 main.go:141] libmachine: STDOUT: Image resized.
	
	I0503 15:28:29.571366   11940 main.go:141] libmachine: STDERR: 
	I0503 15:28:29.571376   11940 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/embed-certs-347000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/embed-certs-347000/disk.qcow2
	I0503 15:28:29.571382   11940 main.go:141] libmachine: Starting QEMU VM...
	I0503 15:28:29.571423   11940 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/embed-certs-347000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18793-7269/.minikube/machines/embed-certs-347000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/embed-certs-347000/qemu.pid -device virtio-net-pci,netdev=net0,mac=6e:ce:70:9c:d6:72 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/embed-certs-347000/disk.qcow2
	I0503 15:28:29.573073   11940 main.go:141] libmachine: STDOUT: 
	I0503 15:28:29.573092   11940 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0503 15:28:29.573104   11940 client.go:171] duration metric: took 308.043042ms to LocalClient.Create
	I0503 15:28:31.575269   11940 start.go:128] duration metric: took 2.370555958s to createHost
	I0503 15:28:31.575385   11940 start.go:83] releasing machines lock for "embed-certs-347000", held for 2.371079833s
	W0503 15:28:31.575789   11940 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p embed-certs-347000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p embed-certs-347000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0503 15:28:31.585384   11940 out.go:177] 
	W0503 15:28:31.591574   11940 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0503 15:28:31.591633   11940 out.go:239] * 
	* 
	W0503 15:28:31.594271   11940 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0503 15:28:31.602480   11940 out.go:177] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p embed-certs-347000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.30.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-347000 -n embed-certs-347000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-347000 -n embed-certs-347000: exit status 7 (67.608ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-347000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/FirstStart (9.97s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (0.03s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "no-preload-315000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-315000 -n no-preload-315000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-315000 -n no-preload-315000: exit status 7 (33.240041ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-315000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (0.03s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (0.06s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "no-preload-315000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-315000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context no-preload-315000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (26.888333ms)

** stderr ** 
	error: context "no-preload-315000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context no-preload-315000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-315000 -n no-preload-315000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-315000 -n no-preload-315000: exit status 7 (31.941625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-315000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (0.06s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.07s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p no-preload-315000 image list --format=json
start_stop_delete_test.go:304: v1.30.0 images missing (-want +got):
  []string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.1",
- 	"registry.k8s.io/etcd:3.5.12-0",
- 	"registry.k8s.io/kube-apiserver:v1.30.0",
- 	"registry.k8s.io/kube-controller-manager:v1.30.0",
- 	"registry.k8s.io/kube-proxy:v1.30.0",
- 	"registry.k8s.io/kube-scheduler:v1.30.0",
- 	"registry.k8s.io/pause:3.9",
  }
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-315000 -n no-preload-315000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-315000 -n no-preload-315000: exit status 7 (30.036ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-315000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.07s)
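
The want/got block in VerifyKubernetesImages above is go-cmp style diff output: each "-"-prefixed line is an expected image absent from the result, and the got side is empty because "image list" ran against a host that never started. A minimal sketch of that comparison style, assuming the github.com/google/go-cmp package rather than the test's exact code:

package main

import (
	"fmt"

	"github.com/google/go-cmp/cmp"
)

func main() {
	// want mirrors two entries from the expected image list in the report;
	// got is empty, as it would be when the VM never came up.
	want := []string{
		"registry.k8s.io/kube-apiserver:v1.30.0",
		"registry.k8s.io/pause:3.9",
	}
	var got []string
	// cmp.Diff renders want-only entries with a leading "-", matching the report.
	fmt.Println(cmp.Diff(want, got))
}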

TestStartStop/group/no-preload/serial/Pause (0.1s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p no-preload-315000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p no-preload-315000 --alsologtostderr -v=1: exit status 83 (42.450416ms)

-- stdout --
	* The control-plane node no-preload-315000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p no-preload-315000"

-- /stdout --
** stderr ** 
	I0503 15:28:24.902191   11966 out.go:291] Setting OutFile to fd 1 ...
	I0503 15:28:24.902320   11966 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0503 15:28:24.902323   11966 out.go:304] Setting ErrFile to fd 2...
	I0503 15:28:24.902326   11966 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0503 15:28:24.902455   11966 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18793-7269/.minikube/bin
	I0503 15:28:24.902652   11966 out.go:298] Setting JSON to false
	I0503 15:28:24.902661   11966 mustload.go:65] Loading cluster: no-preload-315000
	I0503 15:28:24.902845   11966 config.go:182] Loaded profile config "no-preload-315000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0503 15:28:24.907127   11966 out.go:177] * The control-plane node no-preload-315000 host is not running: state=Stopped
	I0503 15:28:24.911000   11966 out.go:177]   To start a cluster, run: "minikube start -p no-preload-315000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p no-preload-315000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-315000 -n no-preload-315000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-315000 -n no-preload-315000: exit status 7 (30.258041ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-315000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-315000 -n no-preload-315000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-315000 -n no-preload-315000: exit status 7 (30.917583ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-315000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/Pause (0.10s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (9.83s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p default-k8s-diff-port-349000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.30.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p default-k8s-diff-port-349000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.30.0: exit status 80 (9.76430875s)

-- stdout --
	* [default-k8s-diff-port-349000] minikube v1.33.0 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18793
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18793-7269/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18793-7269/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "default-k8s-diff-port-349000" primary control-plane node in "default-k8s-diff-port-349000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "default-k8s-diff-port-349000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0503 15:28:25.598135   12001 out.go:291] Setting OutFile to fd 1 ...
	I0503 15:28:25.598282   12001 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0503 15:28:25.598286   12001 out.go:304] Setting ErrFile to fd 2...
	I0503 15:28:25.598288   12001 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0503 15:28:25.598437   12001 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18793-7269/.minikube/bin
	I0503 15:28:25.599516   12001 out.go:298] Setting JSON to false
	I0503 15:28:25.615712   12001 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":5276,"bootTime":1714770029,"procs":506,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0503 15:28:25.615772   12001 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0503 15:28:25.620377   12001 out.go:177] * [default-k8s-diff-port-349000] minikube v1.33.0 on Darwin 14.4.1 (arm64)
	I0503 15:28:25.627309   12001 out.go:177]   - MINIKUBE_LOCATION=18793
	I0503 15:28:25.627356   12001 notify.go:220] Checking for updates...
	I0503 15:28:25.631320   12001 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18793-7269/kubeconfig
	I0503 15:28:25.634229   12001 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0503 15:28:25.637257   12001 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0503 15:28:25.640266   12001 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18793-7269/.minikube
	I0503 15:28:25.643244   12001 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0503 15:28:25.646683   12001 config.go:182] Loaded profile config "embed-certs-347000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0503 15:28:25.646742   12001 config.go:182] Loaded profile config "multinode-952000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0503 15:28:25.646787   12001 driver.go:392] Setting default libvirt URI to qemu:///system
	I0503 15:28:25.651297   12001 out.go:177] * Using the qemu2 driver based on user configuration
	I0503 15:28:25.658292   12001 start.go:297] selected driver: qemu2
	I0503 15:28:25.658300   12001 start.go:901] validating driver "qemu2" against <nil>
	I0503 15:28:25.658307   12001 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0503 15:28:25.660660   12001 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0503 15:28:25.664284   12001 out.go:177] * Automatically selected the socket_vmnet network
	I0503 15:28:25.667493   12001 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0503 15:28:25.667533   12001 cni.go:84] Creating CNI manager for ""
	I0503 15:28:25.667542   12001 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0503 15:28:25.667546   12001 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0503 15:28:25.667582   12001 start.go:340] cluster config:
	{Name:default-k8s-diff-port-349000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:default-k8s-diff-port-349000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0503 15:28:25.672096   12001 iso.go:125] acquiring lock: {Name:mk0ad6b95f1bb51c6112b561369aaae11c537b54 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0503 15:28:25.679076   12001 out.go:177] * Starting "default-k8s-diff-port-349000" primary control-plane node in "default-k8s-diff-port-349000" cluster
	I0503 15:28:25.683268   12001 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0503 15:28:25.683286   12001 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18793-7269/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-arm64.tar.lz4
	I0503 15:28:25.683302   12001 cache.go:56] Caching tarball of preloaded images
	I0503 15:28:25.683375   12001 preload.go:173] Found /Users/jenkins/minikube-integration/18793-7269/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0503 15:28:25.683380   12001 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
	I0503 15:28:25.683428   12001 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18793-7269/.minikube/profiles/default-k8s-diff-port-349000/config.json ...
	I0503 15:28:25.683439   12001 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18793-7269/.minikube/profiles/default-k8s-diff-port-349000/config.json: {Name:mk154d4d100ee85b35a2c42c5f36cf6506e7dbf3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0503 15:28:25.683652   12001 start.go:360] acquireMachinesLock for default-k8s-diff-port-349000: {Name:mk75a6a65f97ac1ce21c567594b284ec6b0a9ff6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0503 15:28:25.683703   12001 start.go:364] duration metric: took 28.25µs to acquireMachinesLock for "default-k8s-diff-port-349000"
	I0503 15:28:25.683714   12001 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-349000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:default-k8s-diff-port-349000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0503 15:28:25.683752   12001 start.go:125] createHost starting for "" (driver="qemu2")
	I0503 15:28:25.691240   12001 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0503 15:28:25.707966   12001 start.go:159] libmachine.API.Create for "default-k8s-diff-port-349000" (driver="qemu2")
	I0503 15:28:25.707993   12001 client.go:168] LocalClient.Create starting
	I0503 15:28:25.708061   12001 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18793-7269/.minikube/certs/ca.pem
	I0503 15:28:25.708097   12001 main.go:141] libmachine: Decoding PEM data...
	I0503 15:28:25.708105   12001 main.go:141] libmachine: Parsing certificate...
	I0503 15:28:25.708139   12001 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18793-7269/.minikube/certs/cert.pem
	I0503 15:28:25.708164   12001 main.go:141] libmachine: Decoding PEM data...
	I0503 15:28:25.708172   12001 main.go:141] libmachine: Parsing certificate...
	I0503 15:28:25.708519   12001 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18793-7269/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18793-7269/.minikube/cache/iso/arm64/minikube-v1.33.0-1714498396-18779-arm64.iso...
	I0503 15:28:25.852799   12001 main.go:141] libmachine: Creating SSH key...
	I0503 15:28:25.936596   12001 main.go:141] libmachine: Creating Disk image...
	I0503 15:28:25.936601   12001 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0503 15:28:25.936757   12001 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/default-k8s-diff-port-349000/disk.qcow2.raw /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/default-k8s-diff-port-349000/disk.qcow2
	I0503 15:28:25.949411   12001 main.go:141] libmachine: STDOUT: 
	I0503 15:28:25.949432   12001 main.go:141] libmachine: STDERR: 
	I0503 15:28:25.949484   12001 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/default-k8s-diff-port-349000/disk.qcow2 +20000M
	I0503 15:28:25.960294   12001 main.go:141] libmachine: STDOUT: Image resized.
	
	I0503 15:28:25.960308   12001 main.go:141] libmachine: STDERR: 
	I0503 15:28:25.960326   12001 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/default-k8s-diff-port-349000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/default-k8s-diff-port-349000/disk.qcow2
	I0503 15:28:25.960331   12001 main.go:141] libmachine: Starting QEMU VM...
	I0503 15:28:25.960359   12001 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/default-k8s-diff-port-349000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18793-7269/.minikube/machines/default-k8s-diff-port-349000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/default-k8s-diff-port-349000/qemu.pid -device virtio-net-pci,netdev=net0,mac=4a:cb:8d:21:90:2e -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/default-k8s-diff-port-349000/disk.qcow2
	I0503 15:28:25.962029   12001 main.go:141] libmachine: STDOUT: 
	I0503 15:28:25.962042   12001 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0503 15:28:25.962058   12001 client.go:171] duration metric: took 254.0685ms to LocalClient.Create
	I0503 15:28:27.964176   12001 start.go:128] duration metric: took 2.280471042s to createHost
	I0503 15:28:27.964232   12001 start.go:83] releasing machines lock for "default-k8s-diff-port-349000", held for 2.280586625s
	W0503 15:28:27.964315   12001 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0503 15:28:27.971539   12001 out.go:177] * Deleting "default-k8s-diff-port-349000" in qemu2 ...
	W0503 15:28:28.003835   12001 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0503 15:28:28.003869   12001 start.go:728] Will try again in 5 seconds ...
	I0503 15:28:33.005995   12001 start.go:360] acquireMachinesLock for default-k8s-diff-port-349000: {Name:mk75a6a65f97ac1ce21c567594b284ec6b0a9ff6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0503 15:28:33.006500   12001 start.go:364] duration metric: took 360.959µs to acquireMachinesLock for "default-k8s-diff-port-349000"
	I0503 15:28:33.006656   12001 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-349000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:default-k8s-diff-port-349000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0503 15:28:33.007273   12001 start.go:125] createHost starting for "" (driver="qemu2")
	I0503 15:28:33.016164   12001 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0503 15:28:33.065566   12001 start.go:159] libmachine.API.Create for "default-k8s-diff-port-349000" (driver="qemu2")
	I0503 15:28:33.065612   12001 client.go:168] LocalClient.Create starting
	I0503 15:28:33.065693   12001 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18793-7269/.minikube/certs/ca.pem
	I0503 15:28:33.065737   12001 main.go:141] libmachine: Decoding PEM data...
	I0503 15:28:33.065750   12001 main.go:141] libmachine: Parsing certificate...
	I0503 15:28:33.065807   12001 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18793-7269/.minikube/certs/cert.pem
	I0503 15:28:33.065835   12001 main.go:141] libmachine: Decoding PEM data...
	I0503 15:28:33.065850   12001 main.go:141] libmachine: Parsing certificate...
	I0503 15:28:33.066490   12001 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18793-7269/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18793-7269/.minikube/cache/iso/arm64/minikube-v1.33.0-1714498396-18779-arm64.iso...
	I0503 15:28:33.220061   12001 main.go:141] libmachine: Creating SSH key...
	I0503 15:28:33.248936   12001 main.go:141] libmachine: Creating Disk image...
	I0503 15:28:33.248940   12001 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0503 15:28:33.249095   12001 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/default-k8s-diff-port-349000/disk.qcow2.raw /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/default-k8s-diff-port-349000/disk.qcow2
	I0503 15:28:33.261821   12001 main.go:141] libmachine: STDOUT: 
	I0503 15:28:33.261846   12001 main.go:141] libmachine: STDERR: 
	I0503 15:28:33.261905   12001 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/default-k8s-diff-port-349000/disk.qcow2 +20000M
	I0503 15:28:33.273025   12001 main.go:141] libmachine: STDOUT: Image resized.
	
	I0503 15:28:33.273042   12001 main.go:141] libmachine: STDERR: 
	I0503 15:28:33.273059   12001 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/default-k8s-diff-port-349000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/default-k8s-diff-port-349000/disk.qcow2
	I0503 15:28:33.273064   12001 main.go:141] libmachine: Starting QEMU VM...
	I0503 15:28:33.273094   12001 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/default-k8s-diff-port-349000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18793-7269/.minikube/machines/default-k8s-diff-port-349000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/default-k8s-diff-port-349000/qemu.pid -device virtio-net-pci,netdev=net0,mac=02:c8:80:c1:24:49 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/default-k8s-diff-port-349000/disk.qcow2
	I0503 15:28:33.274744   12001 main.go:141] libmachine: STDOUT: 
	I0503 15:28:33.274761   12001 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0503 15:28:33.274779   12001 client.go:171] duration metric: took 209.168042ms to LocalClient.Create
	I0503 15:28:35.276926   12001 start.go:128] duration metric: took 2.269685125s to createHost
	I0503 15:28:35.276989   12001 start.go:83] releasing machines lock for "default-k8s-diff-port-349000", held for 2.27053125s
	W0503 15:28:35.277313   12001 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-349000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-349000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0503 15:28:35.290958   12001 out.go:177] 
	W0503 15:28:35.298085   12001 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0503 15:28:35.298108   12001 out.go:239] * 
	* 
	W0503 15:28:35.300558   12001 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0503 15:28:35.312876   12001 out.go:177] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p default-k8s-diff-port-349000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.30.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-349000 -n default-k8s-diff-port-349000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-349000 -n default-k8s-diff-port-349000: exit status 7 (67.721917ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-349000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (9.83s)
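Every failure in this run funnels into the same line: ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused. The qemu2 driver hands VM networking to the socket_vmnet daemon, and nothing is listening on that socket on this agent, so host creation can never succeed. A minimal recovery sketch, assuming the standard socket_vmnet layout shown in the log (the launch flags follow socket_vmnet's documented usage, and the gateway address is an assumption, not taken from this log):

	# confirm the daemon socket exists and something is serving it
	ls -l /var/run/socket_vmnet
	# start the daemon by hand; on CI agents it normally runs as a launchd service
	sudo /opt/socket_vmnet/bin/socket_vmnet --vmnet-gateway=192.168.105.1 /var/run/socket_vmnet

With the daemon up, the same start command should get past the two "Creating qemu2 VM" attempts.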

TestStartStop/group/embed-certs/serial/DeployApp (0.09s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-347000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context embed-certs-347000 create -f testdata/busybox.yaml: exit status 1 (29.007917ms)

** stderr ** 
	error: context "embed-certs-347000" does not exist

** /stderr **
start_stop_delete_test.go:196: kubectl --context embed-certs-347000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-347000 -n embed-certs-347000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-347000 -n embed-certs-347000: exit status 7 (31.181417ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-347000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-347000 -n embed-certs-347000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-347000 -n embed-certs-347000: exit status 7 (31.082334ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-347000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/DeployApp (0.09s)
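The kubectl error here is pure fallout from the earlier start failure: because the cluster never came up, minikube never wrote an embed-certs-347000 context into the kubeconfig, so every --context invocation fails before reaching any API server. Two standard kubectl checks (not taken from this log) make that visible:

	kubectl config get-contexts
	kubectl config current-context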

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.12s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p embed-certs-347000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-347000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context embed-certs-347000 describe deploy/metrics-server -n kube-system: exit status 1 (26.593ms)

** stderr ** 
	error: context "embed-certs-347000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context embed-certs-347000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-347000 -n embed-certs-347000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-347000 -n embed-certs-347000: exit status 7 (39.602167ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-347000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.12s)
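Worth noting: the addons enable step itself exits cleanly; only the verification fails, since there is no apiserver to describe the deployment against. The expectation string shows what the --images/--registries flags do: they rewrite the metrics-server image to fake.domain/registry.k8s.io/echoserver:1.4. On a healthy cluster, one hedged way to inspect the rewritten image would be:

	kubectl --context embed-certs-347000 -n kube-system get deploy metrics-server -o jsonpath='{.spec.template.spec.containers[0].image}'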

TestStartStop/group/embed-certs/serial/SecondStart (6.64s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p embed-certs-347000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.30.0
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p embed-certs-347000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.30.0: exit status 80 (6.569441s)

-- stdout --
	* [embed-certs-347000] minikube v1.33.0 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18793
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18793-7269/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18793-7269/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "embed-certs-347000" primary control-plane node in "embed-certs-347000" cluster
	* Restarting existing qemu2 VM for "embed-certs-347000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "embed-certs-347000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0503 15:28:33.834762   12051 out.go:291] Setting OutFile to fd 1 ...
	I0503 15:28:33.834890   12051 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0503 15:28:33.834894   12051 out.go:304] Setting ErrFile to fd 2...
	I0503 15:28:33.834896   12051 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0503 15:28:33.835043   12051 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18793-7269/.minikube/bin
	I0503 15:28:33.836034   12051 out.go:298] Setting JSON to false
	I0503 15:28:33.851937   12051 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":5284,"bootTime":1714770029,"procs":507,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0503 15:28:33.852005   12051 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0503 15:28:33.857223   12051 out.go:177] * [embed-certs-347000] minikube v1.33.0 on Darwin 14.4.1 (arm64)
	I0503 15:28:33.864242   12051 out.go:177]   - MINIKUBE_LOCATION=18793
	I0503 15:28:33.864286   12051 notify.go:220] Checking for updates...
	I0503 15:28:33.871153   12051 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18793-7269/kubeconfig
	I0503 15:28:33.874198   12051 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0503 15:28:33.877216   12051 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0503 15:28:33.880180   12051 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18793-7269/.minikube
	I0503 15:28:33.883164   12051 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0503 15:28:33.886427   12051 config.go:182] Loaded profile config "embed-certs-347000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0503 15:28:33.886695   12051 driver.go:392] Setting default libvirt URI to qemu:///system
	I0503 15:28:33.894132   12051 out.go:177] * Using the qemu2 driver based on existing profile
	I0503 15:28:33.901108   12051 start.go:297] selected driver: qemu2
	I0503 15:28:33.901114   12051 start.go:901] validating driver "qemu2" against &{Name:embed-certs-347000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:embed-certs-347000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0503 15:28:33.901162   12051 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0503 15:28:33.903611   12051 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0503 15:28:33.903690   12051 cni.go:84] Creating CNI manager for ""
	I0503 15:28:33.903698   12051 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0503 15:28:33.903716   12051 start.go:340] cluster config:
	{Name:embed-certs-347000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:embed-certs-347000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0503 15:28:33.908234   12051 iso.go:125] acquiring lock: {Name:mk0ad6b95f1bb51c6112b561369aaae11c537b54 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0503 15:28:33.916145   12051 out.go:177] * Starting "embed-certs-347000" primary control-plane node in "embed-certs-347000" cluster
	I0503 15:28:33.920208   12051 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0503 15:28:33.920230   12051 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18793-7269/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-arm64.tar.lz4
	I0503 15:28:33.920236   12051 cache.go:56] Caching tarball of preloaded images
	I0503 15:28:33.920309   12051 preload.go:173] Found /Users/jenkins/minikube-integration/18793-7269/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0503 15:28:33.920314   12051 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
	I0503 15:28:33.920373   12051 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18793-7269/.minikube/profiles/embed-certs-347000/config.json ...
	I0503 15:28:33.920888   12051 start.go:360] acquireMachinesLock for embed-certs-347000: {Name:mk75a6a65f97ac1ce21c567594b284ec6b0a9ff6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0503 15:28:35.277182   12051 start.go:364] duration metric: took 1.356308208s to acquireMachinesLock for "embed-certs-347000"
	I0503 15:28:35.277348   12051 start.go:96] Skipping create...Using existing machine configuration
	I0503 15:28:35.277388   12051 fix.go:54] fixHost starting: 
	I0503 15:28:35.278128   12051 fix.go:112] recreateIfNeeded on embed-certs-347000: state=Stopped err=<nil>
	W0503 15:28:35.278180   12051 fix.go:138] unexpected machine state, will restart: <nil>
	I0503 15:28:35.294991   12051 out.go:177] * Restarting existing qemu2 VM for "embed-certs-347000" ...
	I0503 15:28:35.302203   12051 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/embed-certs-347000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18793-7269/.minikube/machines/embed-certs-347000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/embed-certs-347000/qemu.pid -device virtio-net-pci,netdev=net0,mac=6e:ce:70:9c:d6:72 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/embed-certs-347000/disk.qcow2
	I0503 15:28:35.312527   12051 main.go:141] libmachine: STDOUT: 
	I0503 15:28:35.312632   12051 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0503 15:28:35.312791   12051 fix.go:56] duration metric: took 35.3945ms for fixHost
	I0503 15:28:35.312821   12051 start.go:83] releasing machines lock for "embed-certs-347000", held for 35.604208ms
	W0503 15:28:35.312886   12051 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0503 15:28:35.313067   12051 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0503 15:28:35.313082   12051 start.go:728] Will try again in 5 seconds ...
	I0503 15:28:40.315187   12051 start.go:360] acquireMachinesLock for embed-certs-347000: {Name:mk75a6a65f97ac1ce21c567594b284ec6b0a9ff6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0503 15:28:40.315781   12051 start.go:364] duration metric: took 449.791µs to acquireMachinesLock for "embed-certs-347000"
	I0503 15:28:40.315987   12051 start.go:96] Skipping create...Using existing machine configuration
	I0503 15:28:40.316014   12051 fix.go:54] fixHost starting: 
	I0503 15:28:40.316825   12051 fix.go:112] recreateIfNeeded on embed-certs-347000: state=Stopped err=<nil>
	W0503 15:28:40.316853   12051 fix.go:138] unexpected machine state, will restart: <nil>
	I0503 15:28:40.322574   12051 out.go:177] * Restarting existing qemu2 VM for "embed-certs-347000" ...
	I0503 15:28:40.327423   12051 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/embed-certs-347000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18793-7269/.minikube/machines/embed-certs-347000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/embed-certs-347000/qemu.pid -device virtio-net-pci,netdev=net0,mac=6e:ce:70:9c:d6:72 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/embed-certs-347000/disk.qcow2
	I0503 15:28:40.336849   12051 main.go:141] libmachine: STDOUT: 
	I0503 15:28:40.336927   12051 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0503 15:28:40.337031   12051 fix.go:56] duration metric: took 21.022ms for fixHost
	I0503 15:28:40.337059   12051 start.go:83] releasing machines lock for "embed-certs-347000", held for 21.193459ms
	W0503 15:28:40.337316   12051 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p embed-certs-347000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p embed-certs-347000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0503 15:28:40.345383   12051 out.go:177] 
	W0503 15:28:40.348408   12051 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0503 15:28:40.348458   12051 out.go:239] * 
	* 
	W0503 15:28:40.351033   12051 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0503 15:28:40.360473   12051 out.go:177] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p embed-certs-347000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.30.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-347000 -n embed-certs-347000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-347000 -n embed-certs-347000: exit status 7 (68.634458ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-347000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/SecondStart (6.64s)
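SecondStart differs from FirstStart only in reusing the existing machine (fixHost, "Skipping create...Using existing machine configuration") instead of building a new disk image, but both paths end at the same exec: socket_vmnet_client must connect to /var/run/socket_vmnet and pass the connected socket to qemu as -netdev socket,id=net0,fd=3. Since the client cannot connect at all, qemu never launches, which is why the profile stays Stopped. The general shape of the invocation, abbreviated from the full command in the log:

	/opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 <qemu args...>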

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (0.09s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-349000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-349000 create -f testdata/busybox.yaml: exit status 1 (29.191125ms)

** stderr ** 
	error: context "default-k8s-diff-port-349000" does not exist

** /stderr **
start_stop_delete_test.go:196: kubectl --context default-k8s-diff-port-349000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-349000 -n default-k8s-diff-port-349000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-349000 -n default-k8s-diff-port-349000: exit status 7 (30.698375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-349000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-349000 -n default-k8s-diff-port-349000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-349000 -n default-k8s-diff-port-349000: exit status 7 (30.638333ms)

-- stdout --
	Stopped

                                                
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-349000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (0.09s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.12s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p default-k8s-diff-port-349000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-349000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-349000 describe deploy/metrics-server -n kube-system: exit status 1 (26.760417ms)

** stderr ** 
	error: context "default-k8s-diff-port-349000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context default-k8s-diff-port-349000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-349000 -n default-k8s-diff-port-349000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-349000 -n default-k8s-diff-port-349000: exit status 7 (30.9675ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-349000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.12s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (5.26s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p default-k8s-diff-port-349000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.30.0
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p default-k8s-diff-port-349000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.30.0: exit status 80 (5.189823666s)

-- stdout --
	* [default-k8s-diff-port-349000] minikube v1.33.0 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18793
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18793-7269/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18793-7269/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "default-k8s-diff-port-349000" primary control-plane node in "default-k8s-diff-port-349000" cluster
	* Restarting existing qemu2 VM for "default-k8s-diff-port-349000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "default-k8s-diff-port-349000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0503 15:28:38.785908   12100 out.go:291] Setting OutFile to fd 1 ...
	I0503 15:28:38.786031   12100 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0503 15:28:38.786033   12100 out.go:304] Setting ErrFile to fd 2...
	I0503 15:28:38.786035   12100 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0503 15:28:38.786163   12100 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18793-7269/.minikube/bin
	I0503 15:28:38.787197   12100 out.go:298] Setting JSON to false
	I0503 15:28:38.803057   12100 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":5289,"bootTime":1714770029,"procs":507,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0503 15:28:38.803118   12100 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0503 15:28:38.806994   12100 out.go:177] * [default-k8s-diff-port-349000] minikube v1.33.0 on Darwin 14.4.1 (arm64)
	I0503 15:28:38.813997   12100 out.go:177]   - MINIKUBE_LOCATION=18793
	I0503 15:28:38.814074   12100 notify.go:220] Checking for updates...
	I0503 15:28:38.816930   12100 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18793-7269/kubeconfig
	I0503 15:28:38.820922   12100 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0503 15:28:38.824027   12100 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0503 15:28:38.827013   12100 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18793-7269/.minikube
	I0503 15:28:38.830044   12100 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0503 15:28:38.833301   12100 config.go:182] Loaded profile config "default-k8s-diff-port-349000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0503 15:28:38.833571   12100 driver.go:392] Setting default libvirt URI to qemu:///system
	I0503 15:28:38.837953   12100 out.go:177] * Using the qemu2 driver based on existing profile
	I0503 15:28:38.845038   12100 start.go:297] selected driver: qemu2
	I0503 15:28:38.845045   12100 start.go:901] validating driver "qemu2" against &{Name:default-k8s-diff-port-349000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:default-k8s-diff-port-349000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0503 15:28:38.845098   12100 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0503 15:28:38.847332   12100 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0503 15:28:38.847370   12100 cni.go:84] Creating CNI manager for ""
	I0503 15:28:38.847378   12100 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0503 15:28:38.847405   12100 start.go:340] cluster config:
	{Name:default-k8s-diff-port-349000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:default-k8s-diff-port-349000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0503 15:28:38.851577   12100 iso.go:125] acquiring lock: {Name:mk0ad6b95f1bb51c6112b561369aaae11c537b54 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0503 15:28:38.860029   12100 out.go:177] * Starting "default-k8s-diff-port-349000" primary control-plane node in "default-k8s-diff-port-349000" cluster
	I0503 15:28:38.863838   12100 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0503 15:28:38.863850   12100 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18793-7269/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-arm64.tar.lz4
	I0503 15:28:38.863861   12100 cache.go:56] Caching tarball of preloaded images
	I0503 15:28:38.863910   12100 preload.go:173] Found /Users/jenkins/minikube-integration/18793-7269/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0503 15:28:38.863916   12100 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
	I0503 15:28:38.863967   12100 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18793-7269/.minikube/profiles/default-k8s-diff-port-349000/config.json ...
	I0503 15:28:38.864504   12100 start.go:360] acquireMachinesLock for default-k8s-diff-port-349000: {Name:mk75a6a65f97ac1ce21c567594b284ec6b0a9ff6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0503 15:28:38.864534   12100 start.go:364] duration metric: took 24.458µs to acquireMachinesLock for "default-k8s-diff-port-349000"
	I0503 15:28:38.864544   12100 start.go:96] Skipping create...Using existing machine configuration
	I0503 15:28:38.864552   12100 fix.go:54] fixHost starting: 
	I0503 15:28:38.864662   12100 fix.go:112] recreateIfNeeded on default-k8s-diff-port-349000: state=Stopped err=<nil>
	W0503 15:28:38.864670   12100 fix.go:138] unexpected machine state, will restart: <nil>
	I0503 15:28:38.868933   12100 out.go:177] * Restarting existing qemu2 VM for "default-k8s-diff-port-349000" ...
	I0503 15:28:38.876959   12100 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/default-k8s-diff-port-349000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18793-7269/.minikube/machines/default-k8s-diff-port-349000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/default-k8s-diff-port-349000/qemu.pid -device virtio-net-pci,netdev=net0,mac=02:c8:80:c1:24:49 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/default-k8s-diff-port-349000/disk.qcow2
	I0503 15:28:38.878891   12100 main.go:141] libmachine: STDOUT: 
	I0503 15:28:38.878910   12100 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0503 15:28:38.878941   12100 fix.go:56] duration metric: took 14.389458ms for fixHost
	I0503 15:28:38.878946   12100 start.go:83] releasing machines lock for "default-k8s-diff-port-349000", held for 14.407666ms
	W0503 15:28:38.878952   12100 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0503 15:28:38.878992   12100 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0503 15:28:38.878997   12100 start.go:728] Will try again in 5 seconds ...
	I0503 15:28:43.881044   12100 start.go:360] acquireMachinesLock for default-k8s-diff-port-349000: {Name:mk75a6a65f97ac1ce21c567594b284ec6b0a9ff6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0503 15:28:43.881527   12100 start.go:364] duration metric: took 367.583µs to acquireMachinesLock for "default-k8s-diff-port-349000"
	I0503 15:28:43.881641   12100 start.go:96] Skipping create...Using existing machine configuration
	I0503 15:28:43.881663   12100 fix.go:54] fixHost starting: 
	I0503 15:28:43.882493   12100 fix.go:112] recreateIfNeeded on default-k8s-diff-port-349000: state=Stopped err=<nil>
	W0503 15:28:43.882520   12100 fix.go:138] unexpected machine state, will restart: <nil>
	I0503 15:28:43.897099   12100 out.go:177] * Restarting existing qemu2 VM for "default-k8s-diff-port-349000" ...
	I0503 15:28:43.901179   12100 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/default-k8s-diff-port-349000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18793-7269/.minikube/machines/default-k8s-diff-port-349000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/default-k8s-diff-port-349000/qemu.pid -device virtio-net-pci,netdev=net0,mac=02:c8:80:c1:24:49 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/default-k8s-diff-port-349000/disk.qcow2
	I0503 15:28:43.909233   12100 main.go:141] libmachine: STDOUT: 
	I0503 15:28:43.909296   12100 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0503 15:28:43.909354   12100 fix.go:56] duration metric: took 27.696708ms for fixHost
	I0503 15:28:43.909374   12100 start.go:83] releasing machines lock for "default-k8s-diff-port-349000", held for 27.825584ms
	W0503 15:28:43.909582   12100 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-349000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-349000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0503 15:28:43.917931   12100 out.go:177] 
	W0503 15:28:43.921039   12100 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0503 15:28:43.921064   12100 out.go:239] * 
	* 
	W0503 15:28:43.922382   12100 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0503 15:28:43.931718   12100 out.go:177] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p default-k8s-diff-port-349000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.30.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-349000 -n default-k8s-diff-port-349000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-349000 -n default-k8s-diff-port-349000: exit status 7 (72.12175ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-349000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (5.26s)
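
Root cause for this failure (and the other qemu2 start failures in this run): the qemu2 driver launches QEMU through socket_vmnet_client, and the connection to the socket_vmnet daemon at /var/run/socket_vmnet is refused, so the VM never gets a network interface and the start aborts. A minimal triage sketch, assuming the Homebrew/lima-vm layout shown in the log (the manual launch command and gateway address are assumptions; the CI host may manage the daemon differently):

    # Is the daemon's unix socket present, and is any socket_vmnet process alive?
    ls -l /var/run/socket_vmnet
    pgrep -fl socket_vmnet          # no output => nothing can accept the connection

    # Start the daemon by hand (vmnet requires root; gateway address is an example):
    sudo /opt/socket_vmnet/bin/socket_vmnet --vmnet-gateway=192.168.105.1 /var/run/socket_vmnet &

    # Then retry the failing start:
    out/minikube-darwin-arm64 start -p default-k8s-diff-port-349000 --driver=qemu2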

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (0.03s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "embed-certs-347000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-347000 -n embed-certs-347000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-347000 -n embed-certs-347000: exit status 7 (32.797208ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-347000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (0.03s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (0.06s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "embed-certs-347000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-347000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context embed-certs-347000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (26.4045ms)

** stderr ** 
	error: context "embed-certs-347000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context embed-certs-347000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-347000 -n embed-certs-347000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-347000 -n embed-certs-347000: exit status 7 (30.923208ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-347000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (0.06s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.07s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p embed-certs-347000 image list --format=json
start_stop_delete_test.go:304: v1.30.0 images missing (-want +got):
  []string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.1",
- 	"registry.k8s.io/etcd:3.5.12-0",
- 	"registry.k8s.io/kube-apiserver:v1.30.0",
- 	"registry.k8s.io/kube-controller-manager:v1.30.0",
- 	"registry.k8s.io/kube-proxy:v1.30.0",
- 	"registry.k8s.io/kube-scheduler:v1.30.0",
- 	"registry.k8s.io/pause:3.9",
  }
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-347000 -n embed-certs-347000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-347000 -n embed-certs-347000: exit status 7 (31.083958ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-347000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.07s)
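
The block above is a go-cmp style diff: the "-" (want) column lists every image the test expects a v1.30.0 cluster to carry, and the "+" (got) side is empty because "image list" has nothing to report from a VM that never booted. A hedged manual spot-check (jq and the repoTags field name are assumptions about the JSON output shape):

    out/minikube-darwin-arm64 -p embed-certs-347000 image list --format=json | jq -r '.[].repoTags[]'
    # A healthy cluster prints the eight expected images (kube-apiserver, etcd, coredns, ...);
    # here it prints nothing, which is exactly what the empty "+got" side reflects.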

TestStartStop/group/embed-certs/serial/Pause (0.1s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p embed-certs-347000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p embed-certs-347000 --alsologtostderr -v=1: exit status 83 (42.432375ms)

-- stdout --
	* The control-plane node embed-certs-347000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p embed-certs-347000"

-- /stdout --
** stderr ** 
	I0503 15:28:40.636474   12124 out.go:291] Setting OutFile to fd 1 ...
	I0503 15:28:40.636630   12124 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0503 15:28:40.636633   12124 out.go:304] Setting ErrFile to fd 2...
	I0503 15:28:40.636636   12124 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0503 15:28:40.636748   12124 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18793-7269/.minikube/bin
	I0503 15:28:40.636950   12124 out.go:298] Setting JSON to false
	I0503 15:28:40.636958   12124 mustload.go:65] Loading cluster: embed-certs-347000
	I0503 15:28:40.637131   12124 config.go:182] Loaded profile config "embed-certs-347000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0503 15:28:40.640878   12124 out.go:177] * The control-plane node embed-certs-347000 host is not running: state=Stopped
	I0503 15:28:40.644591   12124 out.go:177]   To start a cluster, run: "minikube start -p embed-certs-347000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p embed-certs-347000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-347000 -n embed-certs-347000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-347000 -n embed-certs-347000: exit status 7 (30.864042ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-347000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-347000 -n embed-certs-347000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-347000 -n embed-certs-347000: exit status 7 (30.863625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-347000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/Pause (0.10s)

TestStartStop/group/newest-cni/serial/FirstStart (9.9s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p newest-cni-367000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.30.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p newest-cni-367000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.30.0: exit status 80 (9.832578584s)

-- stdout --
	* [newest-cni-367000] minikube v1.33.0 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18793
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18793-7269/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18793-7269/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "newest-cni-367000" primary control-plane node in "newest-cni-367000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "newest-cni-367000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0503 15:28:41.094294   12147 out.go:291] Setting OutFile to fd 1 ...
	I0503 15:28:41.094411   12147 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0503 15:28:41.094416   12147 out.go:304] Setting ErrFile to fd 2...
	I0503 15:28:41.094418   12147 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0503 15:28:41.094565   12147 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18793-7269/.minikube/bin
	I0503 15:28:41.095634   12147 out.go:298] Setting JSON to false
	I0503 15:28:41.111782   12147 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":5292,"bootTime":1714770029,"procs":509,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0503 15:28:41.111846   12147 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0503 15:28:41.117167   12147 out.go:177] * [newest-cni-367000] minikube v1.33.0 on Darwin 14.4.1 (arm64)
	I0503 15:28:41.122200   12147 out.go:177]   - MINIKUBE_LOCATION=18793
	I0503 15:28:41.125177   12147 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18793-7269/kubeconfig
	I0503 15:28:41.122255   12147 notify.go:220] Checking for updates...
	I0503 15:28:41.131080   12147 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0503 15:28:41.134121   12147 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0503 15:28:41.135540   12147 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18793-7269/.minikube
	I0503 15:28:41.139096   12147 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0503 15:28:41.142495   12147 config.go:182] Loaded profile config "default-k8s-diff-port-349000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0503 15:28:41.142557   12147 config.go:182] Loaded profile config "multinode-952000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0503 15:28:41.142606   12147 driver.go:392] Setting default libvirt URI to qemu:///system
	I0503 15:28:41.146945   12147 out.go:177] * Using the qemu2 driver based on user configuration
	I0503 15:28:41.154119   12147 start.go:297] selected driver: qemu2
	I0503 15:28:41.154125   12147 start.go:901] validating driver "qemu2" against <nil>
	I0503 15:28:41.154130   12147 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0503 15:28:41.156388   12147 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	W0503 15:28:41.156416   12147 out.go:239] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I0503 15:28:41.164090   12147 out.go:177] * Automatically selected the socket_vmnet network
	I0503 15:28:41.167191   12147 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0503 15:28:41.167230   12147 cni.go:84] Creating CNI manager for ""
	I0503 15:28:41.167244   12147 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0503 15:28:41.167248   12147 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0503 15:28:41.167286   12147 start.go:340] cluster config:
	{Name:newest-cni-367000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:newest-cni-367000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0503 15:28:41.171825   12147 iso.go:125] acquiring lock: {Name:mk0ad6b95f1bb51c6112b561369aaae11c537b54 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0503 15:28:41.179110   12147 out.go:177] * Starting "newest-cni-367000" primary control-plane node in "newest-cni-367000" cluster
	I0503 15:28:41.183156   12147 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0503 15:28:41.183172   12147 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18793-7269/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-arm64.tar.lz4
	I0503 15:28:41.183178   12147 cache.go:56] Caching tarball of preloaded images
	I0503 15:28:41.183237   12147 preload.go:173] Found /Users/jenkins/minikube-integration/18793-7269/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0503 15:28:41.183241   12147 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
	I0503 15:28:41.183317   12147 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18793-7269/.minikube/profiles/newest-cni-367000/config.json ...
	I0503 15:28:41.183328   12147 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18793-7269/.minikube/profiles/newest-cni-367000/config.json: {Name:mk8dee811ee10b3c7d73ec7d4d4a3783ad28e96c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0503 15:28:41.183774   12147 start.go:360] acquireMachinesLock for newest-cni-367000: {Name:mk75a6a65f97ac1ce21c567594b284ec6b0a9ff6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0503 15:28:41.183806   12147 start.go:364] duration metric: took 26.958µs to acquireMachinesLock for "newest-cni-367000"
	I0503 15:28:41.183818   12147 start.go:93] Provisioning new machine with config: &{Name:newest-cni-367000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:newest-cni-367000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0503 15:28:41.183846   12147 start.go:125] createHost starting for "" (driver="qemu2")
	I0503 15:28:41.193104   12147 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0503 15:28:41.210217   12147 start.go:159] libmachine.API.Create for "newest-cni-367000" (driver="qemu2")
	I0503 15:28:41.210246   12147 client.go:168] LocalClient.Create starting
	I0503 15:28:41.210303   12147 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18793-7269/.minikube/certs/ca.pem
	I0503 15:28:41.210333   12147 main.go:141] libmachine: Decoding PEM data...
	I0503 15:28:41.210342   12147 main.go:141] libmachine: Parsing certificate...
	I0503 15:28:41.210378   12147 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18793-7269/.minikube/certs/cert.pem
	I0503 15:28:41.210400   12147 main.go:141] libmachine: Decoding PEM data...
	I0503 15:28:41.210406   12147 main.go:141] libmachine: Parsing certificate...
	I0503 15:28:41.210862   12147 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18793-7269/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18793-7269/.minikube/cache/iso/arm64/minikube-v1.33.0-1714498396-18779-arm64.iso...
	I0503 15:28:41.354796   12147 main.go:141] libmachine: Creating SSH key...
	I0503 15:28:41.440656   12147 main.go:141] libmachine: Creating Disk image...
	I0503 15:28:41.440661   12147 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0503 15:28:41.440832   12147 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/newest-cni-367000/disk.qcow2.raw /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/newest-cni-367000/disk.qcow2
	I0503 15:28:41.453741   12147 main.go:141] libmachine: STDOUT: 
	I0503 15:28:41.453762   12147 main.go:141] libmachine: STDERR: 
	I0503 15:28:41.453810   12147 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/newest-cni-367000/disk.qcow2 +20000M
	I0503 15:28:41.464739   12147 main.go:141] libmachine: STDOUT: Image resized.
	
	I0503 15:28:41.464761   12147 main.go:141] libmachine: STDERR: 
	I0503 15:28:41.464772   12147 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/newest-cni-367000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/newest-cni-367000/disk.qcow2
	I0503 15:28:41.464777   12147 main.go:141] libmachine: Starting QEMU VM...
	I0503 15:28:41.464804   12147 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/newest-cni-367000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18793-7269/.minikube/machines/newest-cni-367000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/newest-cni-367000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ba:51:9d:ea:6c:7e -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/newest-cni-367000/disk.qcow2
	I0503 15:28:41.466515   12147 main.go:141] libmachine: STDOUT: 
	I0503 15:28:41.466531   12147 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0503 15:28:41.466550   12147 client.go:171] duration metric: took 256.306417ms to LocalClient.Create
	I0503 15:28:43.468672   12147 start.go:128] duration metric: took 2.284868833s to createHost
	I0503 15:28:43.468793   12147 start.go:83] releasing machines lock for "newest-cni-367000", held for 2.284995917s
	W0503 15:28:43.468850   12147 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0503 15:28:43.475713   12147 out.go:177] * Deleting "newest-cni-367000" in qemu2 ...
	W0503 15:28:43.501604   12147 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0503 15:28:43.501645   12147 start.go:728] Will try again in 5 seconds ...
	I0503 15:28:48.503774   12147 start.go:360] acquireMachinesLock for newest-cni-367000: {Name:mk75a6a65f97ac1ce21c567594b284ec6b0a9ff6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0503 15:28:48.504390   12147 start.go:364] duration metric: took 469.667µs to acquireMachinesLock for "newest-cni-367000"
	I0503 15:28:48.504548   12147 start.go:93] Provisioning new machine with config: &{Name:newest-cni-367000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:newest-cni-367000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0503 15:28:48.504829   12147 start.go:125] createHost starting for "" (driver="qemu2")
	I0503 15:28:48.513495   12147 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0503 15:28:48.561619   12147 start.go:159] libmachine.API.Create for "newest-cni-367000" (driver="qemu2")
	I0503 15:28:48.561677   12147 client.go:168] LocalClient.Create starting
	I0503 15:28:48.561817   12147 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18793-7269/.minikube/certs/ca.pem
	I0503 15:28:48.561879   12147 main.go:141] libmachine: Decoding PEM data...
	I0503 15:28:48.561896   12147 main.go:141] libmachine: Parsing certificate...
	I0503 15:28:48.561956   12147 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18793-7269/.minikube/certs/cert.pem
	I0503 15:28:48.562001   12147 main.go:141] libmachine: Decoding PEM data...
	I0503 15:28:48.562025   12147 main.go:141] libmachine: Parsing certificate...
	I0503 15:28:48.562792   12147 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18793-7269/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18793-7269/.minikube/cache/iso/arm64/minikube-v1.33.0-1714498396-18779-arm64.iso...
	I0503 15:28:48.717507   12147 main.go:141] libmachine: Creating SSH key...
	I0503 15:28:48.821996   12147 main.go:141] libmachine: Creating Disk image...
	I0503 15:28:48.822006   12147 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0503 15:28:48.822170   12147 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/newest-cni-367000/disk.qcow2.raw /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/newest-cni-367000/disk.qcow2
	I0503 15:28:48.834919   12147 main.go:141] libmachine: STDOUT: 
	I0503 15:28:48.834940   12147 main.go:141] libmachine: STDERR: 
	I0503 15:28:48.834987   12147 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/newest-cni-367000/disk.qcow2 +20000M
	I0503 15:28:48.845882   12147 main.go:141] libmachine: STDOUT: Image resized.
	
	I0503 15:28:48.845900   12147 main.go:141] libmachine: STDERR: 
	I0503 15:28:48.845912   12147 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/newest-cni-367000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/newest-cni-367000/disk.qcow2
	I0503 15:28:48.845917   12147 main.go:141] libmachine: Starting QEMU VM...
	I0503 15:28:48.845963   12147 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/newest-cni-367000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18793-7269/.minikube/machines/newest-cni-367000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/newest-cni-367000/qemu.pid -device virtio-net-pci,netdev=net0,mac=9e:8a:4d:26:9c:6c -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/newest-cni-367000/disk.qcow2
	I0503 15:28:48.847741   12147 main.go:141] libmachine: STDOUT: 
	I0503 15:28:48.847757   12147 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0503 15:28:48.847771   12147 client.go:171] duration metric: took 286.095791ms to LocalClient.Create
	I0503 15:28:50.849906   12147 start.go:128] duration metric: took 2.3451135s to createHost
	I0503 15:28:50.849996   12147 start.go:83] releasing machines lock for "newest-cni-367000", held for 2.345646583s
	W0503 15:28:50.850451   12147 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p newest-cni-367000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p newest-cni-367000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0503 15:28:50.864004   12147 out.go:177] 
	W0503 15:28:50.867240   12147 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0503 15:28:50.867285   12147 out.go:239] * 
	* 
	W0503 15:28:50.870048   12147 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0503 15:28:50.881910   12147 out.go:177] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p newest-cni-367000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.30.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-367000 -n newest-cni-367000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-367000 -n newest-cni-367000: exit status 7 (69.936791ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-367000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/FirstStart (9.90s)
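
Note: every qemu2 start in this run dies at the same step. socket_vmnet_client cannot connect
to the Unix socket at /var/run/socket_vmnet ("Connection refused"), i.e. no socket_vmnet
daemon was accepting connections on the CI host, and every later "Stopped" / exit status 7
post-mortem is downstream of that one fault. A minimal standalone Go sketch (not part of the
test suite) that reproduces just the failing dial:

	package main

	import (
		"fmt"
		"net"
		"os"
		"time"
	)

	func main() {
		// Socket path taken from the failing logs above. "Connection refused"
		// on a Unix socket means the file exists but nothing is listening on it.
		const sock = "/var/run/socket_vmnet"
		conn, err := net.DialTimeout("unix", sock, 2*time.Second)
		if err != nil {
			fmt.Fprintf(os.Stderr, "dial %s: %v\n", sock, err)
			os.Exit(1)
		}
		conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}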

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (0.03s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "default-k8s-diff-port-349000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-349000 -n default-k8s-diff-port-349000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-349000 -n default-k8s-diff-port-349000: exit status 7 (34.342375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-349000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (0.03s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (0.06s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "default-k8s-diff-port-349000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-349000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-349000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (26.781583ms)

** stderr ** 
	error: context "default-k8s-diff-port-349000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context default-k8s-diff-port-349000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-349000 -n default-k8s-diff-port-349000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-349000 -n default-k8s-diff-port-349000: exit status 7 (31.16075ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-349000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (0.06s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.07s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p default-k8s-diff-port-349000 image list --format=json
start_stop_delete_test.go:304: v1.30.0 images missing (-want +got):
  []string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.1",
- 	"registry.k8s.io/etcd:3.5.12-0",
- 	"registry.k8s.io/kube-apiserver:v1.30.0",
- 	"registry.k8s.io/kube-controller-manager:v1.30.0",
- 	"registry.k8s.io/kube-proxy:v1.30.0",
- 	"registry.k8s.io/kube-scheduler:v1.30.0",
- 	"registry.k8s.io/pause:3.9",
  }
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-349000 -n default-k8s-diff-port-349000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-349000 -n default-k8s-diff-port-349000: exit status 7 (30.827458ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-349000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.07s)
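
Note: the "-want +got" diff above is go-cmp style output (an assumption from the formatting;
this is not the actual test code). Because the VM never started, "image list --format=json"
returns nothing, so every expected v1.30.0 image lands on the -want side. A minimal sketch of
the same kind of comparison:

	package main

	import (
		"fmt"

		"github.com/google/go-cmp/cmp"
	)

	func main() {
		want := []string{
			"registry.k8s.io/kube-apiserver:v1.30.0",
			"registry.k8s.io/pause:3.9",
		}
		var got []string // what a stopped cluster yields
		if diff := cmp.Diff(want, got); diff != "" {
			fmt.Printf("images missing (-want +got):\n%s", diff)
		}
	}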

TestStartStop/group/default-k8s-diff-port/serial/Pause (0.11s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p default-k8s-diff-port-349000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p default-k8s-diff-port-349000 --alsologtostderr -v=1: exit status 83 (44.807541ms)

-- stdout --
	* The control-plane node default-k8s-diff-port-349000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p default-k8s-diff-port-349000"

-- /stdout --
** stderr ** 
	I0503 15:28:44.215444   12178 out.go:291] Setting OutFile to fd 1 ...
	I0503 15:28:44.215600   12178 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0503 15:28:44.215604   12178 out.go:304] Setting ErrFile to fd 2...
	I0503 15:28:44.215606   12178 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0503 15:28:44.215721   12178 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18793-7269/.minikube/bin
	I0503 15:28:44.215931   12178 out.go:298] Setting JSON to false
	I0503 15:28:44.215939   12178 mustload.go:65] Loading cluster: default-k8s-diff-port-349000
	I0503 15:28:44.216132   12178 config.go:182] Loaded profile config "default-k8s-diff-port-349000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0503 15:28:44.220603   12178 out.go:177] * The control-plane node default-k8s-diff-port-349000 host is not running: state=Stopped
	I0503 15:28:44.224576   12178 out.go:177]   To start a cluster, run: "minikube start -p default-k8s-diff-port-349000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p default-k8s-diff-port-349000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-349000 -n default-k8s-diff-port-349000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-349000 -n default-k8s-diff-port-349000: exit status 7 (30.90925ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-349000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-349000 -n default-k8s-diff-port-349000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-349000 -n default-k8s-diff-port-349000: exit status 7 (31.213459ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-349000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Pause (0.11s)

TestStartStop/group/newest-cni/serial/SecondStart (5.26s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p newest-cni-367000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.30.0
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p newest-cni-367000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.30.0: exit status 80 (5.185066458s)

-- stdout --
	* [newest-cni-367000] minikube v1.33.0 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18793
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18793-7269/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18793-7269/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "newest-cni-367000" primary control-plane node in "newest-cni-367000" cluster
	* Restarting existing qemu2 VM for "newest-cni-367000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "newest-cni-367000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0503 15:28:54.332183   12244 out.go:291] Setting OutFile to fd 1 ...
	I0503 15:28:54.332319   12244 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0503 15:28:54.332322   12244 out.go:304] Setting ErrFile to fd 2...
	I0503 15:28:54.332324   12244 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0503 15:28:54.332461   12244 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18793-7269/.minikube/bin
	I0503 15:28:54.333453   12244 out.go:298] Setting JSON to false
	I0503 15:28:54.349442   12244 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":5305,"bootTime":1714770029,"procs":503,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0503 15:28:54.349519   12244 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0503 15:28:54.352932   12244 out.go:177] * [newest-cni-367000] minikube v1.33.0 on Darwin 14.4.1 (arm64)
	I0503 15:28:54.361002   12244 out.go:177]   - MINIKUBE_LOCATION=18793
	I0503 15:28:54.361061   12244 notify.go:220] Checking for updates...
	I0503 15:28:54.366013   12244 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18793-7269/kubeconfig
	I0503 15:28:54.369945   12244 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0503 15:28:54.373027   12244 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0503 15:28:54.375963   12244 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18793-7269/.minikube
	I0503 15:28:54.379002   12244 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0503 15:28:54.382336   12244 config.go:182] Loaded profile config "newest-cni-367000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0503 15:28:54.382580   12244 driver.go:392] Setting default libvirt URI to qemu:///system
	I0503 15:28:54.386946   12244 out.go:177] * Using the qemu2 driver based on existing profile
	I0503 15:28:54.396009   12244 start.go:297] selected driver: qemu2
	I0503 15:28:54.396018   12244 start.go:901] validating driver "qemu2" against &{Name:newest-cni-367000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kub
ernetesVersion:v1.30.0 ClusterName:newest-cni-367000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] Lis
tenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0503 15:28:54.396062   12244 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0503 15:28:54.398297   12244 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0503 15:28:54.398339   12244 cni.go:84] Creating CNI manager for ""
	I0503 15:28:54.398347   12244 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0503 15:28:54.398394   12244 start.go:340] cluster config:
	{Name:newest-cni-367000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:newest-cni-367000 Namespace:default APIServe
rHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0
CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0503 15:28:54.402749   12244 iso.go:125] acquiring lock: {Name:mk0ad6b95f1bb51c6112b561369aaae11c537b54 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0503 15:28:54.410988   12244 out.go:177] * Starting "newest-cni-367000" primary control-plane node in "newest-cni-367000" cluster
	I0503 15:28:54.414919   12244 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0503 15:28:54.414931   12244 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18793-7269/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-arm64.tar.lz4
	I0503 15:28:54.414939   12244 cache.go:56] Caching tarball of preloaded images
	I0503 15:28:54.414990   12244 preload.go:173] Found /Users/jenkins/minikube-integration/18793-7269/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0503 15:28:54.414995   12244 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
	I0503 15:28:54.415050   12244 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18793-7269/.minikube/profiles/newest-cni-367000/config.json ...
	I0503 15:28:54.415560   12244 start.go:360] acquireMachinesLock for newest-cni-367000: {Name:mk75a6a65f97ac1ce21c567594b284ec6b0a9ff6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0503 15:28:54.415587   12244 start.go:364] duration metric: took 20.542µs to acquireMachinesLock for "newest-cni-367000"
	I0503 15:28:54.415595   12244 start.go:96] Skipping create...Using existing machine configuration
	I0503 15:28:54.415602   12244 fix.go:54] fixHost starting: 
	I0503 15:28:54.415707   12244 fix.go:112] recreateIfNeeded on newest-cni-367000: state=Stopped err=<nil>
	W0503 15:28:54.415716   12244 fix.go:138] unexpected machine state, will restart: <nil>
	I0503 15:28:54.420046   12244 out.go:177] * Restarting existing qemu2 VM for "newest-cni-367000" ...
	I0503 15:28:54.427016   12244 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/newest-cni-367000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18793-7269/.minikube/machines/newest-cni-367000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/newest-cni-367000/qemu.pid -device virtio-net-pci,netdev=net0,mac=9e:8a:4d:26:9c:6c -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/newest-cni-367000/disk.qcow2
	I0503 15:28:54.429099   12244 main.go:141] libmachine: STDOUT: 
	I0503 15:28:54.429116   12244 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0503 15:28:54.429144   12244 fix.go:56] duration metric: took 13.542208ms for fixHost
	I0503 15:28:54.429150   12244 start.go:83] releasing machines lock for "newest-cni-367000", held for 13.560375ms
	W0503 15:28:54.429156   12244 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0503 15:28:54.429183   12244 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0503 15:28:54.429187   12244 start.go:728] Will try again in 5 seconds ...
	I0503 15:28:59.430342   12244 start.go:360] acquireMachinesLock for newest-cni-367000: {Name:mk75a6a65f97ac1ce21c567594b284ec6b0a9ff6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0503 15:28:59.430767   12244 start.go:364] duration metric: took 316.792µs to acquireMachinesLock for "newest-cni-367000"
	I0503 15:28:59.430934   12244 start.go:96] Skipping create...Using existing machine configuration
	I0503 15:28:59.430956   12244 fix.go:54] fixHost starting: 
	I0503 15:28:59.431679   12244 fix.go:112] recreateIfNeeded on newest-cni-367000: state=Stopped err=<nil>
	W0503 15:28:59.431706   12244 fix.go:138] unexpected machine state, will restart: <nil>
	I0503 15:28:59.436239   12244 out.go:177] * Restarting existing qemu2 VM for "newest-cni-367000" ...
	I0503 15:28:59.440446   12244 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/newest-cni-367000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18793-7269/.minikube/machines/newest-cni-367000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/newest-cni-367000/qemu.pid -device virtio-net-pci,netdev=net0,mac=9e:8a:4d:26:9c:6c -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18793-7269/.minikube/machines/newest-cni-367000/disk.qcow2
	I0503 15:28:59.450339   12244 main.go:141] libmachine: STDOUT: 
	I0503 15:28:59.450404   12244 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0503 15:28:59.450516   12244 fix.go:56] duration metric: took 19.565292ms for fixHost
	I0503 15:28:59.450535   12244 start.go:83] releasing machines lock for "newest-cni-367000", held for 19.746167ms
	W0503 15:28:59.450662   12244 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p newest-cni-367000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p newest-cni-367000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0503 15:28:59.458202   12244 out.go:177] 
	W0503 15:28:59.462160   12244 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0503 15:28:59.462177   12244 out.go:239] * 
	* 
	W0503 15:28:59.463978   12244 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0503 15:28:59.473128   12244 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p newest-cni-367000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.30.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-367000 -n newest-cni-367000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-367000 -n newest-cni-367000: exit status 7 (70.948875ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-367000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/SecondStart (5.26s)
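
Note: the stderr above also shows the start-up retry shape: the first StartHost failure is only
a warning (start.go:713), the command waits five seconds (start.go:728), retries once, and only
then exits with GUEST_PROVISION (exit status 80). A rough sketch of that flow, with illustrative
names rather than minikube's actual implementation:

	package main

	import (
		"errors"
		"fmt"
		"os"
		"time"
	)

	// startHost stands in for the qemu2 driver start; in this run it always
	// fails because /var/run/socket_vmnet refuses connections.
	func startHost() error {
		return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
	}

	func main() {
		if err := startHost(); err != nil {
			fmt.Println("! StartHost failed, but will try again:", err)
			time.Sleep(5 * time.Second)
			if err = startHost(); err != nil {
				fmt.Println("X Exiting due to GUEST_PROVISION:", err)
				os.Exit(80) // the exit status observed above
			}
		}
	}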

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.08s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p newest-cni-367000 image list --format=json
start_stop_delete_test.go:304: v1.30.0 images missing (-want +got):
  []string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.1",
- 	"registry.k8s.io/etcd:3.5.12-0",
- 	"registry.k8s.io/kube-apiserver:v1.30.0",
- 	"registry.k8s.io/kube-controller-manager:v1.30.0",
- 	"registry.k8s.io/kube-proxy:v1.30.0",
- 	"registry.k8s.io/kube-scheduler:v1.30.0",
- 	"registry.k8s.io/pause:3.9",
  }
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-367000 -n newest-cni-367000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-367000 -n newest-cni-367000: exit status 7 (32.700125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-367000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.08s)

TestStartStop/group/newest-cni/serial/Pause (0.11s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p newest-cni-367000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p newest-cni-367000 --alsologtostderr -v=1: exit status 83 (45.786875ms)

-- stdout --
	* The control-plane node newest-cni-367000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p newest-cni-367000"

-- /stdout --
** stderr ** 
	I0503 15:28:59.668265   12261 out.go:291] Setting OutFile to fd 1 ...
	I0503 15:28:59.668423   12261 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0503 15:28:59.668426   12261 out.go:304] Setting ErrFile to fd 2...
	I0503 15:28:59.668428   12261 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0503 15:28:59.668546   12261 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18793-7269/.minikube/bin
	I0503 15:28:59.668761   12261 out.go:298] Setting JSON to false
	I0503 15:28:59.668769   12261 mustload.go:65] Loading cluster: newest-cni-367000
	I0503 15:28:59.668981   12261 config.go:182] Loaded profile config "newest-cni-367000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0503 15:28:59.673558   12261 out.go:177] * The control-plane node newest-cni-367000 host is not running: state=Stopped
	I0503 15:28:59.677414   12261 out.go:177]   To start a cluster, run: "minikube start -p newest-cni-367000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p newest-cni-367000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-367000 -n newest-cni-367000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-367000 -n newest-cni-367000: exit status 7 (32.60425ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-367000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-367000 -n newest-cni-367000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-367000 -n newest-cni-367000: exit status 7 (32.533708ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-367000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/Pause (0.11s)


Test pass (80/258)

Order passed test Duration
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.1
9 TestDownloadOnly/v1.20.0/DeleteAll 0.24
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.22
12 TestDownloadOnly/v1.30.0/json-events 6.43
13 TestDownloadOnly/v1.30.0/preload-exists 0
16 TestDownloadOnly/v1.30.0/kubectl 0
17 TestDownloadOnly/v1.30.0/LogsDuration 0.08
18 TestDownloadOnly/v1.30.0/DeleteAll 0.23
19 TestDownloadOnly/v1.30.0/DeleteAlwaysSucceeds 0.22
21 TestBinaryMirror 0.34
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.06
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.06
35 TestHyperKitDriverInstallOrUpdate 10.43
39 TestErrorSpam/start 0.39
40 TestErrorSpam/status 0.1
41 TestErrorSpam/pause 0.13
42 TestErrorSpam/unpause 0.13
43 TestErrorSpam/stop 10.85
46 TestFunctional/serial/CopySyncFile 0
48 TestFunctional/serial/AuditLog 0
54 TestFunctional/serial/CacheCmd/cache/add_remote 3.16
55 TestFunctional/serial/CacheCmd/cache/add_local 1.21
56 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.04
57 TestFunctional/serial/CacheCmd/cache/list 0.04
60 TestFunctional/serial/CacheCmd/cache/delete 0.07
69 TestFunctional/parallel/ConfigCmd 0.23
71 TestFunctional/parallel/DryRun 0.22
72 TestFunctional/parallel/InternationalLanguage 0.11
78 TestFunctional/parallel/AddonsCmd 0.12
93 TestFunctional/parallel/License 0.2
96 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
106 TestFunctional/parallel/ProfileCmd/profile_not_create 0.14
107 TestFunctional/parallel/ProfileCmd/profile_list 0.11
108 TestFunctional/parallel/ProfileCmd/profile_json_output 0.11
112 TestFunctional/parallel/Version/short 0.04
119 TestFunctional/parallel/ImageCommands/Setup 2.02
124 TestFunctional/parallel/ImageCommands/ImageRemove 0.08
126 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.13
132 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 10.04
134 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.16
135 TestFunctional/delete_addon-resizer_images 0.17
136 TestFunctional/delete_my-image_image 0.04
137 TestFunctional/delete_minikube_cached_images 0.04
166 TestJSONOutput/start/Audit 0
168 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
169 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
172 TestJSONOutput/pause/Audit 0
174 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
175 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
178 TestJSONOutput/unpause/Audit 0
180 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
181 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
183 TestJSONOutput/stop/Command 3.43
184 TestJSONOutput/stop/Audit 0
186 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
187 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
188 TestErrorJSONOutput 0.33
193 TestMainNoArgs 0.03
240 TestStoppedBinaryUpgrade/Setup 0.99
252 TestNoKubernetes/serial/StartNoK8sWithVersion 0.1
256 TestNoKubernetes/serial/VerifyK8sNotRunning 0.04
257 TestNoKubernetes/serial/ProfileList 31.47
258 TestNoKubernetes/serial/Stop 3.38
260 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.05
272 TestStoppedBinaryUpgrade/MinikubeLogs 0.68
275 TestStartStop/group/old-k8s-version/serial/Stop 3.54
276 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.13
286 TestStartStop/group/no-preload/serial/Stop 3.35
287 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.12
299 TestStartStop/group/embed-certs/serial/Stop 1.77
300 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.13
304 TestStartStop/group/default-k8s-diff-port/serial/Stop 3.02
305 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.13
317 TestStartStop/group/newest-cni/serial/DeployApp 0
318 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.06
319 TestStartStop/group/newest-cni/serial/Stop 3.14
320 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.13
322 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
323 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
TestDownloadOnly/v1.20.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.20.0/preload-exists
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

TestDownloadOnly/v1.20.0/LogsDuration (0.1s)

=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 logs -p download-only-988000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 logs -p download-only-988000: exit status 85 (104.45825ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-988000 | jenkins | v1.33.0 | 03 May 24 15:02 PDT |          |
	|         | -p download-only-988000        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=qemu2                 |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/05/03 15:02:45
	Running on machine: MacOS-M1-Agent-2
	Binary: Built with gc go1.22.1 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0503 15:02:45.891216    7770 out.go:291] Setting OutFile to fd 1 ...
	I0503 15:02:45.891365    7770 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0503 15:02:45.891369    7770 out.go:304] Setting ErrFile to fd 2...
	I0503 15:02:45.891371    7770 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0503 15:02:45.891501    7770 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18793-7269/.minikube/bin
	W0503 15:02:45.891572    7770 root.go:314] Error reading config file at /Users/jenkins/minikube-integration/18793-7269/.minikube/config/config.json: open /Users/jenkins/minikube-integration/18793-7269/.minikube/config/config.json: no such file or directory
	I0503 15:02:45.892883    7770 out.go:298] Setting JSON to true
	I0503 15:02:45.909819    7770 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3736,"bootTime":1714770029,"procs":503,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0503 15:02:45.909881    7770 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0503 15:02:45.915674    7770 out.go:97] [download-only-988000] minikube v1.33.0 on Darwin 14.4.1 (arm64)
	I0503 15:02:45.919513    7770 out.go:169] MINIKUBE_LOCATION=18793
	I0503 15:02:45.915802    7770 notify.go:220] Checking for updates...
	W0503 15:02:45.915840    7770 preload.go:294] Failed to list preload files: open /Users/jenkins/minikube-integration/18793-7269/.minikube/cache/preloaded-tarball: no such file or directory
	I0503 15:02:45.926858    7770 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/18793-7269/kubeconfig
	I0503 15:02:45.929614    7770 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0503 15:02:45.932613    7770 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0503 15:02:45.935558    7770 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/18793-7269/.minikube
	W0503 15:02:45.941549    7770 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0503 15:02:45.941751    7770 driver.go:392] Setting default libvirt URI to qemu:///system
	I0503 15:02:45.945510    7770 out.go:97] Using the qemu2 driver based on user configuration
	I0503 15:02:45.945529    7770 start.go:297] selected driver: qemu2
	I0503 15:02:45.945544    7770 start.go:901] validating driver "qemu2" against <nil>
	I0503 15:02:45.945631    7770 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0503 15:02:45.948542    7770 out.go:169] Automatically selected the socket_vmnet network
	I0503 15:02:45.952047    7770 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0503 15:02:45.952142    7770 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0503 15:02:45.952219    7770 cni.go:84] Creating CNI manager for ""
	I0503 15:02:45.952242    7770 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0503 15:02:45.952301    7770 start.go:340] cluster config:
	{Name:download-only-988000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-988000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Con
tainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSo
ck: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0503 15:02:45.957186    7770 iso.go:125] acquiring lock: {Name:mk0ad6b95f1bb51c6112b561369aaae11c537b54 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0503 15:02:45.961533    7770 out.go:97] Downloading VM boot image ...
	I0503 15:02:45.961568    7770 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-arm64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-arm64.iso.sha256 -> /Users/jenkins/minikube-integration/18793-7269/.minikube/cache/iso/arm64/minikube-v1.33.0-1714498396-18779-arm64.iso
	I0503 15:02:50.349608    7770 out.go:97] Starting "download-only-988000" primary control-plane node in "download-only-988000" cluster
	I0503 15:02:50.349626    7770 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0503 15:02:50.405734    7770 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0503 15:02:50.405742    7770 cache.go:56] Caching tarball of preloaded images
	I0503 15:02:50.406028    7770 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0503 15:02:50.410637    7770 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0503 15:02:50.410644    7770 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0503 15:02:50.493861    7770 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4?checksum=md5:1a3e8f9b29e6affec63d76d0d3000942 -> /Users/jenkins/minikube-integration/18793-7269/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0503 15:02:55.763961    7770 preload.go:248] saving checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0503 15:02:55.764096    7770 preload.go:255] verifying checksum of /Users/jenkins/minikube-integration/18793-7269/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0503 15:02:56.460696    7770 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0503 15:02:56.460882    7770 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18793-7269/.minikube/profiles/download-only-988000/config.json ...
	I0503 15:02:56.460898    7770 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18793-7269/.minikube/profiles/download-only-988000/config.json: {Name:mk775e79f8473633e2d533f46469ccfa5d2255cf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0503 15:02:56.461592    7770 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0503 15:02:56.461869    7770 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 -> /Users/jenkins/minikube-integration/18793-7269/.minikube/cache/darwin/arm64/v1.20.0/kubectl
	I0503 15:02:56.925369    7770 out.go:169] 
	W0503 15:02:56.931451    7770 out_reason.go:110] Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256: getter: &{Ctx:context.Background Src:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/18793-7269/.minikube/cache/darwin/arm64/v1.20.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x106e4ce00 0x106e4ce00 0x106e4ce00 0x106e4ce00 0x106e4ce00 0x106e4ce00 0x106e4ce00] Decompressors:map[bz2:0x14000765020 gz:0x14000765028 tar:0x14000764fd0 tar.bz2:0x14000764fe0 tar.gz:0x14000764ff0 tar.xz:0x14000765000 tar.zst:0x14000765010 tbz2:0x14000764fe0 tgz:0x14000764ff0 txz:0x14000765000 tzst:0x14000765010 xz:0x14000765030 zip:0x14000765040 zst:0x14000765038] Getters:map[file:0x140021a4560 http:0x1400070a190 https:0x1400070a1e0] Dir:false ProgressList
ener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404
	W0503 15:02:56.931480    7770 out_reason.go:110] 
	W0503 15:02:56.943377    7770 out.go:229] ╭───────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                           │
	│    If the above advice does not help, please let us know:                                 │
	│    https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                           │
	│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────╯
	I0503 15:02:56.947276    7770 out.go:169] 
	
	
	* The control-plane node download-only-988000 host does not exist
	  To start a cluster, run: "minikube start -p download-only-988000"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.10s)
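
Note: the 404 in the log above comes from the kubectl cache step: the checksum fetch for the
v1.20.0 darwin/arm64 kubectl returns "bad response code: 404", indicating that artifact is not
published upstream for this version/architecture, which is why the download-only run cannot
cache kubectl on an Apple Silicon host. A quick standalone check of the URL (a sketch, not
minikube code):

	package main

	import (
		"fmt"
		"net/http"
	)

	func main() {
		// Checksum URL from the getter error above; 404 means not published.
		url := "https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256"
		resp, err := http.Head(url)
		if err != nil {
			fmt.Println("request failed:", err)
			return
		}
		resp.Body.Close()
		fmt.Println(url, "->", resp.Status)
	}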

TestDownloadOnly/v1.20.0/DeleteAll (0.24s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-arm64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.24s)

TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.22s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-arm64 delete -p download-only-988000
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.22s)

TestDownloadOnly/v1.30.0/json-events (6.43s)

=== RUN   TestDownloadOnly/v1.30.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -o=json --download-only -p download-only-819000 --force --alsologtostderr --kubernetes-version=v1.30.0 --container-runtime=docker --driver=qemu2 
aaa_download_only_test.go:81: (dbg) Done: out/minikube-darwin-arm64 start -o=json --download-only -p download-only-819000 --force --alsologtostderr --kubernetes-version=v1.30.0 --container-runtime=docker --driver=qemu2 : (6.430396375s)
--- PASS: TestDownloadOnly/v1.30.0/json-events (6.43s)

TestDownloadOnly/v1.30.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.30.0/preload-exists
--- PASS: TestDownloadOnly/v1.30.0/preload-exists (0.00s)

TestDownloadOnly/v1.30.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.30.0/kubectl
--- PASS: TestDownloadOnly/v1.30.0/kubectl (0.00s)

TestDownloadOnly/v1.30.0/LogsDuration (0.08s)

=== RUN   TestDownloadOnly/v1.30.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 logs -p download-only-819000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 logs -p download-only-819000: exit status 85 (81.087667ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-988000 | jenkins | v1.33.0 | 03 May 24 15:02 PDT |                     |
	|         | -p download-only-988000        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=qemu2                 |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.33.0 | 03 May 24 15:02 PDT | 03 May 24 15:02 PDT |
	| delete  | -p download-only-988000        | download-only-988000 | jenkins | v1.33.0 | 03 May 24 15:02 PDT | 03 May 24 15:02 PDT |
	| start   | -o=json --download-only        | download-only-819000 | jenkins | v1.33.0 | 03 May 24 15:02 PDT |                     |
	|         | -p download-only-819000        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.0   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=qemu2                 |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/05/03 15:02:57
	Running on machine: MacOS-M1-Agent-2
	Binary: Built with gc go1.22.1 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0503 15:02:57.625578    7804 out.go:291] Setting OutFile to fd 1 ...
	I0503 15:02:57.625707    7804 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0503 15:02:57.625711    7804 out.go:304] Setting ErrFile to fd 2...
	I0503 15:02:57.625713    7804 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0503 15:02:57.625843    7804 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18793-7269/.minikube/bin
	I0503 15:02:57.627083    7804 out.go:298] Setting JSON to true
	I0503 15:02:57.645447    7804 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3748,"bootTime":1714770029,"procs":502,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0503 15:02:57.645504    7804 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0503 15:02:57.650773    7804 out.go:97] [download-only-819000] minikube v1.33.0 on Darwin 14.4.1 (arm64)
	I0503 15:02:57.650838    7804 notify.go:220] Checking for updates...
	I0503 15:02:57.654750    7804 out.go:169] MINIKUBE_LOCATION=18793
	I0503 15:02:57.657710    7804 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/18793-7269/kubeconfig
	I0503 15:02:57.661709    7804 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0503 15:02:57.664738    7804 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0503 15:02:57.667644    7804 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/18793-7269/.minikube
	W0503 15:02:57.673725    7804 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0503 15:02:57.673874    7804 driver.go:392] Setting default libvirt URI to qemu:///system
	I0503 15:02:57.676776    7804 out.go:97] Using the qemu2 driver based on user configuration
	I0503 15:02:57.676787    7804 start.go:297] selected driver: qemu2
	I0503 15:02:57.676791    7804 start.go:901] validating driver "qemu2" against <nil>
	I0503 15:02:57.676865    7804 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0503 15:02:57.679660    7804 out.go:169] Automatically selected the socket_vmnet network
	I0503 15:02:57.684774    7804 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0503 15:02:57.684916    7804 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0503 15:02:57.684949    7804 cni.go:84] Creating CNI manager for ""
	I0503 15:02:57.684959    7804 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0503 15:02:57.684964    7804 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0503 15:02:57.684998    7804 start.go:340] cluster config:
	{Name:download-only-819000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:download-only-819000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0503 15:02:57.689416    7804 iso.go:125] acquiring lock: {Name:mk0ad6b95f1bb51c6112b561369aaae11c537b54 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0503 15:02:57.692671    7804 out.go:97] Starting "download-only-819000" primary control-plane node in "download-only-819000" cluster
	I0503 15:02:57.692677    7804 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0503 15:02:57.747813    7804 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.0/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-arm64.tar.lz4
	I0503 15:02:57.747834    7804 cache.go:56] Caching tarball of preloaded images
	I0503 15:02:57.747988    7804 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0503 15:02:57.752168    7804 out.go:97] Downloading Kubernetes v1.30.0 preload ...
	I0503 15:02:57.752175    7804 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.30.0-docker-overlay2-arm64.tar.lz4 ...
	I0503 15:02:57.825222    7804 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.0/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-arm64.tar.lz4?checksum=md5:677034533668c42fec962cc52f9b3c42 -> /Users/jenkins/minikube-integration/18793-7269/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-arm64.tar.lz4
	I0503 15:03:02.021586    7804 preload.go:248] saving checksum for preloaded-images-k8s-v18-v1.30.0-docker-overlay2-arm64.tar.lz4 ...
	I0503 15:03:02.021786    7804 preload.go:255] verifying checksum of /Users/jenkins/minikube-integration/18793-7269/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-arm64.tar.lz4 ...
	I0503 15:03:02.564841    7804 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
	I0503 15:03:02.565045    7804 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18793-7269/.minikube/profiles/download-only-819000/config.json ...
	I0503 15:03:02.565062    7804 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18793-7269/.minikube/profiles/download-only-819000/config.json: {Name:mke598813d73cb4f5e2dd8aadfd7cab041047444 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0503 15:03:02.565312    7804 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0503 15:03:02.565429    7804 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.0/bin/darwin/arm64/kubectl.sha256 -> /Users/jenkins/minikube-integration/18793-7269/.minikube/cache/darwin/arm64/v1.30.0/kubectl
	
	
	* The control-plane node download-only-819000 host does not exist
	  To start a cluster, run: "minikube start -p download-only-819000"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.30.0/LogsDuration (0.08s)
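
Note: on a --download-only profile the control-plane host is never created, so `minikube logs` exits 85 with the hint shown above, and the test counts that as a pass. A minimal Go sketch of reading that exit code, assuming only the behavior visible in this log:

    package main

    import (
        "errors"
        "fmt"
        "os/exec"
    )

    func main() {
        cmd := exec.Command("out/minikube-darwin-arm64", "logs", "-p", "download-only-819000")
        out, err := cmd.CombinedOutput()
        var ee *exec.ExitError
        if errors.As(err, &ee) {
            // Exit 85 is what this report shows for a host that was never created.
            fmt.Printf("minikube logs exited %d\n%s", ee.ExitCode(), out)
            return
        }
        if err != nil {
            panic(err) // e.g. the binary was not found
        }
        fmt.Printf("%s", out)
    }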

TestDownloadOnly/v1.30.0/DeleteAll (0.23s)

=== RUN   TestDownloadOnly/v1.30.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-arm64 delete --all
--- PASS: TestDownloadOnly/v1.30.0/DeleteAll (0.23s)

TestDownloadOnly/v1.30.0/DeleteAlwaysSucceeds (0.22s)

=== RUN   TestDownloadOnly/v1.30.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-arm64 delete -p download-only-819000
--- PASS: TestDownloadOnly/v1.30.0/DeleteAlwaysSucceeds (0.22s)

TestBinaryMirror (0.34s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-darwin-arm64 start --download-only -p binary-mirror-919000 --alsologtostderr --binary-mirror http://127.0.0.1:50954 --driver=qemu2 
helpers_test.go:175: Cleaning up "binary-mirror-919000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p binary-mirror-919000
--- PASS: TestBinaryMirror (0.34s)
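
Note: TestBinaryMirror points --binary-mirror at a local HTTP endpoint so Kubernetes binary downloads are served locally instead of from dl.k8s.io. A minimal sketch of such a mirror, assuming a local ./mirror directory laid out like the upstream release tree (directory name and port are illustrative):

    package main

    import (
        "log"
        "net/http"
    )

    func main() {
        // Serve e.g. ./mirror/.../v1.30.0/bin/darwin/arm64/kubectl over
        // http://127.0.0.1:50954/ with the same relative paths.
        http.Handle("/", http.FileServer(http.Dir("./mirror")))
        log.Fatal(http.ListenAndServe("127.0.0.1:50954", nil))
    }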

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:928: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p addons-379000
addons_test.go:928: (dbg) Non-zero exit: out/minikube-darwin-arm64 addons enable dashboard -p addons-379000: exit status 85 (59.330708ms)

-- stdout --
	* Profile "addons-379000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-379000"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:939: (dbg) Run:  out/minikube-darwin-arm64 addons disable dashboard -p addons-379000
addons_test.go:939: (dbg) Non-zero exit: out/minikube-darwin-arm64 addons disable dashboard -p addons-379000: exit status 85 (63.09025ms)

-- stdout --
	* Profile "addons-379000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-379000"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

TestHyperKitDriverInstallOrUpdate (10.43s)

=== RUN   TestHyperKitDriverInstallOrUpdate
=== PAUSE TestHyperKitDriverInstallOrUpdate

=== CONT  TestHyperKitDriverInstallOrUpdate
--- PASS: TestHyperKitDriverInstallOrUpdate (10.43s)

TestErrorSpam/start (0.39s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-618000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-618000 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-618000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-618000 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-618000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-618000 start --dry-run
--- PASS: TestErrorSpam/start (0.39s)

TestErrorSpam/status (0.1s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-618000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-618000 status
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-618000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-618000 status: exit status 7 (32.868458ms)

-- stdout --
	nospam-618000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
error_spam_test.go:161: "out/minikube-darwin-arm64 -p nospam-618000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-618000 status" failed: exit status 7
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-618000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-618000 status
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-618000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-618000 status: exit status 7 (32.029625ms)

-- stdout --
	nospam-618000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
error_spam_test.go:161: "out/minikube-darwin-arm64 -p nospam-618000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-618000 status" failed: exit status 7
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-618000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-618000 status
error_spam_test.go:182: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-618000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-618000 status: exit status 7 (32.317333ms)

-- stdout --
	nospam-618000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
error_spam_test.go:184: "out/minikube-darwin-arm64 -p nospam-618000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-618000 status" failed: exit status 7
--- PASS: TestErrorSpam/status (0.10s)
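
Note: with the nospam host stopped, every `status` call prints the component table above and exits 7, and the test treats that pairing as the expected outcome. A Go sketch that reads the exit code the same way (the 7 == stopped mapping is taken from this log, not from documentation):

    package main

    import (
        "errors"
        "fmt"
        "os/exec"
    )

    func main() {
        cmd := exec.Command("out/minikube-darwin-arm64", "-p", "nospam-618000", "status")
        out, err := cmd.Output()
        var ee *exec.ExitError
        switch {
        case err == nil:
            fmt.Printf("running:\n%s", out)
        case errors.As(err, &ee) && ee.ExitCode() == 7:
            fmt.Printf("stopped:\n%s", out) // matches the stdout shown above
        default:
            panic(err)
        }
    }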

TestErrorSpam/pause (0.13s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-618000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-618000 pause
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-618000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-618000 pause: exit status 83 (42.846666ms)

-- stdout --
	* The control-plane node nospam-618000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p nospam-618000"

-- /stdout --
error_spam_test.go:161: "out/minikube-darwin-arm64 -p nospam-618000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-618000 pause" failed: exit status 83
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-618000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-618000 pause
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-618000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-618000 pause: exit status 83 (40.53225ms)

-- stdout --
	* The control-plane node nospam-618000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p nospam-618000"

-- /stdout --
error_spam_test.go:161: "out/minikube-darwin-arm64 -p nospam-618000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-618000 pause" failed: exit status 83
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-618000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-618000 pause
error_spam_test.go:182: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-618000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-618000 pause: exit status 83 (41.159375ms)

-- stdout --
	* The control-plane node nospam-618000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p nospam-618000"

-- /stdout --
error_spam_test.go:184: "out/minikube-darwin-arm64 -p nospam-618000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-618000 pause" failed: exit status 83
--- PASS: TestErrorSpam/pause (0.13s)

TestErrorSpam/unpause (0.13s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-618000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-618000 unpause
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-618000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-618000 unpause: exit status 83 (41.815625ms)

-- stdout --
	* The control-plane node nospam-618000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p nospam-618000"

-- /stdout --
error_spam_test.go:161: "out/minikube-darwin-arm64 -p nospam-618000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-618000 unpause" failed: exit status 83
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-618000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-618000 unpause
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-618000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-618000 unpause: exit status 83 (41.923166ms)

-- stdout --
	* The control-plane node nospam-618000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p nospam-618000"

-- /stdout --
error_spam_test.go:161: "out/minikube-darwin-arm64 -p nospam-618000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-618000 unpause" failed: exit status 83
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-618000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-618000 unpause
error_spam_test.go:182: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-618000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-618000 unpause: exit status 83 (41.825084ms)

-- stdout --
	* The control-plane node nospam-618000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p nospam-618000"

-- /stdout --
error_spam_test.go:184: "out/minikube-darwin-arm64 -p nospam-618000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-618000 unpause" failed: exit status 83
--- PASS: TestErrorSpam/unpause (0.13s)

TestErrorSpam/stop (10.85s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-618000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-618000 stop
error_spam_test.go:159: (dbg) Done: out/minikube-darwin-arm64 -p nospam-618000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-618000 stop: (3.379315583s)
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-618000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-618000 stop
error_spam_test.go:159: (dbg) Done: out/minikube-darwin-arm64 -p nospam-618000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-618000 stop: (3.704887417s)
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-618000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-618000 stop
error_spam_test.go:182: (dbg) Done: out/minikube-darwin-arm64 -p nospam-618000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-618000 stop: (3.76313525s)
--- PASS: TestErrorSpam/stop (10.85s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: /Users/jenkins/minikube-integration/18793-7269/.minikube/files/etc/test/nested/copy/7768/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/CacheCmd/cache/add_remote (3.16s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-darwin-arm64 -p functional-353000 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Done: out/minikube-darwin-arm64 -p functional-353000 cache add registry.k8s.io/pause:3.1: (1.11631575s)
functional_test.go:1045: (dbg) Run:  out/minikube-darwin-arm64 -p functional-353000 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Done: out/minikube-darwin-arm64 -p functional-353000 cache add registry.k8s.io/pause:3.3: (1.085628125s)
functional_test.go:1045: (dbg) Run:  out/minikube-darwin-arm64 -p functional-353000 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.16s)

TestFunctional/serial/CacheCmd/cache/add_local (1.21s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-353000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalserialCacheCmdcacheadd_local4257098861/001
functional_test.go:1085: (dbg) Run:  out/minikube-darwin-arm64 -p functional-353000 cache add minikube-local-cache-test:functional-353000
functional_test.go:1090: (dbg) Run:  out/minikube-darwin-arm64 -p functional-353000 cache delete minikube-local-cache-test:functional-353000
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-353000
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.21s)
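
Note: add_local round-trips a throwaway image through minikube's cache: docker build, `cache add`, `cache delete`, docker rmi. A sketch of the same sequence via os/exec, with the build-context directory being illustrative:

    package main

    import (
        "log"
        "os"
        "os/exec"
    )

    func run(name string, args ...string) {
        cmd := exec.Command(name, args...)
        cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
        if err := cmd.Run(); err != nil {
            log.Fatalf("%s %v: %v", name, args, err)
        }
    }

    func main() {
        tag := "minikube-local-cache-test:functional-353000"
        run("docker", "build", "-t", tag, "./build-context") // illustrative context dir
        run("out/minikube-darwin-arm64", "-p", "functional-353000", "cache", "add", tag)
        run("out/minikube-darwin-arm64", "-p", "functional-353000", "cache", "delete", tag)
        run("docker", "rmi", tag)
    }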

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

TestFunctional/serial/CacheCmd/cache/list (0.04s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-darwin-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.04s)

TestFunctional/serial/CacheCmd/cache/delete (0.07s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.07s)

TestFunctional/parallel/ConfigCmd (0.23s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-353000 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-353000 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-353000 config get cpus: exit status 14 (33.634083ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-353000 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-353000 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-353000 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-353000 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-353000 config get cpus: exit status 14 (38.222541ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.23s)
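
Note: `config get` on an unset key exits 14 with "specified key could not be found in config", which is what the unset/get/set/get cycle above verifies. A sketch of reading a key with a fallback default, assuming only that exit-14 behavior:

    package main

    import (
        "errors"
        "fmt"
        "os/exec"
        "strings"
    )

    func getConfig(key, fallback string) string {
        out, err := exec.Command("out/minikube-darwin-arm64", "-p", "functional-353000",
            "config", "get", key).Output()
        var ee *exec.ExitError
        if errors.As(err, &ee) && ee.ExitCode() == 14 {
            return fallback // key not present in config
        }
        if err != nil {
            panic(err)
        }
        return strings.TrimSpace(string(out))
    }

    func main() {
        fmt.Println("cpus =", getConfig("cpus", "2"))
    }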

TestFunctional/parallel/DryRun (0.22s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-353000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 
functional_test.go:970: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-353000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 : exit status 23 (113.626458ms)

-- stdout --
	* [functional-353000] minikube v1.33.0 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18793
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18793-7269/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18793-7269/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I0503 15:04:46.614357    8317 out.go:291] Setting OutFile to fd 1 ...
	I0503 15:04:46.614474    8317 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0503 15:04:46.614478    8317 out.go:304] Setting ErrFile to fd 2...
	I0503 15:04:46.614480    8317 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0503 15:04:46.614603    8317 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18793-7269/.minikube/bin
	I0503 15:04:46.615620    8317 out.go:298] Setting JSON to false
	I0503 15:04:46.631684    8317 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3857,"bootTime":1714770029,"procs":492,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0503 15:04:46.631758    8317 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0503 15:04:46.636362    8317 out.go:177] * [functional-353000] minikube v1.33.0 on Darwin 14.4.1 (arm64)
	I0503 15:04:46.643545    8317 out.go:177]   - MINIKUBE_LOCATION=18793
	I0503 15:04:46.643605    8317 notify.go:220] Checking for updates...
	I0503 15:04:46.647397    8317 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18793-7269/kubeconfig
	I0503 15:04:46.650527    8317 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0503 15:04:46.653466    8317 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0503 15:04:46.656531    8317 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18793-7269/.minikube
	I0503 15:04:46.659469    8317 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0503 15:04:46.662952    8317 config.go:182] Loaded profile config "functional-353000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0503 15:04:46.663215    8317 driver.go:392] Setting default libvirt URI to qemu:///system
	I0503 15:04:46.667505    8317 out.go:177] * Using the qemu2 driver based on existing profile
	I0503 15:04:46.674519    8317 start.go:297] selected driver: qemu2
	I0503 15:04:46.674527    8317 start.go:901] validating driver "qemu2" against &{Name:functional-353000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:functional-353000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0503 15:04:46.674589    8317 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0503 15:04:46.679501    8317 out.go:177] 
	W0503 15:04:46.683478    8317 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0503 15:04:46.686490    8317 out.go:177] 

** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-353000 --dry-run --alsologtostderr -v=1 --driver=qemu2 
--- PASS: TestFunctional/parallel/DryRun (0.22s)
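
Note: the dry run fails fast because 250MiB is below the 1800MB usable minimum, producing RSRC_INSUFFICIENT_REQ_MEMORY before any VM is created. A minimal sketch of that style of pre-flight check (the function name is illustrative, not minikube's):

    package main

    import "fmt"

    const minUsableMB = 1800 // the minimum quoted in the error above

    func validateRequestedMemory(requestedMB int) error {
        if requestedMB < minUsableMB {
            return fmt.Errorf("requested memory allocation %dMiB is less than the usable minimum of %dMB",
                requestedMB, minUsableMB)
        }
        return nil
    }

    func main() {
        if err := validateRequestedMemory(250); err != nil {
            fmt.Println("X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY:", err)
        }
    }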

TestFunctional/parallel/InternationalLanguage (0.11s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-353000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-353000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 : exit status 23 (114.402209ms)

-- stdout --
	* [functional-353000] minikube v1.33.0 sur Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18793
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18793-7269/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18793-7269/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote qemu2 basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I0503 15:04:46.492818    8313 out.go:291] Setting OutFile to fd 1 ...
	I0503 15:04:46.492925    8313 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0503 15:04:46.492928    8313 out.go:304] Setting ErrFile to fd 2...
	I0503 15:04:46.492930    8313 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0503 15:04:46.493058    8313 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18793-7269/.minikube/bin
	I0503 15:04:46.494498    8313 out.go:298] Setting JSON to false
	I0503 15:04:46.511210    8313 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3857,"bootTime":1714770029,"procs":492,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0503 15:04:46.511290    8313 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0503 15:04:46.517587    8313 out.go:177] * [functional-353000] minikube v1.33.0 sur Darwin 14.4.1 (arm64)
	I0503 15:04:46.523553    8313 out.go:177]   - MINIKUBE_LOCATION=18793
	I0503 15:04:46.523624    8313 notify.go:220] Checking for updates...
	I0503 15:04:46.532540    8313 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18793-7269/kubeconfig
	I0503 15:04:46.535473    8313 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0503 15:04:46.538505    8313 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0503 15:04:46.541539    8313 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18793-7269/.minikube
	I0503 15:04:46.544532    8313 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0503 15:04:46.547834    8313 config.go:182] Loaded profile config "functional-353000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0503 15:04:46.548093    8313 driver.go:392] Setting default libvirt URI to qemu:///system
	I0503 15:04:46.552484    8313 out.go:177] * Utilisation du pilote qemu2 basé sur le profil existant
	I0503 15:04:46.559484    8313 start.go:297] selected driver: qemu2
	I0503 15:04:46.559491    8313 start.go:901] validating driver "qemu2" against &{Name:functional-353000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:functional-353000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0503 15:04:46.559543    8313 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0503 15:04:46.565471    8313 out.go:177] 
	W0503 15:04:46.569507    8313 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0503 15:04:46.573494    8313 out.go:177] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.11s)

TestFunctional/parallel/AddonsCmd (0.12s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1686: (dbg) Run:  out/minikube-darwin-arm64 -p functional-353000 addons list
functional_test.go:1698: (dbg) Run:  out/minikube-darwin-arm64 -p functional-353000 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.12s)

TestFunctional/parallel/License (0.2s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-darwin-arm64 license
--- PASS: TestFunctional/parallel/License (0.20s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-353000 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.14s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1266: (dbg) Run:  out/minikube-darwin-arm64 profile lis
functional_test.go:1271: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.14s)

TestFunctional/parallel/ProfileCmd/profile_list (0.11s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1306: (dbg) Run:  out/minikube-darwin-arm64 profile list
functional_test.go:1311: Took "71.777708ms" to run "out/minikube-darwin-arm64 profile list"
functional_test.go:1320: (dbg) Run:  out/minikube-darwin-arm64 profile list -l
functional_test.go:1325: Took "36.535583ms" to run "out/minikube-darwin-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.11s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.11s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1357: (dbg) Run:  out/minikube-darwin-arm64 profile list -o json
functional_test.go:1362: Took "72.11475ms" to run "out/minikube-darwin-arm64 profile list -o json"
functional_test.go:1370: (dbg) Run:  out/minikube-darwin-arm64 profile list -o json --light
functional_test.go:1375: Took "35.735792ms" to run "out/minikube-darwin-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.11s)
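
Note: `profile list -o json` (and the --light variant) returns machine-readable profile data. This log does not show the JSON schema, so the sketch below decodes into a generic value rather than asserting field names:

    package main

    import (
        "encoding/json"
        "fmt"
        "os/exec"
    )

    func main() {
        out, err := exec.Command("out/minikube-darwin-arm64",
            "profile", "list", "-o", "json").Output()
        if err != nil {
            panic(err)
        }
        var profiles any // schema not asserted here
        if err := json.Unmarshal(out, &profiles); err != nil {
            panic(err)
        }
        fmt.Printf("%v\n", profiles)
    }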

TestFunctional/parallel/Version/short (0.04s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-darwin-arm64 -p functional-353000 version --short
--- PASS: TestFunctional/parallel/Version/short (0.04s)

TestFunctional/parallel/ImageCommands/Setup (2.02s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:341: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (1.978060125s)
functional_test.go:346: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-353000
--- PASS: TestFunctional/parallel/ImageCommands/Setup (2.02s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.08s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-darwin-arm64 -p functional-353000 image rm gcr.io/google-containers/addon-resizer:functional-353000 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-353000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.08s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.13s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-353000
functional_test.go:423: (dbg) Run:  out/minikube-darwin-arm64 -p functional-353000 image save --daemon gcr.io/google-containers/addon-resizer:functional-353000 --alsologtostderr
functional_test.go:428: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-353000
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.13s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (10.04s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:351: (dbg) Run:  dscacheutil -q host -a name nginx-svc.default.svc.cluster.local.
functional_test_tunnel_test.go:351: (dbg) Done: dscacheutil -q host -a name nginx-svc.default.svc.cluster.local.: (10.012050625s)
functional_test_tunnel_test.go:359: DNS resolution by dscacheutil for nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (10.04s)
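
Note: once `minikube tunnel` publishes the service, its cluster DNS name resolves on the host, which is what the dscacheutil query above confirms. A sketch of the same lookup from Go, assuming the tunnel is still running for the profile:

    package main

    import (
        "fmt"
        "net"
    )

    func main() {
        addrs, err := net.LookupHost("nginx-svc.default.svc.cluster.local.")
        if err != nil {
            fmt.Println("not resolvable (is the tunnel running?):", err)
            return
        }
        fmt.Println("resolved to:", addrs)
    }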

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.16s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-darwin-arm64 -p functional-353000 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.16s)

TestFunctional/delete_addon-resizer_images (0.17s)

=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-353000
--- PASS: TestFunctional/delete_addon-resizer_images (0.17s)

TestFunctional/delete_my-image_image (0.04s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-353000
--- PASS: TestFunctional/delete_my-image_image (0.04s)

TestFunctional/delete_minikube_cached_images (0.04s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-353000
--- PASS: TestFunctional/delete_minikube_cached_images (0.04s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (3.43s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 stop -p json-output-666000 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-darwin-arm64 stop -p json-output-666000 --output=json --user=testUser: (3.427907708s)
--- PASS: TestJSONOutput/stop/Command (3.43s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.33s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-darwin-arm64 start -p json-output-error-144000 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p json-output-error-144000 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (99.02675ms)

-- stdout --
	{"specversion":"1.0","id":"f9b44231-b525-4cf2-a014-b1bfa047ac87","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-144000] minikube v1.33.0 on Darwin 14.4.1 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"78b5fc66-8247-4d17-a1f6-53c07a778cb9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=18793"}}
	{"specversion":"1.0","id":"60b09cfc-1736-47bb-bb27-3f8055d07c49","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/18793-7269/kubeconfig"}}
	{"specversion":"1.0","id":"78c00b61-091d-4d98-874a-e73aeb7772e6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"c1ed933a-7989-4829-b506-9857bbd0672d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"3eb0fa00-ae4b-4b3c-a1b6-a24409c5cc3f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/18793-7269/.minikube"}}
	{"specversion":"1.0","id":"64a9e873-5081-4fe4-866d-c131fb5ecd90","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"392c6071-367a-4161-afca-35aaea606e5c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on darwin/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-144000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p json-output-error-144000
--- PASS: TestErrorJSONOutput (0.33s)
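
Each stdout line above is a CloudEvents-style JSON event produced by --output=json. As a rough illustration of the format only, this hypothetical struct (not minikube's own type) decodes the final error event:

package main

import (
	"encoding/json"
	"fmt"
)

// event mirrors the fields visible in the log above; illustrative only.
type event struct {
	SpecVersion string            `json:"specversion"`
	Type        string            `json:"type"`
	Data        map[string]string `json:"data"`
}

func main() {
	line := `{"specversion":"1.0","type":"io.k8s.sigs.minikube.error","data":{"exitcode":"56","message":"The driver 'fail' is not supported on darwin/arm64","name":"DRV_UNSUPPORTED_OS"}}`
	var ev event
	if err := json.Unmarshal([]byte(line), &ev); err != nil {
		panic(err)
	}
	fmt.Println(ev.Type, ev.Data["exitcode"], ev.Data["message"])
}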

TestMainNoArgs (0.03s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-darwin-arm64
--- PASS: TestMainNoArgs (0.03s)

TestStoppedBinaryUpgrade/Setup (0.99s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.99s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.1s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-626000 --no-kubernetes --kubernetes-version=1.20 --driver=qemu2 
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-626000 --no-kubernetes --kubernetes-version=1.20 --driver=qemu2 : exit status 14 (98.600916ms)

-- stdout --
	* [NoKubernetes-626000] minikube v1.33.0 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=18793
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18793-7269/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18793-7269/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.10s)
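
The usage error above (MK_USAGE, exit status 14) is the expected outcome: --no-kubernetes and --kubernetes-version are mutually exclusive. A minimal sketch of that kind of validation, assuming nothing about minikube's internals:

package main

import (
	"errors"
	"fmt"
)

// validateStartFlags is a hypothetical stand-in for the check this test
// exercises; it is not minikube's actual code.
func validateStartFlags(noKubernetes bool, kubernetesVersion string) error {
	if noKubernetes && kubernetesVersion != "" {
		return errors.New("cannot specify --kubernetes-version with --no-kubernetes")
	}
	return nil
}

func main() {
	// Mirrors the invocation above: --no-kubernetes --kubernetes-version=1.20
	fmt.Println(validateStartFlags(true, "1.20"))
}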

TestNoKubernetes/serial/VerifyK8sNotRunning (0.04s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-arm64 ssh -p NoKubernetes-626000 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p NoKubernetes-626000 "sudo systemctl is-active --quiet service kubelet": exit status 83 (43.805791ms)

-- stdout --
	* The control-plane node NoKubernetes-626000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p NoKubernetes-626000"

-- /stdout --
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.04s)
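
Exit status 83 above is what minikube returns when the profile's host is stopped; the harness detects it by running the binary and inspecting the exit error. A rough sketch of that pattern with os/exec (arguments copied from the log, error handling simplified):

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-darwin-arm64", "ssh", "-p", "NoKubernetes-626000",
		"sudo systemctl is-active --quiet service kubelet")
	out, err := cmd.CombinedOutput()
	var ee *exec.ExitError
	if errors.As(err, &ee) {
		fmt.Printf("exit status %d\n%s", ee.ExitCode(), out) // here: exit status 83
	}
}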

TestNoKubernetes/serial/ProfileList (31.47s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-darwin-arm64 profile list
no_kubernetes_test.go:169: (dbg) Done: out/minikube-darwin-arm64 profile list: (15.655373417s)
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-darwin-arm64 profile list --output=json
no_kubernetes_test.go:179: (dbg) Done: out/minikube-darwin-arm64 profile list --output=json: (15.814385292s)
--- PASS: TestNoKubernetes/serial/ProfileList (31.47s)

TestNoKubernetes/serial/Stop (3.38s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-darwin-arm64 stop -p NoKubernetes-626000
no_kubernetes_test.go:158: (dbg) Done: out/minikube-darwin-arm64 stop -p NoKubernetes-626000: (3.379689417s)
--- PASS: TestNoKubernetes/serial/Stop (3.38s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.05s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-arm64 ssh -p NoKubernetes-626000 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p NoKubernetes-626000 "sudo systemctl is-active --quiet service kubelet": exit status 83 (46.01875ms)

-- stdout --
	* The control-plane node NoKubernetes-626000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p NoKubernetes-626000"

-- /stdout --
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.05s)

TestStoppedBinaryUpgrade/MinikubeLogs (0.68s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-darwin-arm64 logs -p stopped-upgrade-139000
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.68s)

TestStartStop/group/old-k8s-version/serial/Stop (3.54s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p old-k8s-version-698000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p old-k8s-version-698000 --alsologtostderr -v=3: (3.535658459s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (3.54s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.13s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-698000 -n old-k8s-version-698000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-698000 -n old-k8s-version-698000: exit status 7 (58.445041ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p old-k8s-version-698000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.13s)
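
The status checks in these EnableAddonAfterStop steps pass --format={{.Host}}, a Go text/template rendered over minikube's status struct, which is why stdout is just the bare word "Stopped". A minimal sketch of the same mechanism (the Status type here is illustrative, not minikube's real one):

package main

import (
	"os"
	"text/template"
)

type Status struct{ Host string }

func main() {
	// Renders only the Host field, mirroring `status --format={{.Host}}`.
	tmpl := template.Must(template.New("status").Parse("{{.Host}}"))
	_ = tmpl.Execute(os.Stdout, Status{Host: "Stopped"}) // prints: Stopped
}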

TestStartStop/group/no-preload/serial/Stop (3.35s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p no-preload-315000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p no-preload-315000 --alsologtostderr -v=3: (3.347992625s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (3.35s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.12s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-315000 -n no-preload-315000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-315000 -n no-preload-315000: exit status 7 (46.404625ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p no-preload-315000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.12s)

TestStartStop/group/embed-certs/serial/Stop (1.77s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p embed-certs-347000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p embed-certs-347000 --alsologtostderr -v=3: (1.774808583s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (1.77s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.13s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-347000 -n embed-certs-347000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-347000 -n embed-certs-347000: exit status 7 (58.188334ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p embed-certs-347000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.13s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (3.02s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p default-k8s-diff-port-349000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p default-k8s-diff-port-349000 --alsologtostderr -v=3: (3.021918834s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (3.02s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.13s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-349000 -n default-k8s-diff-port-349000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-349000 -n default-k8s-diff-port-349000: exit status 7 (57.070917ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p default-k8s-diff-port-349000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.13s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.06s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p newest-cni-367000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.06s)

TestStartStop/group/newest-cni/serial/Stop (3.14s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p newest-cni-367000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p newest-cni-367000 --alsologtostderr -v=3: (3.1424075s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (3.14s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.13s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-367000 -n newest-cni-367000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-367000 -n newest-cni-367000: exit status 7 (61.378541ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p newest-cni-367000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.13s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

Test skip (22/258)

TestDownloadOnly/v1.20.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

TestDownloadOnly/v1.20.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

TestDownloadOnly/v1.30.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.30.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.30.0/cached-images (0.00s)

TestDownloadOnly/v1.30.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.30.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.30.0/binaries (0.00s)

TestDownloadOnlyKic (0s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker false darwin arm64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestKVMDriverInstallOrUpdate (0s)

=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:41: Skip if not linux.
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestFunctional/parallel/MySQL (0s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1783: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/MountCmd/any-port (13.12s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-353000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdany-port2596972673/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1714773846152141000" to /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdany-port2596972673/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1714773846152141000" to /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdany-port2596972673/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1714773846152141000" to /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdany-port2596972673/001/test-1714773846152141000
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-353000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-353000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (59.852959ms)

-- stdout --
	* The control-plane node functional-353000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-353000"

-- /stdout --
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-353000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-353000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (85.264084ms)

-- stdout --
	* The control-plane node functional-353000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-353000"

-- /stdout --
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-353000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-353000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (90.323791ms)

-- stdout --
	* The control-plane node functional-353000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-353000"

-- /stdout --
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-353000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-353000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (89.119375ms)

-- stdout --
	* The control-plane node functional-353000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-353000"

-- /stdout --
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-353000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-353000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (88.659875ms)

-- stdout --
	* The control-plane node functional-353000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-353000"

-- /stdout --
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-353000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-353000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (89.489083ms)

-- stdout --
	* The control-plane node functional-353000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-353000"

-- /stdout --
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-353000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-353000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (91.212209ms)

-- stdout --
	* The control-plane node functional-353000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-353000"

-- /stdout --
functional_test_mount_test.go:123: skipping: mount did not appear, likely because macOS requires prompt to allow non-code signed binaries to listen on non-localhost port
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-darwin-arm64 -p functional-353000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:90: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-353000 ssh "sudo umount -f /mount-9p": exit status 83 (47.791959ms)

-- stdout --
	* The control-plane node functional-353000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-353000"

-- /stdout --
functional_test_mount_test.go:92: "out/minikube-darwin-arm64 -p functional-353000 ssh \"sudo umount -f /mount-9p\"": exit status 83
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-353000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdany-port2596972673/001:/mount-9p --alsologtostderr -v=1] ...
--- SKIP: TestFunctional/parallel/MountCmd/any-port (13.12s)
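
The seven identical findmnt attempts above come from a poll-until-visible loop: the harness retries until the 9p mount shows up or it gives up and skips, since macOS requires an interactive prompt before a non-code-signed binary may listen on a non-localhost port. A minimal sketch of that retry shape (names and timings are assumptions, not minikube's code):

package main

import (
	"errors"
	"fmt"
	"time"
)

// waitFor retries check until it succeeds or attempts run out,
// returning the last error.
func waitFor(check func() error, attempts int, delay time.Duration) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = check(); err == nil {
			return nil
		}
		time.Sleep(delay)
	}
	return err
}

func main() {
	err := waitFor(func() error {
		// Stand-in for: ssh "findmnt -T /mount-9p | grep 9p"
		return errors.New("exit status 83")
	}, 7, 2*time.Second)
	fmt.Println("mount never appeared:", err) // so the test skips, as above
}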

TestFunctional/parallel/MountCmd/specific-port (12.16s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-353000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdspecific-port1623117168/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-353000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-353000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (62.804167ms)

-- stdout --
	* The control-plane node functional-353000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-353000"

-- /stdout --
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-353000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-353000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (86.423541ms)

-- stdout --
	* The control-plane node functional-353000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-353000"

-- /stdout --
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-353000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-353000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (89.292417ms)

-- stdout --
	* The control-plane node functional-353000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-353000"

-- /stdout --
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-353000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-353000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (90.186459ms)

-- stdout --
	* The control-plane node functional-353000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-353000"

-- /stdout --
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-353000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-353000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (90.956209ms)

-- stdout --
	* The control-plane node functional-353000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-353000"

-- /stdout --
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-353000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-353000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (85.75575ms)

-- stdout --
	* The control-plane node functional-353000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-353000"

-- /stdout --
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-353000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-353000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (86.904292ms)

-- stdout --
	* The control-plane node functional-353000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-353000"

-- /stdout --
functional_test_mount_test.go:251: skipping: mount did not appear, likely because macOS requires prompt to allow non-code signed binaries to listen on non-localhost port
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-darwin-arm64 -p functional-353000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-353000 ssh "sudo umount -f /mount-9p": exit status 83 (46.823916ms)

-- stdout --
	* The control-plane node functional-353000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-353000"

-- /stdout --
functional_test_mount_test.go:232: "out/minikube-darwin-arm64 -p functional-353000 ssh \"sudo umount -f /mount-9p\"": exit status 83
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-353000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdspecific-port1623117168/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- SKIP: TestFunctional/parallel/MountCmd/specific-port (12.16s)

TestFunctional/parallel/MountCmd/VerifyCleanup (14.88s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-353000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup4007256778/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-353000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup4007256778/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-353000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup4007256778/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-353000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-353000 ssh "findmnt -T" /mount1: exit status 83 (77.035417ms)

-- stdout --
	* The control-plane node functional-353000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-353000"

-- /stdout --
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-353000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-353000 ssh "findmnt -T" /mount1: exit status 83 (86.253083ms)

-- stdout --
	* The control-plane node functional-353000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-353000"

-- /stdout --
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-353000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-353000 ssh "findmnt -T" /mount1: exit status 83 (85.175084ms)

-- stdout --
	* The control-plane node functional-353000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-353000"

-- /stdout --
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-353000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-353000 ssh "findmnt -T" /mount1: exit status 83 (92.7935ms)

-- stdout --
	* The control-plane node functional-353000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-353000"

-- /stdout --
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-353000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-353000 ssh "findmnt -T" /mount1: exit status 83 (88.889375ms)

-- stdout --
	* The control-plane node functional-353000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-353000"

-- /stdout --
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-353000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-353000 ssh "findmnt -T" /mount1: exit status 83 (89.462292ms)

-- stdout --
	* The control-plane node functional-353000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-353000"

-- /stdout --
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-353000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-353000 ssh "findmnt -T" /mount1: exit status 83 (89.057916ms)

-- stdout --
	* The control-plane node functional-353000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-353000"

-- /stdout --
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-353000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-353000 ssh "findmnt -T" /mount1: exit status 83 (88.745833ms)

-- stdout --
	* The control-plane node functional-353000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-353000"

-- /stdout --
functional_test_mount_test.go:340: skipping: mount did not appear, likely because macOS requires prompt to allow non-code signed binaries to listen on non-localhost port
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-353000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup4007256778/001:/mount1 --alsologtostderr -v=1] ...
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-353000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup4007256778/001:/mount2 --alsologtostderr -v=1] ...
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-353000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup4007256778/001:/mount3 --alsologtostderr -v=1] ...
--- SKIP: TestFunctional/parallel/MountCmd/VerifyCleanup (14.88s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestKicCustomNetwork (0s)

=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

TestKicExistingNetwork (0s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

TestKicCustomSubnet (0s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

TestKicStaticIP (0s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestInsufficientStorage (0s)

=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

TestMissingContainerUpgrade (0s)

=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

TestNetworkPlugins/group/cilium (2.48s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:626: 
----------------------- debugLogs start: cilium-874000 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-874000

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-874000

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-874000

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-874000

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-874000

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-874000

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-874000

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-874000

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-874000

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-874000

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-874000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-874000"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-874000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-874000"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-874000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-874000"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-874000

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-874000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-874000"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-874000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-874000"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-874000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-874000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-874000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-874000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-874000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-874000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-874000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-874000" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-874000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-874000"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-874000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-874000"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-874000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-874000"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-874000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-874000"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-874000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-874000"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-874000

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-874000

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-874000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-874000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-874000

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-874000

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-874000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-874000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-874000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-874000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-874000" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-874000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-874000"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-874000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-874000"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-874000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-874000"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-874000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-874000"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-874000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-874000"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-874000

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-874000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-874000"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-874000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-874000"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-874000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-874000"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-874000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-874000"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-874000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-874000"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-874000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-874000"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-874000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-874000"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-874000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-874000"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "cilium-874000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-874000"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "cilium-874000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-874000"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "cilium-874000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-874000"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-874000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-874000"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "cilium-874000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-874000"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "cilium-874000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-874000"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "cilium-874000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-874000"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "cilium-874000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-874000"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "cilium-874000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-874000"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "cilium-874000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-874000"

                                                
                                                
----------------------- debugLogs end: cilium-874000 [took: 2.255358125s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-874000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cilium-874000
--- SKIP: TestNetworkPlugins/group/cilium (2.48s)

TestStartStop/group/disable-driver-mounts (0.23s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-066000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p disable-driver-mounts-066000
--- SKIP: TestStartStop/group/disable-driver-mounts (0.23s)
