Test Report: QEMU_macOS 19479

913baf54a454bfbef3be1ea09a51779f85ec9369:2024-08-19:35854

Tests failed (156/258)

Order  Failed test  Duration (s)
3 TestDownloadOnly/v1.20.0/json-events 11.74
7 TestDownloadOnly/v1.20.0/kubectl 0
22 TestOffline 9.99
27 TestAddons/Setup 10.78
28 TestCertOptions 10.1
29 TestCertExpiration 195.08
30 TestDockerFlags 10.11
31 TestForceSystemdFlag 10.29
32 TestForceSystemdEnv 11.75
38 TestErrorSpam/setup 9.84
47 TestFunctional/serial/StartWithProxy 10
49 TestFunctional/serial/SoftStart 5.26
50 TestFunctional/serial/KubeContext 0.06
51 TestFunctional/serial/KubectlGetPods 0.06
58 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.04
59 TestFunctional/serial/CacheCmd/cache/cache_reload 0.16
61 TestFunctional/serial/MinikubeKubectlCmd 0.78
62 TestFunctional/serial/MinikubeKubectlCmdDirectly 1.05
63 TestFunctional/serial/ExtraConfig 5.25
64 TestFunctional/serial/ComponentHealth 0.06
65 TestFunctional/serial/LogsCmd 0.08
66 TestFunctional/serial/LogsFileCmd 0.08
67 TestFunctional/serial/InvalidService 0.03
70 TestFunctional/parallel/DashboardCmd 0.2
73 TestFunctional/parallel/StatusCmd 0.12
77 TestFunctional/parallel/ServiceCmdConnect 0.13
79 TestFunctional/parallel/PersistentVolumeClaim 0.03
81 TestFunctional/parallel/SSHCmd 0.12
82 TestFunctional/parallel/CpCmd 0.27
84 TestFunctional/parallel/FileSync 0.07
85 TestFunctional/parallel/CertSync 0.29
89 TestFunctional/parallel/NodeLabels 0.06
91 TestFunctional/parallel/NonActiveRuntimeDisabled 0.04
95 TestFunctional/parallel/Version/components 0.04
96 TestFunctional/parallel/ImageCommands/ImageListShort 0.04
97 TestFunctional/parallel/ImageCommands/ImageListTable 0.04
98 TestFunctional/parallel/ImageCommands/ImageListJson 0.04
99 TestFunctional/parallel/ImageCommands/ImageListYaml 0.04
100 TestFunctional/parallel/ImageCommands/ImageBuild 0.11
102 TestFunctional/parallel/DockerEnv/bash 0.04
103 TestFunctional/parallel/UpdateContextCmd/no_changes 0.04
104 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.04
105 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.04
106 TestFunctional/parallel/ServiceCmd/DeployApp 0.03
107 TestFunctional/parallel/ServiceCmd/List 0.05
108 TestFunctional/parallel/ServiceCmd/JSONOutput 0.04
109 TestFunctional/parallel/ServiceCmd/HTTPS 0.04
110 TestFunctional/parallel/ServiceCmd/Format 0.04
111 TestFunctional/parallel/ServiceCmd/URL 0.04
113 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.07
116 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 0
117 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 110.51
118 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 0.32
119 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.28
120 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.12
121 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.04
123 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.07
131 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 15.06
133 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 38.23
141 TestMultiControlPlane/serial/StartCluster 9.93
142 TestMultiControlPlane/serial/DeployApp 80.7
143 TestMultiControlPlane/serial/PingHostFromPods 0.09
144 TestMultiControlPlane/serial/AddWorkerNode 0.07
145 TestMultiControlPlane/serial/NodeLabels 0.06
146 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.08
147 TestMultiControlPlane/serial/CopyFile 0.06
148 TestMultiControlPlane/serial/StopSecondaryNode 0.11
149 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.08
150 TestMultiControlPlane/serial/RestartSecondaryNode 46.78
151 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.08
152 TestMultiControlPlane/serial/RestartClusterKeepsNodes 8.22
153 TestMultiControlPlane/serial/DeleteSecondaryNode 0.11
154 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.08
155 TestMultiControlPlane/serial/StopCluster 3.45
156 TestMultiControlPlane/serial/RestartCluster 5.26
157 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.08
158 TestMultiControlPlane/serial/AddSecondaryNode 0.07
159 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.07
162 TestImageBuild/serial/Setup 9.93
165 TestJSONOutput/start/Command 9.86
171 TestJSONOutput/pause/Command 0.08
177 TestJSONOutput/unpause/Command 0.05
194 TestMinikubeProfile 10.18
197 TestMountStart/serial/StartWithMountFirst 10.07
200 TestMultiNode/serial/FreshStart2Nodes 9.88
201 TestMultiNode/serial/DeployApp2Nodes 110.14
202 TestMultiNode/serial/PingHostFrom2Pods 0.09
203 TestMultiNode/serial/AddNode 0.07
204 TestMultiNode/serial/MultiNodeLabels 0.06
205 TestMultiNode/serial/ProfileList 0.08
206 TestMultiNode/serial/CopyFile 0.06
207 TestMultiNode/serial/StopNode 0.14
208 TestMultiNode/serial/StartAfterStop 56.37
209 TestMultiNode/serial/RestartKeepsNodes 9.06
210 TestMultiNode/serial/DeleteNode 0.11
211 TestMultiNode/serial/StopMultiNode 3.55
212 TestMultiNode/serial/RestartMultiNode 5.25
213 TestMultiNode/serial/ValidateNameConflict 21.53
217 TestPreload 10.12
219 TestScheduledStopUnix 10.15
220 TestSkaffold 12.67
223 TestRunningBinaryUpgrade 590.16
225 TestKubernetesUpgrade 18.62
238 TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current 2.35
239 TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current 1.64
241 TestStoppedBinaryUpgrade/Upgrade 575.38
243 TestPause/serial/Start 10.03
253 TestNoKubernetes/serial/StartWithK8s 9.76
254 TestNoKubernetes/serial/StartWithStopK8s 5.32
255 TestNoKubernetes/serial/Start 5.31
259 TestNoKubernetes/serial/StartNoArgs 5.3
261 TestNetworkPlugins/group/auto/Start 9.92
262 TestNetworkPlugins/group/kindnet/Start 9.83
263 TestNetworkPlugins/group/calico/Start 9.93
264 TestNetworkPlugins/group/custom-flannel/Start 9.89
265 TestNetworkPlugins/group/false/Start 9.78
266 TestNetworkPlugins/group/enable-default-cni/Start 9.77
267 TestNetworkPlugins/group/flannel/Start 9.99
268 TestNetworkPlugins/group/bridge/Start 9.73
269 TestNetworkPlugins/group/kubenet/Start 9.88
271 TestStartStop/group/old-k8s-version/serial/FirstStart 9.76
273 TestStartStop/group/old-k8s-version/serial/DeployApp 0.09
274 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 0.12
277 TestStartStop/group/old-k8s-version/serial/SecondStart 5.21
278 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 0.03
279 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 0.06
280 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.07
281 TestStartStop/group/old-k8s-version/serial/Pause 0.1
283 TestStartStop/group/no-preload/serial/FirstStart 9.88
284 TestStartStop/group/no-preload/serial/DeployApp 0.09
285 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.11
288 TestStartStop/group/no-preload/serial/SecondStart 5.25
289 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 0.03
290 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 0.06
291 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.07
292 TestStartStop/group/no-preload/serial/Pause 0.1
294 TestStartStop/group/embed-certs/serial/FirstStart 10.07
296 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 12.05
297 TestStartStop/group/embed-certs/serial/DeployApp 0.1
298 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.13
300 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 0.09
301 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.11
304 TestStartStop/group/embed-certs/serial/SecondStart 5.25
306 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 5.26
307 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 0.03
308 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 0.06
309 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.07
310 TestStartStop/group/embed-certs/serial/Pause 0.1
312 TestStartStop/group/newest-cni/serial/FirstStart 9.85
313 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 0.03
314 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 0.06
315 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.07
316 TestStartStop/group/default-k8s-diff-port/serial/Pause 0.1
321 TestStartStop/group/newest-cni/serial/SecondStart 5.26
324 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.07
325 TestStartStop/group/newest-cni/serial/Pause 0.1

TestDownloadOnly/v1.20.0/json-events (11.74s)

=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -o=json --download-only -p download-only-648000 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=qemu2 
aaa_download_only_test.go:81: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -o=json --download-only -p download-only-648000 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=qemu2 : exit status 40 (11.735256458s)

-- stdout --
	{"specversion":"1.0","id":"71260d97-c628-4fe2-b27f-448809249f51","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[download-only-648000] minikube v1.33.1 on Darwin 14.5 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"9d80a614-5c2a-44ee-b19a-4da2bddcaf5c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19479"}}
	{"specversion":"1.0","id":"1c6a8699-a958-41b5-9b1c-c053470a675f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/19479-15750/kubeconfig"}}
	{"specversion":"1.0","id":"9028977e-f2de-4ef7-9e1d-fa1cc1bfd901","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"98d637b6-e884-4ea1-988c-7ec6474eae11","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"9a8767c5-5dee-4baa-b3a5-b67ef563c86c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/19479-15750/.minikube"}}
	{"specversion":"1.0","id":"f038fdb7-2c1f-4b07-b989-14af52f5b346","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.warning","datacontenttype":"application/json","data":{"message":"minikube skips various validations when --force is supplied; this may lead to unexpected behavior"}}
	{"specversion":"1.0","id":"d6d7ad6c-7a61-4dea-9b50-48a1dd9a35c1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the qemu2 driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"37f1fdca-aafa-45ea-8419-095db34aacc6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Automatically selected the socket_vmnet network"}}
	{"specversion":"1.0","id":"dfb29f30-d2c1-4a8c-9399-bf11002b2ed5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Downloading VM boot image ...","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"15e2188e-04e0-43e6-900e-6abe6a0646a9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"download-only-648000\" primary control-plane node in \"download-only-648000\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"8144bedf-2d88-4a80-89c4-aa3614787dc0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Downloading Kubernetes v1.20.0 preload ...","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"2c91b18f-7791-4499-858d-66fcfb58e0e0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"40","issues":"","message":"Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256: getter: \u0026{Ctx:context.Background Src:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/19479-15750/.minikube/cache/darwin/arm64/v1.20.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x1091b7960 0x1091b7960 0x1091b7960 0x1091b7960 0x1091b7960 0x1091b7960 0x1091b7960] Decompressors:map[bz2:0x14000894ff0 gz:0x14000894ff8 tar:0x14000894fa0 tar.bz2:0x14000894fb0 tar.gz:0x14000894fc0 tar.xz:0x14000894fd0 tar.zst:0x14000894fe0 tbz2:0x14000894fb0 tgz:0x1
4000894fc0 txz:0x14000894fd0 tzst:0x14000894fe0 xz:0x14000895000 zip:0x14000895010 zst:0x14000895008] Getters:map[file:0x1400070fcc0 http:0x14000620320 https:0x14000620370] Dir:false ProgressListener:\u003cnil\u003e Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404","name":"INET_CACHE_KUBECTL","url":""}}
	{"specversion":"1.0","id":"62d400a6-2e38-4b37-b37c-1534af0f4a5a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│
│\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

-- /stdout --
** stderr ** 
	I0819 04:15:51.804289   16242 out.go:345] Setting OutFile to fd 1 ...
	I0819 04:15:51.804436   16242 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 04:15:51.804439   16242 out.go:358] Setting ErrFile to fd 2...
	I0819 04:15:51.804441   16242 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 04:15:51.804586   16242 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19479-15750/.minikube/bin
	W0819 04:15:51.804674   16242 root.go:314] Error reading config file at /Users/jenkins/minikube-integration/19479-15750/.minikube/config/config.json: open /Users/jenkins/minikube-integration/19479-15750/.minikube/config/config.json: no such file or directory
	I0819 04:15:51.805960   16242 out.go:352] Setting JSON to true
	I0819 04:15:51.822245   16242 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":8119,"bootTime":1724058032,"procs":469,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0819 04:15:51.822315   16242 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0819 04:15:51.828641   16242 out.go:97] [download-only-648000] minikube v1.33.1 on Darwin 14.5 (arm64)
	W0819 04:15:51.828807   16242 preload.go:293] Failed to list preload files: open /Users/jenkins/minikube-integration/19479-15750/.minikube/cache/preloaded-tarball: no such file or directory
	I0819 04:15:51.828818   16242 notify.go:220] Checking for updates...
	I0819 04:15:51.832531   16242 out.go:169] MINIKUBE_LOCATION=19479
	I0819 04:15:51.835537   16242 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/19479-15750/kubeconfig
	I0819 04:15:51.838657   16242 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0819 04:15:51.841589   16242 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0819 04:15:51.845543   16242 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/19479-15750/.minikube
	W0819 04:15:51.851549   16242 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0819 04:15:51.851796   16242 driver.go:392] Setting default libvirt URI to qemu:///system
	I0819 04:15:51.854501   16242 out.go:97] Using the qemu2 driver based on user configuration
	I0819 04:15:51.854524   16242 start.go:297] selected driver: qemu2
	I0819 04:15:51.854540   16242 start.go:901] validating driver "qemu2" against <nil>
	I0819 04:15:51.854626   16242 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0819 04:15:51.857540   16242 out.go:169] Automatically selected the socket_vmnet network
	I0819 04:15:51.863715   16242 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0819 04:15:51.863809   16242 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0819 04:15:51.863896   16242 cni.go:84] Creating CNI manager for ""
	I0819 04:15:51.863912   16242 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0819 04:15:51.863958   16242 start.go:340] cluster config:
	{Name:download-only-648000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-648000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Con
tainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSo
ck: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 04:15:51.867726   16242 iso.go:125] acquiring lock: {Name:mk82d926f4bb778c7e93ba3bc4244459b219e238 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 04:15:51.870528   16242 out.go:97] Downloading VM boot image ...
	I0819 04:15:51.870564   16242 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso.sha256 -> /Users/jenkins/minikube-integration/19479-15750/.minikube/cache/iso/arm64/minikube-v1.33.1-1723740674-19452-arm64.iso
	I0819 04:15:56.559796   16242 out.go:97] Starting "download-only-648000" primary control-plane node in "download-only-648000" cluster
	I0819 04:15:56.559814   16242 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0819 04:15:56.621551   16242 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0819 04:15:56.621571   16242 cache.go:56] Caching tarball of preloaded images
	I0819 04:15:56.621740   16242 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0819 04:15:56.626847   16242 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0819 04:15:56.626854   16242 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0819 04:15:56.722295   16242 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4?checksum=md5:1a3e8f9b29e6affec63d76d0d3000942 -> /Users/jenkins/minikube-integration/19479-15750/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0819 04:16:02.443757   16242 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0819 04:16:02.444141   16242 preload.go:254] verifying checksum of /Users/jenkins/minikube-integration/19479-15750/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0819 04:16:03.139481   16242 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0819 04:16:03.139661   16242 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19479-15750/.minikube/profiles/download-only-648000/config.json ...
	I0819 04:16:03.139677   16242 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19479-15750/.minikube/profiles/download-only-648000/config.json: {Name:mkee9fb3453e616fe0a206e2298a15c750642a94 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 04:16:03.139903   16242 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0819 04:16:03.140104   16242 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 -> /Users/jenkins/minikube-integration/19479-15750/.minikube/cache/darwin/arm64/v1.20.0/kubectl
	I0819 04:16:03.462041   16242 out.go:193] 
	W0819 04:16:03.467264   16242 out_reason.go:110] Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256: getter: &{Ctx:context.Background Src:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/19479-15750/.minikube/cache/darwin/arm64/v1.20.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x1091b7960 0x1091b7960 0x1091b7960 0x1091b7960 0x1091b7960 0x1091b7960 0x1091b7960] Decompressors:map[bz2:0x14000894ff0 gz:0x14000894ff8 tar:0x14000894fa0 tar.bz2:0x14000894fb0 tar.gz:0x14000894fc0 tar.xz:0x14000894fd0 tar.zst:0x14000894fe0 tbz2:0x14000894fb0 tgz:0x14000894fc0 txz:0x14000894fd0 tzst:0x14000894fe0 xz:0x14000895000 zip:0x14000895010 zst:0x14000895008] Getters:map[file:0x1400070fcc0 http:0x14000620320 https:0x14000620370] Dir:false ProgressLis
tener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404
	W0819 04:16:03.467285   16242 out_reason.go:110] 
	W0819 04:16:03.474157   16242 out.go:283] ╭───────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                           │
	│    If the above advice does not help, please let us know:                                 │
	│    https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                           │
	│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────╯
	I0819 04:16:03.477148   16242 out.go:193] 

** /stderr **
aaa_download_only_test.go:83: failed to download only. args: ["start" "-o=json" "--download-only" "-p" "download-only-648000" "--force" "--alsologtostderr" "--kubernetes-version=v1.20.0" "--container-runtime=docker" "--driver=qemu2" ""] exit status 40
--- FAIL: TestDownloadOnly/v1.20.0/json-events (11.74s)
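
The failure above boils down to one HTTP 404: minikube hands go-getter a URL of the form <binary>?checksum=file:<binary>.sha256, and the checksum file for darwin/arm64 kubectl v1.20.0 does not exist, most likely because Kubernetes never published Apple-silicon kubectl binaries for that release. A minimal standalone Go sketch (not the test's code; the URL is copied verbatim from the log) that reproduces the 404:

	package main

	import (
		"fmt"
		"net/http"
	)

	func main() {
		// The checksum file referenced by the go-getter URL in the log
		// (the "?checksum=file:..." suffix).
		url := "https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256"
		resp, err := http.Head(url)
		if err != nil {
			fmt.Println("request error:", err)
			return
		}
		resp.Body.Close()
		// Expected: "404 Not Found", matching "bad response code: 404"
		// in the failure message above.
		fmt.Println(url, "->", resp.Status)
	}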

TestDownloadOnly/v1.20.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:175: expected the file for binary exist at "/Users/jenkins/minikube-integration/19479-15750/.minikube/cache/darwin/arm64/v1.20.0/kubectl" but got error stat /Users/jenkins/minikube-integration/19479-15750/.minikube/cache/darwin/arm64/v1.20.0/kubectl: no such file or directory
--- FAIL: TestDownloadOnly/v1.20.0/kubectl (0.00s)
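
This one is just fallout from the previous failure: the download never completed, so the cached binary the test checks for is absent. A tiny sketch of the same existence check (path copied from the log; not the test's actual assertion code):

	package main

	import (
		"fmt"
		"os"
	)

	func main() {
		path := "/Users/jenkins/minikube-integration/19479-15750/.minikube/cache/darwin/arm64/v1.20.0/kubectl"
		if _, err := os.Stat(path); err != nil {
			// Matches "no such file or directory" in the assertion above.
			fmt.Println("missing:", err)
			return
		}
		fmt.Println("cached kubectl present")
	}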

TestOffline (9.99s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-darwin-arm64 start -p offline-docker-711000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2 
aab_offline_test.go:55: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p offline-docker-711000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2 : exit status 80 (9.838425333s)

-- stdout --
	* [offline-docker-711000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19479
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19479-15750/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19479-15750/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "offline-docker-711000" primary control-plane node in "offline-docker-711000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "offline-docker-711000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0819 04:27:34.565162   17714 out.go:345] Setting OutFile to fd 1 ...
	I0819 04:27:34.565299   17714 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 04:27:34.565302   17714 out.go:358] Setting ErrFile to fd 2...
	I0819 04:27:34.565304   17714 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 04:27:34.565434   17714 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19479-15750/.minikube/bin
	I0819 04:27:34.566597   17714 out.go:352] Setting JSON to false
	I0819 04:27:34.584071   17714 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":8822,"bootTime":1724058032,"procs":462,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0819 04:27:34.584150   17714 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0819 04:27:34.588719   17714 out.go:177] * [offline-docker-711000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0819 04:27:34.596761   17714 out.go:177]   - MINIKUBE_LOCATION=19479
	I0819 04:27:34.596778   17714 notify.go:220] Checking for updates...
	I0819 04:27:34.602729   17714 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19479-15750/kubeconfig
	I0819 04:27:34.605701   17714 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0819 04:27:34.608715   17714 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0819 04:27:34.611723   17714 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19479-15750/.minikube
	I0819 04:27:34.614695   17714 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0819 04:27:34.618094   17714 config.go:182] Loaded profile config "multinode-746000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0819 04:27:34.618158   17714 driver.go:392] Setting default libvirt URI to qemu:///system
	I0819 04:27:34.621733   17714 out.go:177] * Using the qemu2 driver based on user configuration
	I0819 04:27:34.628793   17714 start.go:297] selected driver: qemu2
	I0819 04:27:34.628826   17714 start.go:901] validating driver "qemu2" against <nil>
	I0819 04:27:34.628840   17714 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0819 04:27:34.630767   17714 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0819 04:27:34.633729   17714 out.go:177] * Automatically selected the socket_vmnet network
	I0819 04:27:34.636752   17714 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0819 04:27:34.636787   17714 cni.go:84] Creating CNI manager for ""
	I0819 04:27:34.636795   17714 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0819 04:27:34.636799   17714 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0819 04:27:34.636834   17714 start.go:340] cluster config:
	{Name:offline-docker-711000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:offline-docker-711000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local C
ontainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bi
n/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 04:27:34.640455   17714 iso.go:125] acquiring lock: {Name:mk82d926f4bb778c7e93ba3bc4244459b219e238 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 04:27:34.647731   17714 out.go:177] * Starting "offline-docker-711000" primary control-plane node in "offline-docker-711000" cluster
	I0819 04:27:34.651726   17714 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0819 04:27:34.651756   17714 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19479-15750/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0819 04:27:34.651766   17714 cache.go:56] Caching tarball of preloaded images
	I0819 04:27:34.651841   17714 preload.go:172] Found /Users/jenkins/minikube-integration/19479-15750/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0819 04:27:34.651846   17714 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0819 04:27:34.651912   17714 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19479-15750/.minikube/profiles/offline-docker-711000/config.json ...
	I0819 04:27:34.651922   17714 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19479-15750/.minikube/profiles/offline-docker-711000/config.json: {Name:mka4c1f99f863bb750d928df62e75c269a19ab5e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 04:27:34.652151   17714 start.go:360] acquireMachinesLock for offline-docker-711000: {Name:mkd71805c0324a991576936a19be749d2702d472 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 04:27:34.652189   17714 start.go:364] duration metric: took 30.375µs to acquireMachinesLock for "offline-docker-711000"
	I0819 04:27:34.652205   17714 start.go:93] Provisioning new machine with config: &{Name:offline-docker-711000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConf
ig:{KubernetesVersion:v1.31.0 ClusterName:offline-docker-711000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 Mou
ntOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0819 04:27:34.652236   17714 start.go:125] createHost starting for "" (driver="qemu2")
	I0819 04:27:34.656761   17714 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0819 04:27:34.672815   17714 start.go:159] libmachine.API.Create for "offline-docker-711000" (driver="qemu2")
	I0819 04:27:34.672864   17714 client.go:168] LocalClient.Create starting
	I0819 04:27:34.672942   17714 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19479-15750/.minikube/certs/ca.pem
	I0819 04:27:34.672971   17714 main.go:141] libmachine: Decoding PEM data...
	I0819 04:27:34.672980   17714 main.go:141] libmachine: Parsing certificate...
	I0819 04:27:34.673027   17714 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19479-15750/.minikube/certs/cert.pem
	I0819 04:27:34.673049   17714 main.go:141] libmachine: Decoding PEM data...
	I0819 04:27:34.673055   17714 main.go:141] libmachine: Parsing certificate...
	I0819 04:27:34.673412   17714 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19479-15750/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19479-15750/.minikube/cache/iso/arm64/minikube-v1.33.1-1723740674-19452-arm64.iso...
	I0819 04:27:34.825579   17714 main.go:141] libmachine: Creating SSH key...
	I0819 04:27:34.967065   17714 main.go:141] libmachine: Creating Disk image...
	I0819 04:27:34.967073   17714 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0819 04:27:34.967279   17714 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/offline-docker-711000/disk.qcow2.raw /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/offline-docker-711000/disk.qcow2
	I0819 04:27:34.983935   17714 main.go:141] libmachine: STDOUT: 
	I0819 04:27:34.983957   17714 main.go:141] libmachine: STDERR: 
	I0819 04:27:34.984016   17714 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/offline-docker-711000/disk.qcow2 +20000M
	I0819 04:27:34.992582   17714 main.go:141] libmachine: STDOUT: Image resized.
	
	I0819 04:27:34.992600   17714 main.go:141] libmachine: STDERR: 
	I0819 04:27:34.992623   17714 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/offline-docker-711000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/offline-docker-711000/disk.qcow2
	I0819 04:27:34.992627   17714 main.go:141] libmachine: Starting QEMU VM...
	I0819 04:27:34.992644   17714 qemu.go:418] Using hvf for hardware acceleration
	I0819 04:27:34.992700   17714 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/offline-docker-711000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19479-15750/.minikube/machines/offline-docker-711000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/offline-docker-711000/qemu.pid -device virtio-net-pci,netdev=net0,mac=f6:54:61:4f:6e:3c -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/offline-docker-711000/disk.qcow2
	I0819 04:27:34.994565   17714 main.go:141] libmachine: STDOUT: 
	I0819 04:27:34.994583   17714 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0819 04:27:34.994602   17714 client.go:171] duration metric: took 321.738084ms to LocalClient.Create
	I0819 04:27:36.996385   17714 start.go:128] duration metric: took 2.344187584s to createHost
	I0819 04:27:36.996406   17714 start.go:83] releasing machines lock for "offline-docker-711000", held for 2.344262334s
	W0819 04:27:36.996416   17714 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0819 04:27:37.005959   17714 out.go:177] * Deleting "offline-docker-711000" in qemu2 ...
	W0819 04:27:37.019988   17714 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0819 04:27:37.019999   17714 start.go:729] Will try again in 5 seconds ...
	I0819 04:27:42.021975   17714 start.go:360] acquireMachinesLock for offline-docker-711000: {Name:mkd71805c0324a991576936a19be749d2702d472 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 04:27:42.022092   17714 start.go:364] duration metric: took 94.917µs to acquireMachinesLock for "offline-docker-711000"
	I0819 04:27:42.022121   17714 start.go:93] Provisioning new machine with config: &{Name:offline-docker-711000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConf
ig:{KubernetesVersion:v1.31.0 ClusterName:offline-docker-711000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 Mou
ntOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0819 04:27:42.022177   17714 start.go:125] createHost starting for "" (driver="qemu2")
	I0819 04:27:42.032722   17714 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0819 04:27:42.048102   17714 start.go:159] libmachine.API.Create for "offline-docker-711000" (driver="qemu2")
	I0819 04:27:42.048132   17714 client.go:168] LocalClient.Create starting
	I0819 04:27:42.048204   17714 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19479-15750/.minikube/certs/ca.pem
	I0819 04:27:42.048243   17714 main.go:141] libmachine: Decoding PEM data...
	I0819 04:27:42.048253   17714 main.go:141] libmachine: Parsing certificate...
	I0819 04:27:42.048290   17714 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19479-15750/.minikube/certs/cert.pem
	I0819 04:27:42.048316   17714 main.go:141] libmachine: Decoding PEM data...
	I0819 04:27:42.048323   17714 main.go:141] libmachine: Parsing certificate...
	I0819 04:27:42.048611   17714 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19479-15750/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19479-15750/.minikube/cache/iso/arm64/minikube-v1.33.1-1723740674-19452-arm64.iso...
	I0819 04:27:42.194010   17714 main.go:141] libmachine: Creating SSH key...
	I0819 04:27:42.309008   17714 main.go:141] libmachine: Creating Disk image...
	I0819 04:27:42.309019   17714 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0819 04:27:42.309277   17714 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/offline-docker-711000/disk.qcow2.raw /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/offline-docker-711000/disk.qcow2
	I0819 04:27:42.318951   17714 main.go:141] libmachine: STDOUT: 
	I0819 04:27:42.318974   17714 main.go:141] libmachine: STDERR: 
	I0819 04:27:42.319037   17714 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/offline-docker-711000/disk.qcow2 +20000M
	I0819 04:27:42.327750   17714 main.go:141] libmachine: STDOUT: Image resized.
	
	I0819 04:27:42.327769   17714 main.go:141] libmachine: STDERR: 
	I0819 04:27:42.327783   17714 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/offline-docker-711000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/offline-docker-711000/disk.qcow2
	I0819 04:27:42.327795   17714 main.go:141] libmachine: Starting QEMU VM...
	I0819 04:27:42.327804   17714 qemu.go:418] Using hvf for hardware acceleration
	I0819 04:27:42.327830   17714 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/offline-docker-711000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19479-15750/.minikube/machines/offline-docker-711000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/offline-docker-711000/qemu.pid -device virtio-net-pci,netdev=net0,mac=fe:8d:9d:ed:cd:34 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/offline-docker-711000/disk.qcow2
	I0819 04:27:42.329646   17714 main.go:141] libmachine: STDOUT: 
	I0819 04:27:42.329660   17714 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0819 04:27:42.329673   17714 client.go:171] duration metric: took 281.54075ms to LocalClient.Create
	I0819 04:27:44.331837   17714 start.go:128] duration metric: took 2.3096845s to createHost
	I0819 04:27:44.331972   17714 start.go:83] releasing machines lock for "offline-docker-711000", held for 2.30992075s
	W0819 04:27:44.332362   17714 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p offline-docker-711000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p offline-docker-711000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0819 04:27:44.343020   17714 out.go:201] 
	W0819 04:27:44.346028   17714 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0819 04:27:44.346075   17714 out.go:270] * 
	* 
	W0819 04:27:44.348820   17714 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0819 04:27:44.357992   17714 out.go:201] 

** /stderr **
aab_offline_test.go:58: out/minikube-darwin-arm64 start -p offline-docker-711000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2  failed: exit status 80
panic.go:626: *** TestOffline FAILED at 2024-08-19 04:27:44.373717 -0700 PDT m=+712.688504876
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p offline-docker-711000 -n offline-docker-711000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p offline-docker-711000 -n offline-docker-711000: exit status 7 (68.656833ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "offline-docker-711000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "offline-docker-711000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p offline-docker-711000
--- FAIL: TestOffline (9.99s)
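
TestOffline (and TestAddons/Setup below) fails before the VM ever boots: socket_vmnet_client cannot reach the socket_vmnet daemon's unix socket, so QEMU never gets its network file descriptor. A minimal Go sketch (assuming the /var/run/socket_vmnet path shown in the qemu command line above) that checks the same precondition from the host:

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		// The unix socket socket_vmnet_client is pointed at in the logs;
		// "connection refused" means no socket_vmnet daemon is listening.
		conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
		if err != nil {
			fmt.Println("socket_vmnet unreachable:", err)
			return
		}
		conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}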

TestAddons/Setup (10.78s)

=== RUN   TestAddons/Setup
addons_test.go:110: (dbg) Run:  out/minikube-darwin-arm64 start -p addons-939000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=qemu2  --addons=ingress --addons=ingress-dns
addons_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p addons-939000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=qemu2  --addons=ingress --addons=ingress-dns: exit status 80 (10.778408416s)

-- stdout --
	* [addons-939000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19479
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19479-15750/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19479-15750/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "addons-939000" primary control-plane node in "addons-939000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "addons-939000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0819 04:16:13.115477   16322 out.go:345] Setting OutFile to fd 1 ...
	I0819 04:16:13.115591   16322 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 04:16:13.115594   16322 out.go:358] Setting ErrFile to fd 2...
	I0819 04:16:13.115596   16322 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 04:16:13.115729   16322 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19479-15750/.minikube/bin
	I0819 04:16:13.116847   16322 out.go:352] Setting JSON to false
	I0819 04:16:13.132624   16322 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":8141,"bootTime":1724058032,"procs":467,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0819 04:16:13.132693   16322 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0819 04:16:13.137672   16322 out.go:177] * [addons-939000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0819 04:16:13.144628   16322 out.go:177]   - MINIKUBE_LOCATION=19479
	I0819 04:16:13.144674   16322 notify.go:220] Checking for updates...
	I0819 04:16:13.151489   16322 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19479-15750/kubeconfig
	I0819 04:16:13.154598   16322 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0819 04:16:13.157613   16322 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0819 04:16:13.160618   16322 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19479-15750/.minikube
	I0819 04:16:13.163597   16322 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0819 04:16:13.166760   16322 driver.go:392] Setting default libvirt URI to qemu:///system
	I0819 04:16:13.170578   16322 out.go:177] * Using the qemu2 driver based on user configuration
	I0819 04:16:13.177591   16322 start.go:297] selected driver: qemu2
	I0819 04:16:13.177600   16322 start.go:901] validating driver "qemu2" against <nil>
	I0819 04:16:13.177606   16322 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0819 04:16:13.179771   16322 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0819 04:16:13.182536   16322 out.go:177] * Automatically selected the socket_vmnet network
	I0819 04:16:13.185657   16322 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0819 04:16:13.185705   16322 cni.go:84] Creating CNI manager for ""
	I0819 04:16:13.185712   16322 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0819 04:16:13.185716   16322 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0819 04:16:13.185754   16322 start.go:340] cluster config:
	{Name:addons-939000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-939000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 04:16:13.189399   16322 iso.go:125] acquiring lock: {Name:mk82d926f4bb778c7e93ba3bc4244459b219e238 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 04:16:13.198593   16322 out.go:177] * Starting "addons-939000" primary control-plane node in "addons-939000" cluster
	I0819 04:16:13.202607   16322 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0819 04:16:13.202621   16322 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19479-15750/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0819 04:16:13.202630   16322 cache.go:56] Caching tarball of preloaded images
	I0819 04:16:13.202691   16322 preload.go:172] Found /Users/jenkins/minikube-integration/19479-15750/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0819 04:16:13.202697   16322 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0819 04:16:13.202925   16322 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19479-15750/.minikube/profiles/addons-939000/config.json ...
	I0819 04:16:13.202936   16322 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19479-15750/.minikube/profiles/addons-939000/config.json: {Name:mk7b567e2bb685cb1b3a1cfe024342fc3a201eeb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 04:16:13.203274   16322 start.go:360] acquireMachinesLock for addons-939000: {Name:mkd71805c0324a991576936a19be749d2702d472 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 04:16:13.203345   16322 start.go:364] duration metric: took 65.292µs to acquireMachinesLock for "addons-939000"
	I0819 04:16:13.203359   16322 start.go:93] Provisioning new machine with config: &{Name:addons-939000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-939000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0819 04:16:13.203391   16322 start.go:125] createHost starting for "" (driver="qemu2")
	I0819 04:16:13.211568   16322 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0819 04:16:13.229119   16322 start.go:159] libmachine.API.Create for "addons-939000" (driver="qemu2")
	I0819 04:16:13.229143   16322 client.go:168] LocalClient.Create starting
	I0819 04:16:13.229274   16322 main.go:141] libmachine: Creating CA: /Users/jenkins/minikube-integration/19479-15750/.minikube/certs/ca.pem
	I0819 04:16:13.382530   16322 main.go:141] libmachine: Creating client certificate: /Users/jenkins/minikube-integration/19479-15750/.minikube/certs/cert.pem
	I0819 04:16:13.580728   16322 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19479-15750/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19479-15750/.minikube/cache/iso/arm64/minikube-v1.33.1-1723740674-19452-arm64.iso...
	I0819 04:16:14.333920   16322 main.go:141] libmachine: Creating SSH key...
	I0819 04:16:14.394899   16322 main.go:141] libmachine: Creating Disk image...
	I0819 04:16:14.394907   16322 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0819 04:16:14.395929   16322 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/addons-939000/disk.qcow2.raw /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/addons-939000/disk.qcow2
	I0819 04:16:14.405600   16322 main.go:141] libmachine: STDOUT: 
	I0819 04:16:14.405620   16322 main.go:141] libmachine: STDERR: 
	I0819 04:16:14.405669   16322 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/addons-939000/disk.qcow2 +20000M
	I0819 04:16:14.413557   16322 main.go:141] libmachine: STDOUT: Image resized.
	
	I0819 04:16:14.413572   16322 main.go:141] libmachine: STDERR: 
	I0819 04:16:14.413584   16322 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/addons-939000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/addons-939000/disk.qcow2
	I0819 04:16:14.413588   16322 main.go:141] libmachine: Starting QEMU VM...
	I0819 04:16:14.413616   16322 qemu.go:418] Using hvf for hardware acceleration
	I0819 04:16:14.413652   16322 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/addons-939000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19479-15750/.minikube/machines/addons-939000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/addons-939000/qemu.pid -device virtio-net-pci,netdev=net0,mac=22:88:64:13:26:7a -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/addons-939000/disk.qcow2
	I0819 04:16:14.415253   16322 main.go:141] libmachine: STDOUT: 
	I0819 04:16:14.415268   16322 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0819 04:16:14.415300   16322 client.go:171] duration metric: took 1.186164416s to LocalClient.Create
	I0819 04:16:16.417484   16322 start.go:128] duration metric: took 3.214120792s to createHost
	I0819 04:16:16.417563   16322 start.go:83] releasing machines lock for "addons-939000", held for 3.21426525s
	W0819 04:16:16.417631   16322 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0819 04:16:16.433920   16322 out.go:177] * Deleting "addons-939000" in qemu2 ...
	W0819 04:16:16.466722   16322 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0819 04:16:16.466750   16322 start.go:729] Will try again in 5 seconds ...
	I0819 04:16:21.468933   16322 start.go:360] acquireMachinesLock for addons-939000: {Name:mkd71805c0324a991576936a19be749d2702d472 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 04:16:21.469458   16322 start.go:364] duration metric: took 396.375µs to acquireMachinesLock for "addons-939000"
	I0819 04:16:21.469596   16322 start.go:93] Provisioning new machine with config: &{Name:addons-939000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-939000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0819 04:16:21.469896   16322 start.go:125] createHost starting for "" (driver="qemu2")
	I0819 04:16:21.485685   16322 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0819 04:16:21.537257   16322 start.go:159] libmachine.API.Create for "addons-939000" (driver="qemu2")
	I0819 04:16:21.537299   16322 client.go:168] LocalClient.Create starting
	I0819 04:16:21.537416   16322 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19479-15750/.minikube/certs/ca.pem
	I0819 04:16:21.537473   16322 main.go:141] libmachine: Decoding PEM data...
	I0819 04:16:21.537504   16322 main.go:141] libmachine: Parsing certificate...
	I0819 04:16:21.537594   16322 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19479-15750/.minikube/certs/cert.pem
	I0819 04:16:21.537639   16322 main.go:141] libmachine: Decoding PEM data...
	I0819 04:16:21.537658   16322 main.go:141] libmachine: Parsing certificate...
	I0819 04:16:21.538177   16322 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19479-15750/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19479-15750/.minikube/cache/iso/arm64/minikube-v1.33.1-1723740674-19452-arm64.iso...
	I0819 04:16:21.706395   16322 main.go:141] libmachine: Creating SSH key...
	I0819 04:16:21.803481   16322 main.go:141] libmachine: Creating Disk image...
	I0819 04:16:21.803487   16322 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0819 04:16:21.803724   16322 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/addons-939000/disk.qcow2.raw /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/addons-939000/disk.qcow2
	I0819 04:16:21.812996   16322 main.go:141] libmachine: STDOUT: 
	I0819 04:16:21.813014   16322 main.go:141] libmachine: STDERR: 
	I0819 04:16:21.813063   16322 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/addons-939000/disk.qcow2 +20000M
	I0819 04:16:21.820927   16322 main.go:141] libmachine: STDOUT: Image resized.
	
	I0819 04:16:21.820950   16322 main.go:141] libmachine: STDERR: 
	I0819 04:16:21.820970   16322 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/addons-939000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/addons-939000/disk.qcow2
	I0819 04:16:21.820975   16322 main.go:141] libmachine: Starting QEMU VM...
	I0819 04:16:21.820986   16322 qemu.go:418] Using hvf for hardware acceleration
	I0819 04:16:21.821022   16322 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/addons-939000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19479-15750/.minikube/machines/addons-939000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/addons-939000/qemu.pid -device virtio-net-pci,netdev=net0,mac=7a:d4:f9:e9:4c:05 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/addons-939000/disk.qcow2
	I0819 04:16:21.822642   16322 main.go:141] libmachine: STDOUT: 
	I0819 04:16:21.822659   16322 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0819 04:16:21.822677   16322 client.go:171] duration metric: took 285.378292ms to LocalClient.Create
	I0819 04:16:23.824891   16322 start.go:128] duration metric: took 2.354982875s to createHost
	I0819 04:16:23.824952   16322 start.go:83] releasing machines lock for "addons-939000", held for 2.355510666s
	W0819 04:16:23.825326   16322 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p addons-939000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p addons-939000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0819 04:16:23.833780   16322 out.go:201] 
	W0819 04:16:23.839850   16322 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0819 04:16:23.839899   16322 out.go:270] * 
	* 
	W0819 04:16:23.842558   16322 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0819 04:16:23.850739   16322 out.go:201] 

** /stderr **
addons_test.go:112: out/minikube-darwin-arm64 start -p addons-939000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=qemu2  --addons=ingress --addons=ingress-dns failed: exit status 80
--- FAIL: TestAddons/Setup (10.78s)
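
The "executing:" lines in the stderr above show why that socket is fatal: QEMU is not launched directly but through socket_vmnet_client, which first connects to /var/run/socket_vmnet and then hands the resulting descriptor to qemu-system-aarch64 as inherited fd 3 (-netdev socket,id=net0,fd=3). A trimmed, illustrative reconstruction of that launch pattern follows; the flags shown are lifted from the log, and the wrapper code is an assumption, not minikube's libmachine implementation.

	// launch.go: hedged sketch of the socket_vmnet_client launch pattern.
	package main

	import (
		"log"
		"os"
		"os/exec"
	)

	func main() {
		cmd := exec.Command(
			"/opt/socket_vmnet/bin/socket_vmnet_client", // wrapper binary from the log
			"/var/run/socket_vmnet",                     // daemon socket it must reach first
			"qemu-system-aarch64",
			"-M", "virt",
			"-cpu", "host",
			"-accel", "hvf",
			"-device", "virtio-net-pci,netdev=net0",
			"-netdev", "socket,id=net0,fd=3", // fd handed over by the wrapper
			// ...disk, ISO, QMP, and pidfile flags as in the log...
		)
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		if err := cmd.Run(); err != nil {
			// With the daemon down, this is exactly where the tests fail.
			log.Fatalf("qemu launch failed: %v", err)
		}
	}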

TestCertOptions (10.1s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-options-427000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 
cert_options_test.go:49: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-options-427000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 : exit status 80 (9.834001625s)

-- stdout --
	* [cert-options-427000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19479
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19479-15750/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19479-15750/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "cert-options-427000" primary control-plane node in "cert-options-427000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "cert-options-427000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-options-427000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:51: failed to start minikube with args: "out/minikube-darwin-arm64 start -p cert-options-427000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 " : exit status 80
cert_options_test.go:60: (dbg) Run:  out/minikube-darwin-arm64 -p cert-options-427000 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:60: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p cert-options-427000 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt": exit status 83 (82.293042ms)

-- stdout --
	* The control-plane node cert-options-427000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p cert-options-427000"

-- /stdout --
cert_options_test.go:62: failed to read apiserver cert inside minikube. args "out/minikube-darwin-arm64 -p cert-options-427000 ssh \"openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt\"": exit status 83
cert_options_test.go:69: apiserver cert does not include 127.0.0.1 in SAN.
cert_options_test.go:69: apiserver cert does not include 192.168.15.15 in SAN.
cert_options_test.go:69: apiserver cert does not include localhost in SAN.
cert_options_test.go:69: apiserver cert does not include www.google.com in SAN.
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-427000 config view
cert_options_test.go:93: Kubeconfig apiserver server port incorrect. Output of 
'kubectl config view' = "\n-- stdout --\n\tapiVersion: v1\n\tclusters: null\n\tcontexts: null\n\tcurrent-context: \"\"\n\tkind: Config\n\tpreferences: {}\n\tusers: null\n\n-- /stdout --"
cert_options_test.go:100: (dbg) Run:  out/minikube-darwin-arm64 ssh -p cert-options-427000 -- "sudo cat /etc/kubernetes/admin.conf"
cert_options_test.go:100: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p cert-options-427000 -- "sudo cat /etc/kubernetes/admin.conf": exit status 83 (44.240667ms)

-- stdout --
	* The control-plane node cert-options-427000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p cert-options-427000"

-- /stdout --
cert_options_test.go:102: failed to SSH to minikube with args: "out/minikube-darwin-arm64 ssh -p cert-options-427000 -- \"sudo cat /etc/kubernetes/admin.conf\"" : exit status 83
cert_options_test.go:106: Internal minikube kubeconfig (admin.conf) does not contain the right api port. 
-- stdout --
	* The control-plane node cert-options-427000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p cert-options-427000"

-- /stdout --
cert_options_test.go:109: *** TestCertOptions FAILED at 2024-08-19 04:28:16.38077 -0700 PDT m=+744.696283543
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p cert-options-427000 -n cert-options-427000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p cert-options-427000 -n cert-options-427000: exit status 7 (30.859584ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "cert-options-427000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "cert-options-427000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cert-options-427000
--- FAIL: TestCertOptions (10.10s)
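
For reference, the SAN assertions at cert_options_test.go:69 amount to reading the apiserver certificate's subject alternative names, which the test normally does by running openssl over SSH. A hedged local equivalent is sketched below; the apiserver.crt path is hypothetical, and the expected values are the ones passed on the start command line above.

	// sancheck.go: hedged sketch of the SAN inspection the test skipped.
	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"log"
		"os"
	)

	func main() {
		data, err := os.ReadFile("apiserver.crt") // hypothetical local copy of the cert
		if err != nil {
			log.Fatal(err)
		}
		block, _ := pem.Decode(data)
		if block == nil {
			log.Fatal("no PEM block in file")
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			log.Fatal(err)
		}
		// The test expects localhost and www.google.com among the DNS SANs,
		// and 127.0.0.1 plus 192.168.15.15 among the IP SANs.
		fmt.Println("DNS SANs:", cert.DNSNames)
		fmt.Println("IP SANs: ", cert.IPAddresses)
	}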

TestCertExpiration (195.08s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-expiration-979000 --memory=2048 --cert-expiration=3m --driver=qemu2 
cert_options_test.go:123: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-expiration-979000 --memory=2048 --cert-expiration=3m --driver=qemu2 : exit status 80 (9.740801791s)

-- stdout --
	* [cert-expiration-979000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19479
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19479-15750/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19479-15750/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "cert-expiration-979000" primary control-plane node in "cert-expiration-979000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "cert-expiration-979000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-979000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:125: failed to start minikube with args: "out/minikube-darwin-arm64 start -p cert-expiration-979000 --memory=2048 --cert-expiration=3m --driver=qemu2 " : exit status 80
cert_options_test.go:131: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-expiration-979000 --memory=2048 --cert-expiration=8760h --driver=qemu2 
cert_options_test.go:131: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-expiration-979000 --memory=2048 --cert-expiration=8760h --driver=qemu2 : exit status 80 (5.189574875s)

-- stdout --
	* [cert-expiration-979000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19479
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19479-15750/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19479-15750/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "cert-expiration-979000" primary control-plane node in "cert-expiration-979000" cluster
	* Restarting existing qemu2 VM for "cert-expiration-979000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "cert-expiration-979000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-979000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:133: failed to start minikube after cert expiration: "out/minikube-darwin-arm64 start -p cert-expiration-979000 --memory=2048 --cert-expiration=8760h --driver=qemu2 " : exit status 80
cert_options_test.go:136: minikube start output did not warn about expired certs: 
-- stdout --
	* [cert-expiration-979000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19479
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19479-15750/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19479-15750/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "cert-expiration-979000" primary control-plane node in "cert-expiration-979000" cluster
	* Restarting existing qemu2 VM for "cert-expiration-979000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "cert-expiration-979000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-979000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:138: *** TestCertExpiration FAILED at 2024-08-19 04:31:16.367184 -0700 PDT m=+924.686783043
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p cert-expiration-979000 -n cert-expiration-979000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p cert-expiration-979000 -n cert-expiration-979000: exit status 7 (66.947333ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "cert-expiration-979000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "cert-expiration-979000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cert-expiration-979000
--- FAIL: TestCertExpiration (195.08s)
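
The two starts above bracket the certificate lifetime: the first uses --cert-expiration=3m so the cluster certificates lapse during the test's three-minute wait, and the second (--cert-expiration=8760h) is expected to warn about the now-expired certificates on restart. The underlying condition reduces to comparing the current time against the certificate's NotAfter field, sketched below under the same hypothetical apiserver.crt path as before.

	// expiry.go: hedged sketch of the expiry condition the test exercises.
	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"log"
		"os"
		"time"
	)

	func main() {
		data, err := os.ReadFile("apiserver.crt") // hypothetical local copy
		if err != nil {
			log.Fatal(err)
		}
		block, _ := pem.Decode(data)
		if block == nil {
			log.Fatal("no PEM block in file")
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			log.Fatal(err)
		}
		if time.Now().After(cert.NotAfter) {
			fmt.Println("expired at", cert.NotAfter, "(the case a restart should warn about)")
		} else {
			fmt.Println("valid until", cert.NotAfter)
		}
	}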

TestDockerFlags (10.11s)

=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags

=== CONT  TestDockerFlags
docker_test.go:51: (dbg) Run:  out/minikube-darwin-arm64 start -p docker-flags-007000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 
docker_test.go:51: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p docker-flags-007000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (9.882086625s)

-- stdout --
	* [docker-flags-007000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19479
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19479-15750/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19479-15750/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "docker-flags-007000" primary control-plane node in "docker-flags-007000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "docker-flags-007000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0819 04:27:56.300650   17903 out.go:345] Setting OutFile to fd 1 ...
	I0819 04:27:56.300788   17903 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 04:27:56.300794   17903 out.go:358] Setting ErrFile to fd 2...
	I0819 04:27:56.300797   17903 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 04:27:56.300943   17903 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19479-15750/.minikube/bin
	I0819 04:27:56.302001   17903 out.go:352] Setting JSON to false
	I0819 04:27:56.318198   17903 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":8844,"bootTime":1724058032,"procs":461,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0819 04:27:56.318285   17903 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0819 04:27:56.324314   17903 out.go:177] * [docker-flags-007000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0819 04:27:56.332182   17903 out.go:177]   - MINIKUBE_LOCATION=19479
	I0819 04:27:56.332202   17903 notify.go:220] Checking for updates...
	I0819 04:27:56.340164   17903 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19479-15750/kubeconfig
	I0819 04:27:56.343146   17903 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0819 04:27:56.347149   17903 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0819 04:27:56.350047   17903 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19479-15750/.minikube
	I0819 04:27:56.353082   17903 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0819 04:27:56.356528   17903 config.go:182] Loaded profile config "force-systemd-flag-788000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0819 04:27:56.356593   17903 config.go:182] Loaded profile config "multinode-746000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0819 04:27:56.356648   17903 driver.go:392] Setting default libvirt URI to qemu:///system
	I0819 04:27:56.361085   17903 out.go:177] * Using the qemu2 driver based on user configuration
	I0819 04:27:56.368070   17903 start.go:297] selected driver: qemu2
	I0819 04:27:56.368077   17903 start.go:901] validating driver "qemu2" against <nil>
	I0819 04:27:56.368083   17903 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0819 04:27:56.370480   17903 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0819 04:27:56.373101   17903 out.go:177] * Automatically selected the socket_vmnet network
	I0819 04:27:56.376190   17903 start_flags.go:942] Waiting for no components: map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false]
	I0819 04:27:56.376213   17903 cni.go:84] Creating CNI manager for ""
	I0819 04:27:56.376230   17903 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0819 04:27:56.376234   17903 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0819 04:27:56.376274   17903 start.go:340] cluster config:
	{Name:docker-flags-007000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:docker-flags-007000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 04:27:56.380023   17903 iso.go:125] acquiring lock: {Name:mk82d926f4bb778c7e93ba3bc4244459b219e238 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 04:27:56.388091   17903 out.go:177] * Starting "docker-flags-007000" primary control-plane node in "docker-flags-007000" cluster
	I0819 04:27:56.392154   17903 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0819 04:27:56.392171   17903 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19479-15750/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0819 04:27:56.392185   17903 cache.go:56] Caching tarball of preloaded images
	I0819 04:27:56.392273   17903 preload.go:172] Found /Users/jenkins/minikube-integration/19479-15750/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0819 04:27:56.392279   17903 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0819 04:27:56.392366   17903 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19479-15750/.minikube/profiles/docker-flags-007000/config.json ...
	I0819 04:27:56.392381   17903 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19479-15750/.minikube/profiles/docker-flags-007000/config.json: {Name:mk97445ba9a8b69260e8e14898fa42d283d82676 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 04:27:56.392626   17903 start.go:360] acquireMachinesLock for docker-flags-007000: {Name:mkd71805c0324a991576936a19be749d2702d472 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 04:27:56.392667   17903 start.go:364] duration metric: took 32.458µs to acquireMachinesLock for "docker-flags-007000"
	I0819 04:27:56.392682   17903 start.go:93] Provisioning new machine with config: &{Name:docker-flags-007000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:docker-flags-007000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0819 04:27:56.392721   17903 start.go:125] createHost starting for "" (driver="qemu2")
	I0819 04:27:56.400130   17903 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0819 04:27:56.418257   17903 start.go:159] libmachine.API.Create for "docker-flags-007000" (driver="qemu2")
	I0819 04:27:56.418283   17903 client.go:168] LocalClient.Create starting
	I0819 04:27:56.418351   17903 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19479-15750/.minikube/certs/ca.pem
	I0819 04:27:56.418381   17903 main.go:141] libmachine: Decoding PEM data...
	I0819 04:27:56.418395   17903 main.go:141] libmachine: Parsing certificate...
	I0819 04:27:56.418433   17903 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19479-15750/.minikube/certs/cert.pem
	I0819 04:27:56.418458   17903 main.go:141] libmachine: Decoding PEM data...
	I0819 04:27:56.418464   17903 main.go:141] libmachine: Parsing certificate...
	I0819 04:27:56.418868   17903 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19479-15750/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19479-15750/.minikube/cache/iso/arm64/minikube-v1.33.1-1723740674-19452-arm64.iso...
	I0819 04:27:56.589318   17903 main.go:141] libmachine: Creating SSH key...
	I0819 04:27:56.701173   17903 main.go:141] libmachine: Creating Disk image...
	I0819 04:27:56.701178   17903 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0819 04:27:56.701384   17903 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/docker-flags-007000/disk.qcow2.raw /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/docker-flags-007000/disk.qcow2
	I0819 04:27:56.710540   17903 main.go:141] libmachine: STDOUT: 
	I0819 04:27:56.710562   17903 main.go:141] libmachine: STDERR: 
	I0819 04:27:56.710638   17903 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/docker-flags-007000/disk.qcow2 +20000M
	I0819 04:27:56.718681   17903 main.go:141] libmachine: STDOUT: Image resized.
	
	I0819 04:27:56.718696   17903 main.go:141] libmachine: STDERR: 
	I0819 04:27:56.718707   17903 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/docker-flags-007000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/docker-flags-007000/disk.qcow2
	I0819 04:27:56.718713   17903 main.go:141] libmachine: Starting QEMU VM...
	I0819 04:27:56.718724   17903 qemu.go:418] Using hvf for hardware acceleration
	I0819 04:27:56.718748   17903 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/docker-flags-007000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19479-15750/.minikube/machines/docker-flags-007000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/docker-flags-007000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ee:b3:b7:55:ea:4a -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/docker-flags-007000/disk.qcow2
	I0819 04:27:56.720324   17903 main.go:141] libmachine: STDOUT: 
	I0819 04:27:56.720340   17903 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0819 04:27:56.720358   17903 client.go:171] duration metric: took 302.076125ms to LocalClient.Create
	I0819 04:27:58.722476   17903 start.go:128] duration metric: took 2.329790542s to createHost
	I0819 04:27:58.722572   17903 start.go:83] releasing machines lock for "docker-flags-007000", held for 2.329916166s
	W0819 04:27:58.722611   17903 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0819 04:27:58.743482   17903 out.go:177] * Deleting "docker-flags-007000" in qemu2 ...
	W0819 04:27:58.765165   17903 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0819 04:27:58.765182   17903 start.go:729] Will try again in 5 seconds ...
	I0819 04:28:03.767344   17903 start.go:360] acquireMachinesLock for docker-flags-007000: {Name:mkd71805c0324a991576936a19be749d2702d472 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 04:28:03.803378   17903 start.go:364] duration metric: took 35.922875ms to acquireMachinesLock for "docker-flags-007000"
	I0819 04:28:03.803514   17903 start.go:93] Provisioning new machine with config: &{Name:docker-flags-007000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:docker-flags-007000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0819 04:28:03.803719   17903 start.go:125] createHost starting for "" (driver="qemu2")
	I0819 04:28:03.813166   17903 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0819 04:28:03.862436   17903 start.go:159] libmachine.API.Create for "docker-flags-007000" (driver="qemu2")
	I0819 04:28:03.862495   17903 client.go:168] LocalClient.Create starting
	I0819 04:28:03.862616   17903 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19479-15750/.minikube/certs/ca.pem
	I0819 04:28:03.862678   17903 main.go:141] libmachine: Decoding PEM data...
	I0819 04:28:03.862697   17903 main.go:141] libmachine: Parsing certificate...
	I0819 04:28:03.862765   17903 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19479-15750/.minikube/certs/cert.pem
	I0819 04:28:03.862811   17903 main.go:141] libmachine: Decoding PEM data...
	I0819 04:28:03.862824   17903 main.go:141] libmachine: Parsing certificate...
	I0819 04:28:03.863424   17903 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19479-15750/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19479-15750/.minikube/cache/iso/arm64/minikube-v1.33.1-1723740674-19452-arm64.iso...
	I0819 04:28:04.021901   17903 main.go:141] libmachine: Creating SSH key...
	I0819 04:28:04.081059   17903 main.go:141] libmachine: Creating Disk image...
	I0819 04:28:04.081064   17903 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0819 04:28:04.081312   17903 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/docker-flags-007000/disk.qcow2.raw /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/docker-flags-007000/disk.qcow2
	I0819 04:28:04.090693   17903 main.go:141] libmachine: STDOUT: 
	I0819 04:28:04.090710   17903 main.go:141] libmachine: STDERR: 
	I0819 04:28:04.090750   17903 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/docker-flags-007000/disk.qcow2 +20000M
	I0819 04:28:04.098671   17903 main.go:141] libmachine: STDOUT: Image resized.
	
	I0819 04:28:04.098688   17903 main.go:141] libmachine: STDERR: 
	I0819 04:28:04.098697   17903 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/docker-flags-007000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/docker-flags-007000/disk.qcow2
	I0819 04:28:04.098702   17903 main.go:141] libmachine: Starting QEMU VM...
	I0819 04:28:04.098711   17903 qemu.go:418] Using hvf for hardware acceleration
	I0819 04:28:04.098742   17903 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/docker-flags-007000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19479-15750/.minikube/machines/docker-flags-007000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/docker-flags-007000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ae:b8:f9:fd:7c:ee -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/docker-flags-007000/disk.qcow2
	I0819 04:28:04.100444   17903 main.go:141] libmachine: STDOUT: 
	I0819 04:28:04.100458   17903 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0819 04:28:04.100478   17903 client.go:171] duration metric: took 237.981625ms to LocalClient.Create
	I0819 04:28:06.102652   17903 start.go:128] duration metric: took 2.2989085s to createHost
	I0819 04:28:06.102719   17903 start.go:83] releasing machines lock for "docker-flags-007000", held for 2.299352959s
	W0819 04:28:06.103131   17903 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p docker-flags-007000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p docker-flags-007000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0819 04:28:06.122773   17903 out.go:201] 
	W0819 04:28:06.127822   17903 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0819 04:28:06.127847   17903 out.go:270] * 
	* 
	W0819 04:28:06.130537   17903 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0819 04:28:06.140734   17903 out.go:201] 

** /stderr **
docker_test.go:53: failed to start minikube with args: "out/minikube-darwin-arm64 start -p docker-flags-007000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:56: (dbg) Run:  out/minikube-darwin-arm64 -p docker-flags-007000 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:56: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p docker-flags-007000 ssh "sudo systemctl show docker --property=Environment --no-pager": exit status 83 (79.0435ms)

-- stdout --
	* The control-plane node docker-flags-007000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p docker-flags-007000"

-- /stdout --
docker_test.go:58: failed to 'systemctl show docker' inside minikube. args "out/minikube-darwin-arm64 -p docker-flags-007000 ssh \"sudo systemctl show docker --property=Environment --no-pager\"": exit status 83
docker_test.go:63: expected env key/value "FOO=BAR" to be passed to minikube's docker and be included in: *"* The control-plane node docker-flags-007000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p docker-flags-007000\"\n"*.
docker_test.go:63: expected env key/value "BAZ=BAT" to be passed to minikube's docker and be included in: *"* The control-plane node docker-flags-007000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p docker-flags-007000\"\n"*.
docker_test.go:67: (dbg) Run:  out/minikube-darwin-arm64 -p docker-flags-007000 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
docker_test.go:67: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p docker-flags-007000 ssh "sudo systemctl show docker --property=ExecStart --no-pager": exit status 83 (43.688791ms)

-- stdout --
	* The control-plane node docker-flags-007000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p docker-flags-007000"

-- /stdout --
docker_test.go:69: failed on the second 'systemctl show docker' inside minikube. args "out/minikube-darwin-arm64 -p docker-flags-007000 ssh \"sudo systemctl show docker --property=ExecStart --no-pager\"": exit status 83
docker_test.go:73: expected "out/minikube-darwin-arm64 -p docker-flags-007000 ssh \"sudo systemctl show docker --property=ExecStart --no-pager\"" output to have include *--debug* . output: "* The control-plane node docker-flags-007000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p docker-flags-007000\"\n"
panic.go:626: *** TestDockerFlags FAILED at 2024-08-19 04:28:06.281265 -0700 PDT m=+734.596550084
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p docker-flags-007000 -n docker-flags-007000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p docker-flags-007000 -n docker-flags-007000: exit status 7 (30.177583ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "docker-flags-007000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "docker-flags-007000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p docker-flags-007000
--- FAIL: TestDockerFlags (10.11s)
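
Note on the failure mode: every qemu2 VM creation in this test dies at the same step. socket_vmnet_client reports Connection refused on /var/run/socket_vmnet, meaning nothing is listening on that unix socket on the CI host, so QEMU never receives its network file descriptor and the VM is never started. Below is a minimal Go sketch for probing that precondition before a run; it is illustrative only, not part of the test harness, and only the socket path is taken from the log.

// probe_socket_vmnet.go - illustrative sketch, not part of the minikube
// test harness. It checks whether anything is listening on the unix
// socket that the qemu2 driver needs; the path comes from the log above.
package main

import (
	"fmt"
	"net"
	"os"
	"time"
)

func main() {
	const sock = "/var/run/socket_vmnet"
	conn, err := net.DialTimeout("unix", sock, 2*time.Second)
	if err != nil {
		// A "connection refused" here is the same precondition failure
		// reported by socket_vmnet_client in the traces above.
		fmt.Fprintf(os.Stderr, "socket_vmnet not reachable at %s: %v\n", sock, err)
		os.Exit(1)
	}
	conn.Close()
	fmt.Printf("socket_vmnet is listening at %s\n", sock)
}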

TestForceSystemdFlag (10.29s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-darwin-arm64 start -p force-systemd-flag-788000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 
docker_test.go:91: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p force-systemd-flag-788000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (10.090411875s)

-- stdout --
	* [force-systemd-flag-788000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19479
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19479-15750/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19479-15750/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "force-systemd-flag-788000" primary control-plane node in "force-systemd-flag-788000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "force-systemd-flag-788000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0819 04:27:51.176053   17882 out.go:345] Setting OutFile to fd 1 ...
	I0819 04:27:51.176183   17882 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 04:27:51.176186   17882 out.go:358] Setting ErrFile to fd 2...
	I0819 04:27:51.176189   17882 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 04:27:51.176318   17882 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19479-15750/.minikube/bin
	I0819 04:27:51.177364   17882 out.go:352] Setting JSON to false
	I0819 04:27:51.193466   17882 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":8839,"bootTime":1724058032,"procs":462,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0819 04:27:51.193533   17882 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0819 04:27:51.199392   17882 out.go:177] * [force-systemd-flag-788000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0819 04:27:51.207290   17882 out.go:177]   - MINIKUBE_LOCATION=19479
	I0819 04:27:51.207344   17882 notify.go:220] Checking for updates...
	I0819 04:27:51.216213   17882 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19479-15750/kubeconfig
	I0819 04:27:51.220207   17882 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0819 04:27:51.224217   17882 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0819 04:27:51.227268   17882 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19479-15750/.minikube
	I0819 04:27:51.230222   17882 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0819 04:27:51.233598   17882 config.go:182] Loaded profile config "force-systemd-env-510000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0819 04:27:51.233669   17882 config.go:182] Loaded profile config "multinode-746000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0819 04:27:51.233735   17882 driver.go:392] Setting default libvirt URI to qemu:///system
	I0819 04:27:51.238343   17882 out.go:177] * Using the qemu2 driver based on user configuration
	I0819 04:27:51.245294   17882 start.go:297] selected driver: qemu2
	I0819 04:27:51.245306   17882 start.go:901] validating driver "qemu2" against <nil>
	I0819 04:27:51.245314   17882 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0819 04:27:51.247673   17882 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0819 04:27:51.251283   17882 out.go:177] * Automatically selected the socket_vmnet network
	I0819 04:27:51.254296   17882 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0819 04:27:51.254332   17882 cni.go:84] Creating CNI manager for ""
	I0819 04:27:51.254341   17882 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0819 04:27:51.254346   17882 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0819 04:27:51.254382   17882 start.go:340] cluster config:
	{Name:force-systemd-flag-788000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:force-systemd-flag-788000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 04:27:51.258256   17882 iso.go:125] acquiring lock: {Name:mk82d926f4bb778c7e93ba3bc4244459b219e238 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 04:27:51.266271   17882 out.go:177] * Starting "force-systemd-flag-788000" primary control-plane node in "force-systemd-flag-788000" cluster
	I0819 04:27:51.270242   17882 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0819 04:27:51.270259   17882 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19479-15750/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0819 04:27:51.270271   17882 cache.go:56] Caching tarball of preloaded images
	I0819 04:27:51.270333   17882 preload.go:172] Found /Users/jenkins/minikube-integration/19479-15750/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0819 04:27:51.270338   17882 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0819 04:27:51.270403   17882 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19479-15750/.minikube/profiles/force-systemd-flag-788000/config.json ...
	I0819 04:27:51.270426   17882 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19479-15750/.minikube/profiles/force-systemd-flag-788000/config.json: {Name:mkcc9e9bf9a3e8ba0774aa18a1df92c9e27ba899 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 04:27:51.270777   17882 start.go:360] acquireMachinesLock for force-systemd-flag-788000: {Name:mkd71805c0324a991576936a19be749d2702d472 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 04:27:51.270816   17882 start.go:364] duration metric: took 29.917µs to acquireMachinesLock for "force-systemd-flag-788000"
	I0819 04:27:51.270830   17882 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-788000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:force-systemd-flag-788000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0819 04:27:51.270864   17882 start.go:125] createHost starting for "" (driver="qemu2")
	I0819 04:27:51.274222   17882 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0819 04:27:51.292512   17882 start.go:159] libmachine.API.Create for "force-systemd-flag-788000" (driver="qemu2")
	I0819 04:27:51.292538   17882 client.go:168] LocalClient.Create starting
	I0819 04:27:51.292600   17882 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19479-15750/.minikube/certs/ca.pem
	I0819 04:27:51.292635   17882 main.go:141] libmachine: Decoding PEM data...
	I0819 04:27:51.292644   17882 main.go:141] libmachine: Parsing certificate...
	I0819 04:27:51.292686   17882 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19479-15750/.minikube/certs/cert.pem
	I0819 04:27:51.292713   17882 main.go:141] libmachine: Decoding PEM data...
	I0819 04:27:51.292723   17882 main.go:141] libmachine: Parsing certificate...
	I0819 04:27:51.293113   17882 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19479-15750/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19479-15750/.minikube/cache/iso/arm64/minikube-v1.33.1-1723740674-19452-arm64.iso...
	I0819 04:27:51.442300   17882 main.go:141] libmachine: Creating SSH key...
	I0819 04:27:51.606486   17882 main.go:141] libmachine: Creating Disk image...
	I0819 04:27:51.606492   17882 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0819 04:27:51.606735   17882 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/force-systemd-flag-788000/disk.qcow2.raw /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/force-systemd-flag-788000/disk.qcow2
	I0819 04:27:51.616341   17882 main.go:141] libmachine: STDOUT: 
	I0819 04:27:51.616362   17882 main.go:141] libmachine: STDERR: 
	I0819 04:27:51.616418   17882 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/force-systemd-flag-788000/disk.qcow2 +20000M
	I0819 04:27:51.624396   17882 main.go:141] libmachine: STDOUT: Image resized.
	
	I0819 04:27:51.624414   17882 main.go:141] libmachine: STDERR: 
	I0819 04:27:51.624431   17882 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/force-systemd-flag-788000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/force-systemd-flag-788000/disk.qcow2
	I0819 04:27:51.624436   17882 main.go:141] libmachine: Starting QEMU VM...
	I0819 04:27:51.624450   17882 qemu.go:418] Using hvf for hardware acceleration
	I0819 04:27:51.624484   17882 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/force-systemd-flag-788000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19479-15750/.minikube/machines/force-systemd-flag-788000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/force-systemd-flag-788000/qemu.pid -device virtio-net-pci,netdev=net0,mac=7e:33:30:71:75:4b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/force-systemd-flag-788000/disk.qcow2
	I0819 04:27:51.626139   17882 main.go:141] libmachine: STDOUT: 
	I0819 04:27:51.626154   17882 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0819 04:27:51.626173   17882 client.go:171] duration metric: took 333.637042ms to LocalClient.Create
	I0819 04:27:53.628309   17882 start.go:128] duration metric: took 2.357480958s to createHost
	I0819 04:27:53.628385   17882 start.go:83] releasing machines lock for "force-systemd-flag-788000", held for 2.357611s
	W0819 04:27:53.628470   17882 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0819 04:27:53.648466   17882 out.go:177] * Deleting "force-systemd-flag-788000" in qemu2 ...
	W0819 04:27:53.670044   17882 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0819 04:27:53.670065   17882 start.go:729] Will try again in 5 seconds ...
	I0819 04:27:58.672225   17882 start.go:360] acquireMachinesLock for force-systemd-flag-788000: {Name:mkd71805c0324a991576936a19be749d2702d472 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 04:27:58.722666   17882 start.go:364] duration metric: took 50.340625ms to acquireMachinesLock for "force-systemd-flag-788000"
	I0819 04:27:58.722851   17882 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-788000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:force-systemd-flag-788000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0819 04:27:58.723070   17882 start.go:125] createHost starting for "" (driver="qemu2")
	I0819 04:27:58.734088   17882 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0819 04:27:58.782148   17882 start.go:159] libmachine.API.Create for "force-systemd-flag-788000" (driver="qemu2")
	I0819 04:27:58.782199   17882 client.go:168] LocalClient.Create starting
	I0819 04:27:58.782325   17882 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19479-15750/.minikube/certs/ca.pem
	I0819 04:27:58.782388   17882 main.go:141] libmachine: Decoding PEM data...
	I0819 04:27:58.782404   17882 main.go:141] libmachine: Parsing certificate...
	I0819 04:27:58.782461   17882 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19479-15750/.minikube/certs/cert.pem
	I0819 04:27:58.782503   17882 main.go:141] libmachine: Decoding PEM data...
	I0819 04:27:58.782517   17882 main.go:141] libmachine: Parsing certificate...
	I0819 04:27:58.783067   17882 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19479-15750/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19479-15750/.minikube/cache/iso/arm64/minikube-v1.33.1-1723740674-19452-arm64.iso...
	I0819 04:27:58.992085   17882 main.go:141] libmachine: Creating SSH key...
	I0819 04:27:59.154969   17882 main.go:141] libmachine: Creating Disk image...
	I0819 04:27:59.154975   17882 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0819 04:27:59.155228   17882 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/force-systemd-flag-788000/disk.qcow2.raw /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/force-systemd-flag-788000/disk.qcow2
	I0819 04:27:59.164663   17882 main.go:141] libmachine: STDOUT: 
	I0819 04:27:59.164682   17882 main.go:141] libmachine: STDERR: 
	I0819 04:27:59.164727   17882 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/force-systemd-flag-788000/disk.qcow2 +20000M
	I0819 04:27:59.172600   17882 main.go:141] libmachine: STDOUT: Image resized.
	
	I0819 04:27:59.172615   17882 main.go:141] libmachine: STDERR: 
	I0819 04:27:59.172629   17882 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/force-systemd-flag-788000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/force-systemd-flag-788000/disk.qcow2
	I0819 04:27:59.172638   17882 main.go:141] libmachine: Starting QEMU VM...
	I0819 04:27:59.172648   17882 qemu.go:418] Using hvf for hardware acceleration
	I0819 04:27:59.172677   17882 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/force-systemd-flag-788000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19479-15750/.minikube/machines/force-systemd-flag-788000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/force-systemd-flag-788000/qemu.pid -device virtio-net-pci,netdev=net0,mac=f6:f9:a5:d9:0e:de -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/force-systemd-flag-788000/disk.qcow2
	I0819 04:27:59.174408   17882 main.go:141] libmachine: STDOUT: 
	I0819 04:27:59.174423   17882 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0819 04:27:59.174440   17882 client.go:171] duration metric: took 392.24275ms to LocalClient.Create
	I0819 04:28:01.176565   17882 start.go:128] duration metric: took 2.453510083s to createHost
	I0819 04:28:01.176867   17882 start.go:83] releasing machines lock for "force-systemd-flag-788000", held for 2.454233333s
	W0819 04:28:01.177150   17882 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p force-systemd-flag-788000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p force-systemd-flag-788000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0819 04:28:01.203108   17882 out.go:201] 
	W0819 04:28:01.211168   17882 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0819 04:28:01.211219   17882 out.go:270] * 
	* 
	W0819 04:28:01.213895   17882 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0819 04:28:01.223019   17882 out.go:201] 

** /stderr **
docker_test.go:93: failed to start minikube with args: "out/minikube-darwin-arm64 start -p force-systemd-flag-788000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:110: (dbg) Run:  out/minikube-darwin-arm64 -p force-systemd-flag-788000 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p force-systemd-flag-788000 ssh "docker info --format {{.CgroupDriver}}": exit status 83 (83.299625ms)

-- stdout --
	* The control-plane node force-systemd-flag-788000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p force-systemd-flag-788000"

-- /stdout --
docker_test.go:112: failed to get docker cgroup driver. args "out/minikube-darwin-arm64 -p force-systemd-flag-788000 ssh \"docker info --format {{.CgroupDriver}}\"": exit status 83
docker_test.go:106: *** TestForceSystemdFlag FAILED at 2024-08-19 04:28:01.32451 -0700 PDT m=+729.639682084
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-flag-788000 -n force-systemd-flag-788000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-flag-788000 -n force-systemd-flag-788000: exit status 7 (35.993375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "force-systemd-flag-788000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "force-systemd-flag-788000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p force-systemd-flag-788000
--- FAIL: TestForceSystemdFlag (10.29s)
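
The same create/retry pattern repeats in each failing test: createHost fails on the socket, the profile is deleted, minikube waits 5 seconds ("Will try again in 5 seconds ..."), retries once, and then exits with GUEST_PROVISION. The sketch below is a compressed reading of that flow as it appears in the trace; it illustrates the logged behavior only and is not minikube's actual start.go implementation.

// retry_sketch.go: a reading of the start/retry flow shown in the traces
// above (create fails, profile deleted, one retry after 5s, then
// GUEST_PROVISION). Illustrative only; not minikube's real code.
package main

import (
	"errors"
	"fmt"
	"time"
)

func startHost(create func() error, deleteProfile func()) error {
	err := create()
	if err == nil {
		return nil
	}
	fmt.Println("! StartHost failed, but will try again:", err)
	deleteProfile()             // "* Deleting ... in qemu2 ..."
	time.Sleep(5 * time.Second) // "Will try again in 5 seconds ..."
	if err := create(); err != nil {
		return fmt.Errorf("GUEST_PROVISION: error provisioning guest: %w", err)
	}
	return nil
}

func main() {
	create := func() error {
		return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
	}
	fmt.Println(startHost(create, func() {}))
}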

TestForceSystemdEnv (11.75s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-darwin-arm64 start -p force-systemd-env-510000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 
docker_test.go:155: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p force-systemd-env-510000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (11.561615625s)

-- stdout --
	* [force-systemd-env-510000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19479
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19479-15750/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19479-15750/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=true
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "force-systemd-env-510000" primary control-plane node in "force-systemd-env-510000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "force-systemd-env-510000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0819 04:27:44.553310   17850 out.go:345] Setting OutFile to fd 1 ...
	I0819 04:27:44.553667   17850 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 04:27:44.553674   17850 out.go:358] Setting ErrFile to fd 2...
	I0819 04:27:44.553676   17850 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 04:27:44.553856   17850 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19479-15750/.minikube/bin
	I0819 04:27:44.555055   17850 out.go:352] Setting JSON to false
	I0819 04:27:44.571414   17850 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":8832,"bootTime":1724058032,"procs":463,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0819 04:27:44.571484   17850 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0819 04:27:44.578654   17850 out.go:177] * [force-systemd-env-510000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0819 04:27:44.587629   17850 out.go:177]   - MINIKUBE_LOCATION=19479
	I0819 04:27:44.587679   17850 notify.go:220] Checking for updates...
	I0819 04:27:44.594600   17850 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19479-15750/kubeconfig
	I0819 04:27:44.597554   17850 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0819 04:27:44.600593   17850 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0819 04:27:44.603627   17850 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19479-15750/.minikube
	I0819 04:27:44.606582   17850 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=true
	I0819 04:27:44.610031   17850 config.go:182] Loaded profile config "multinode-746000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0819 04:27:44.610077   17850 driver.go:392] Setting default libvirt URI to qemu:///system
	I0819 04:27:44.614580   17850 out.go:177] * Using the qemu2 driver based on user configuration
	I0819 04:27:44.621593   17850 start.go:297] selected driver: qemu2
	I0819 04:27:44.621602   17850 start.go:901] validating driver "qemu2" against <nil>
	I0819 04:27:44.621609   17850 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0819 04:27:44.624042   17850 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0819 04:27:44.627602   17850 out.go:177] * Automatically selected the socket_vmnet network
	I0819 04:27:44.629198   17850 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0819 04:27:44.629241   17850 cni.go:84] Creating CNI manager for ""
	I0819 04:27:44.629249   17850 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0819 04:27:44.629253   17850 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0819 04:27:44.629290   17850 start.go:340] cluster config:
	{Name:force-systemd-env-510000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:force-systemd-env-510000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 04:27:44.632977   17850 iso.go:125] acquiring lock: {Name:mk82d926f4bb778c7e93ba3bc4244459b219e238 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 04:27:44.640617   17850 out.go:177] * Starting "force-systemd-env-510000" primary control-plane node in "force-systemd-env-510000" cluster
	I0819 04:27:44.644575   17850 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0819 04:27:44.644599   17850 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19479-15750/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0819 04:27:44.644607   17850 cache.go:56] Caching tarball of preloaded images
	I0819 04:27:44.644661   17850 preload.go:172] Found /Users/jenkins/minikube-integration/19479-15750/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0819 04:27:44.644667   17850 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0819 04:27:44.644727   17850 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19479-15750/.minikube/profiles/force-systemd-env-510000/config.json ...
	I0819 04:27:44.644740   17850 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19479-15750/.minikube/profiles/force-systemd-env-510000/config.json: {Name:mkf99417b30fed220916fa104ca8bc90d9690834 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 04:27:44.644974   17850 start.go:360] acquireMachinesLock for force-systemd-env-510000: {Name:mkd71805c0324a991576936a19be749d2702d472 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 04:27:44.645011   17850 start.go:364] duration metric: took 29.583µs to acquireMachinesLock for "force-systemd-env-510000"
	I0819 04:27:44.645025   17850 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-510000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:force-systemd-env-510000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0819 04:27:44.645064   17850 start.go:125] createHost starting for "" (driver="qemu2")
	I0819 04:27:44.653571   17850 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0819 04:27:44.672154   17850 start.go:159] libmachine.API.Create for "force-systemd-env-510000" (driver="qemu2")
	I0819 04:27:44.672191   17850 client.go:168] LocalClient.Create starting
	I0819 04:27:44.672254   17850 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19479-15750/.minikube/certs/ca.pem
	I0819 04:27:44.672290   17850 main.go:141] libmachine: Decoding PEM data...
	I0819 04:27:44.672301   17850 main.go:141] libmachine: Parsing certificate...
	I0819 04:27:44.672345   17850 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19479-15750/.minikube/certs/cert.pem
	I0819 04:27:44.672369   17850 main.go:141] libmachine: Decoding PEM data...
	I0819 04:27:44.672376   17850 main.go:141] libmachine: Parsing certificate...
	I0819 04:27:44.672767   17850 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19479-15750/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19479-15750/.minikube/cache/iso/arm64/minikube-v1.33.1-1723740674-19452-arm64.iso...
	I0819 04:27:44.822951   17850 main.go:141] libmachine: Creating SSH key...
	I0819 04:27:44.897227   17850 main.go:141] libmachine: Creating Disk image...
	I0819 04:27:44.897235   17850 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0819 04:27:44.897456   17850 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/force-systemd-env-510000/disk.qcow2.raw /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/force-systemd-env-510000/disk.qcow2
	I0819 04:27:44.906418   17850 main.go:141] libmachine: STDOUT: 
	I0819 04:27:44.906439   17850 main.go:141] libmachine: STDERR: 
	I0819 04:27:44.906482   17850 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/force-systemd-env-510000/disk.qcow2 +20000M
	I0819 04:27:44.914402   17850 main.go:141] libmachine: STDOUT: Image resized.
	
	I0819 04:27:44.914418   17850 main.go:141] libmachine: STDERR: 
	I0819 04:27:44.914434   17850 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/force-systemd-env-510000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/force-systemd-env-510000/disk.qcow2
	I0819 04:27:44.914441   17850 main.go:141] libmachine: Starting QEMU VM...
	I0819 04:27:44.914454   17850 qemu.go:418] Using hvf for hardware acceleration
	I0819 04:27:44.914478   17850 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/force-systemd-env-510000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19479-15750/.minikube/machines/force-systemd-env-510000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/force-systemd-env-510000/qemu.pid -device virtio-net-pci,netdev=net0,mac=62:a9:70:69:48:11 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/force-systemd-env-510000/disk.qcow2
	I0819 04:27:44.916070   17850 main.go:141] libmachine: STDOUT: 
	I0819 04:27:44.916088   17850 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0819 04:27:44.916114   17850 client.go:171] duration metric: took 243.915583ms to LocalClient.Create
	I0819 04:27:46.918172   17850 start.go:128] duration metric: took 2.273149875s to createHost
	I0819 04:27:46.918197   17850 start.go:83] releasing machines lock for "force-systemd-env-510000", held for 2.27323325s
	W0819 04:27:46.918211   17850 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0819 04:27:46.925182   17850 out.go:177] * Deleting "force-systemd-env-510000" in qemu2 ...
	W0819 04:27:46.934733   17850 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0819 04:27:46.934740   17850 start.go:729] Will try again in 5 seconds ...
	I0819 04:27:51.936803   17850 start.go:360] acquireMachinesLock for force-systemd-env-510000: {Name:mkd71805c0324a991576936a19be749d2702d472 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 04:27:53.628533   17850 start.go:364] duration metric: took 1.6916435s to acquireMachinesLock for "force-systemd-env-510000"
	I0819 04:27:53.628699   17850 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-510000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22
KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:force-systemd-env-510000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror:
DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0819 04:27:53.628992   17850 start.go:125] createHost starting for "" (driver="qemu2")
	I0819 04:27:53.640468   17850 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0819 04:27:53.689676   17850 start.go:159] libmachine.API.Create for "force-systemd-env-510000" (driver="qemu2")
	I0819 04:27:53.689732   17850 client.go:168] LocalClient.Create starting
	I0819 04:27:53.689855   17850 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19479-15750/.minikube/certs/ca.pem
	I0819 04:27:53.689918   17850 main.go:141] libmachine: Decoding PEM data...
	I0819 04:27:53.689935   17850 main.go:141] libmachine: Parsing certificate...
	I0819 04:27:53.689998   17850 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19479-15750/.minikube/certs/cert.pem
	I0819 04:27:53.690042   17850 main.go:141] libmachine: Decoding PEM data...
	I0819 04:27:53.690054   17850 main.go:141] libmachine: Parsing certificate...
	I0819 04:27:53.690631   17850 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19479-15750/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19479-15750/.minikube/cache/iso/arm64/minikube-v1.33.1-1723740674-19452-arm64.iso...
	I0819 04:27:53.863927   17850 main.go:141] libmachine: Creating SSH key...
	I0819 04:27:54.013089   17850 main.go:141] libmachine: Creating Disk image...
	I0819 04:27:54.013095   17850 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0819 04:27:54.013288   17850 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/force-systemd-env-510000/disk.qcow2.raw /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/force-systemd-env-510000/disk.qcow2
	I0819 04:27:54.022717   17850 main.go:141] libmachine: STDOUT: 
	I0819 04:27:54.022736   17850 main.go:141] libmachine: STDERR: 
	I0819 04:27:54.022787   17850 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/force-systemd-env-510000/disk.qcow2 +20000M
	I0819 04:27:54.030634   17850 main.go:141] libmachine: STDOUT: Image resized.
	
	I0819 04:27:54.030652   17850 main.go:141] libmachine: STDERR: 
	I0819 04:27:54.030670   17850 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/force-systemd-env-510000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/force-systemd-env-510000/disk.qcow2
	I0819 04:27:54.030675   17850 main.go:141] libmachine: Starting QEMU VM...
	I0819 04:27:54.030685   17850 qemu.go:418] Using hvf for hardware acceleration
	I0819 04:27:54.030727   17850 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/force-systemd-env-510000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19479-15750/.minikube/machines/force-systemd-env-510000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/force-systemd-env-510000/qemu.pid -device virtio-net-pci,netdev=net0,mac=de:63:37:16:93:50 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/force-systemd-env-510000/disk.qcow2
	I0819 04:27:54.032356   17850 main.go:141] libmachine: STDOUT: 
	I0819 04:27:54.032370   17850 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0819 04:27:54.032383   17850 client.go:171] duration metric: took 342.652458ms to LocalClient.Create
	I0819 04:27:56.034522   17850 start.go:128] duration metric: took 2.405556792s to createHost
	I0819 04:27:56.034582   17850 start.go:83] releasing machines lock for "force-systemd-env-510000", held for 2.406012209s
	W0819 04:27:56.034933   17850 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p force-systemd-env-510000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p force-systemd-env-510000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0819 04:27:56.051594   17850 out.go:201] 
	W0819 04:27:56.058606   17850 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0819 04:27:56.058647   17850 out.go:270] * 
	* 
	W0819 04:27:56.061166   17850 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0819 04:27:56.070426   17850 out.go:201] 

** /stderr **
docker_test.go:157: failed to start minikube with args: "out/minikube-darwin-arm64 start -p force-systemd-env-510000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:110: (dbg) Run:  out/minikube-darwin-arm64 -p force-systemd-env-510000 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p force-systemd-env-510000 ssh "docker info --format {{.CgroupDriver}}": exit status 83 (76.327292ms)

-- stdout --
	* The control-plane node force-systemd-env-510000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p force-systemd-env-510000"

-- /stdout --
docker_test.go:112: failed to get docker cgroup driver. args "out/minikube-darwin-arm64 -p force-systemd-env-510000 ssh \"docker info --format {{.CgroupDriver}}\"": exit status 83
docker_test.go:166: *** TestForceSystemdEnv FAILED at 2024-08-19 04:27:56.164326 -0700 PDT m=+724.479381251
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-env-510000 -n force-systemd-env-510000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-env-510000 -n force-systemd-env-510000: exit status 7 (35.329916ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "force-systemd-env-510000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "force-systemd-env-510000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p force-systemd-env-510000
--- FAIL: TestForceSystemdEnv (11.75s)
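Every qemu2 failure in this run bottoms out at the same step: libmachine launches QEMU through /opt/socket_vmnet/bin/socket_vmnet_client, the client cannot reach the daemon socket at /var/run/socket_vmnet ("Connection refused"), so no VM ever boots and the delete-and-retry loop fails the same way. A minimal way to confirm the daemon state on the CI host, sketched from the paths shown in the log (the `true` payload is an illustrative placeholder, not taken from this report):

    # Is the socket_vmnet daemon running, and does its socket exist?
    pgrep -fl socket_vmnet
    ls -l /var/run/socket_vmnet
    # Reproduce the client-side failure outside minikube; socket_vmnet_client
    # takes the socket path followed by the command it should wrap:
    /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet true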

TestErrorSpam/setup (9.84s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -p nospam-373000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-373000 --driver=qemu2 
error_spam_test.go:81: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p nospam-373000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-373000 --driver=qemu2 : exit status 80 (9.840366s)

-- stdout --
	* [nospam-373000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19479
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19479-15750/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19479-15750/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "nospam-373000" primary control-plane node in "nospam-373000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2250MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "nospam-373000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2250MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p nospam-373000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
error_spam_test.go:83: "out/minikube-darwin-arm64 start -p nospam-373000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-373000 --driver=qemu2 " failed: exit status 80
error_spam_test.go:96: unexpected stderr: "! StartHost failed, but will try again: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"
error_spam_test.go:96: unexpected stderr: "* Failed to start qemu2 VM. Running \"minikube delete -p nospam-373000\" may fix it: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"
error_spam_test.go:96: unexpected stderr: "X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"
error_spam_test.go:96: unexpected stderr: "* "
error_spam_test.go:96: unexpected stderr: "╭─────────────────────────────────────────────────────────────────────────────────────────────╮"
error_spam_test.go:96: unexpected stderr: "│                                                                                             │"
error_spam_test.go:96: unexpected stderr: "│    * If the above advice does not help, please let us know:                                 │"
error_spam_test.go:96: unexpected stderr: "│      https://github.com/kubernetes/minikube/issues/new/choose                               │"
error_spam_test.go:96: unexpected stderr: "│                                                                                             │"
error_spam_test.go:96: unexpected stderr: "│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │"
error_spam_test.go:96: unexpected stderr: "│                                                                                             │"
error_spam_test.go:96: unexpected stderr: "╰─────────────────────────────────────────────────────────────────────────────────────────────╯"
error_spam_test.go:110: minikube stdout:
* [nospam-373000] minikube v1.33.1 on Darwin 14.5 (arm64)
- MINIKUBE_LOCATION=19479
- KUBECONFIG=/Users/jenkins/minikube-integration/19479-15750/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_HOME=/Users/jenkins/minikube-integration/19479-15750/.minikube
- MINIKUBE_FORCE_SYSTEMD=
* Using the qemu2 driver based on user configuration
* Automatically selected the socket_vmnet network
* Starting "nospam-373000" primary control-plane node in "nospam-373000" cluster
* Creating qemu2 VM (CPUs=2, Memory=2250MB, Disk=20000MB) ...
OUTPUT: 
ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused

* Deleting "nospam-373000" in qemu2 ...
* Creating qemu2 VM (CPUs=2, Memory=2250MB, Disk=20000MB) ...
OUTPUT: 
ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused

error_spam_test.go:111: minikube stderr:
! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
* Failed to start qemu2 VM. Running "minikube delete -p nospam-373000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
* 
╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                             │
│    * If the above advice does not help, please let us know:                                 │
│      https://github.com/kubernetes/minikube/issues/new/choose                               │
│                                                                                             │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
│                                                                                             │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
error_spam_test.go:121: missing kubeadm init sub-step "Generating certificates and keys ..."
error_spam_test.go:121: missing kubeadm init sub-step "Booting up control plane ..."
error_spam_test.go:121: missing kubeadm init sub-step "Configuring RBAC rules ..."
--- FAIL: TestErrorSpam/setup (9.84s)
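The exit statuses repeat a consistent pattern across this report: 80 accompanies the GUEST_PROVISION abort once both start attempts fail, 83 marks the early-out that prints "host is not running: state=Stopped", and 7 is what `status --format={{.Host}}` returns for a stopped host (which helpers_test.go explicitly notes "may be ok"). The status code can be checked by hand with the same binary and profile the harness uses:

    # In this run this prints "Stopped" and exits 7 while the profile
    # exists but the VM is down:
    out/minikube-darwin-arm64 status --format={{.Host}} -p functional-916000 -n functional-916000
    echo $?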

TestFunctional/serial/StartWithProxy (10s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2234: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-916000 --memory=4000 --apiserver-port=8441 --wait=all --driver=qemu2 
functional_test.go:2234: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-916000 --memory=4000 --apiserver-port=8441 --wait=all --driver=qemu2 : exit status 80 (9.924604959s)

-- stdout --
	* [functional-916000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19479
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19479-15750/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19479-15750/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "functional-916000" primary control-plane node in "functional-916000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "functional-916000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! Local proxy ignored: not passing HTTP_PROXY=localhost:52994 to docker env.
	! Local proxy ignored: not passing HTTP_PROXY=localhost:52994 to docker env.
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! Local proxy ignored: not passing HTTP_PROXY=localhost:52994 to docker env.
	* Failed to start qemu2 VM. Running "minikube delete -p functional-916000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2236: failed minikube start. args "out/minikube-darwin-arm64 start -p functional-916000 --memory=4000 --apiserver-port=8441 --wait=all --driver=qemu2 ": exit status 80
functional_test.go:2241: start stdout=* [functional-916000] minikube v1.33.1 on Darwin 14.5 (arm64)
- MINIKUBE_LOCATION=19479
- KUBECONFIG=/Users/jenkins/minikube-integration/19479-15750/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_HOME=/Users/jenkins/minikube-integration/19479-15750/.minikube
- MINIKUBE_FORCE_SYSTEMD=
* Using the qemu2 driver based on user configuration
* Automatically selected the socket_vmnet network
* Starting "functional-916000" primary control-plane node in "functional-916000" cluster
* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
OUTPUT: 
ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused

* Deleting "functional-916000" in qemu2 ...
* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
OUTPUT: 
ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused

, want: *Found network options:*
functional_test.go:2246: start stderr=! Local proxy ignored: not passing HTTP_PROXY=localhost:52994 to docker env.
! Local proxy ignored: not passing HTTP_PROXY=localhost:52994 to docker env.
! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
! Local proxy ignored: not passing HTTP_PROXY=localhost:52994 to docker env.
* Failed to start qemu2 VM. Running "minikube delete -p functional-916000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
* 
╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                             │
│    * If the above advice does not help, please let us know:                                 │
│      https://github.com/kubernetes/minikube/issues/new/choose                               │
│                                                                                             │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
│                                                                                             │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
, want: *You appear to be using a proxy*
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-916000 -n functional-916000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-916000 -n functional-916000: exit status 7 (72.014916ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-916000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/StartWithProxy (10.00s)
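The repeated "Local proxy ignored" warnings are expected here: the harness points HTTP_PROXY at a local proxy on localhost:52994, and minikube declines to pass a localhost proxy into the VM's docker env. The assertions fail for a different reason: the "Found network options" and "You appear to be using a proxy" messages the test wants are never reached because the VM itself never starts. The invocation under test, with the proxy environment made explicit (the variable value is the one visible in the stderr above):

    HTTP_PROXY=localhost:52994 out/minikube-darwin-arm64 start -p functional-916000 --memory=4000 --apiserver-port=8441 --wait=all --driver=qemu2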

TestFunctional/serial/SoftStart (5.26s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:659: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-916000 --alsologtostderr -v=8
functional_test.go:659: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-916000 --alsologtostderr -v=8: exit status 80 (5.190657125s)

-- stdout --
	* [functional-916000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19479
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19479-15750/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19479-15750/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "functional-916000" primary control-plane node in "functional-916000" cluster
	* Restarting existing qemu2 VM for "functional-916000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "functional-916000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0819 04:16:54.129728   16496 out.go:345] Setting OutFile to fd 1 ...
	I0819 04:16:54.129866   16496 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 04:16:54.129869   16496 out.go:358] Setting ErrFile to fd 2...
	I0819 04:16:54.129872   16496 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 04:16:54.130006   16496 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19479-15750/.minikube/bin
	I0819 04:16:54.131009   16496 out.go:352] Setting JSON to false
	I0819 04:16:54.147113   16496 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":8182,"bootTime":1724058032,"procs":460,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0819 04:16:54.147192   16496 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0819 04:16:54.152116   16496 out.go:177] * [functional-916000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0819 04:16:54.159058   16496 out.go:177]   - MINIKUBE_LOCATION=19479
	I0819 04:16:54.159092   16496 notify.go:220] Checking for updates...
	I0819 04:16:54.166008   16496 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19479-15750/kubeconfig
	I0819 04:16:54.170038   16496 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0819 04:16:54.173051   16496 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0819 04:16:54.176077   16496 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19479-15750/.minikube
	I0819 04:16:54.179053   16496 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0819 04:16:54.182361   16496 config.go:182] Loaded profile config "functional-916000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0819 04:16:54.182426   16496 driver.go:392] Setting default libvirt URI to qemu:///system
	I0819 04:16:54.187055   16496 out.go:177] * Using the qemu2 driver based on existing profile
	I0819 04:16:54.194004   16496 start.go:297] selected driver: qemu2
	I0819 04:16:54.194011   16496 start.go:901] validating driver "qemu2" against &{Name:functional-916000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22
KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:functional-916000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[]
MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 04:16:54.194052   16496 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0819 04:16:54.196355   16496 cni.go:84] Creating CNI manager for ""
	I0819 04:16:54.196374   16496 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0819 04:16:54.196437   16496 start.go:340] cluster config:
	{Name:functional-916000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:functional-916000 Namespace:default
APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false
DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 04:16:54.200054   16496 iso.go:125] acquiring lock: {Name:mk82d926f4bb778c7e93ba3bc4244459b219e238 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 04:16:54.208035   16496 out.go:177] * Starting "functional-916000" primary control-plane node in "functional-916000" cluster
	I0819 04:16:54.212040   16496 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0819 04:16:54.212061   16496 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19479-15750/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0819 04:16:54.212071   16496 cache.go:56] Caching tarball of preloaded images
	I0819 04:16:54.212143   16496 preload.go:172] Found /Users/jenkins/minikube-integration/19479-15750/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0819 04:16:54.212148   16496 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0819 04:16:54.212200   16496 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19479-15750/.minikube/profiles/functional-916000/config.json ...
	I0819 04:16:54.212669   16496 start.go:360] acquireMachinesLock for functional-916000: {Name:mkd71805c0324a991576936a19be749d2702d472 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 04:16:54.212697   16496 start.go:364] duration metric: took 22.583µs to acquireMachinesLock for "functional-916000"
	I0819 04:16:54.212707   16496 start.go:96] Skipping create...Using existing machine configuration
	I0819 04:16:54.212713   16496 fix.go:54] fixHost starting: 
	I0819 04:16:54.212830   16496 fix.go:112] recreateIfNeeded on functional-916000: state=Stopped err=<nil>
	W0819 04:16:54.212838   16496 fix.go:138] unexpected machine state, will restart: <nil>
	I0819 04:16:54.220054   16496 out.go:177] * Restarting existing qemu2 VM for "functional-916000" ...
	I0819 04:16:54.223856   16496 qemu.go:418] Using hvf for hardware acceleration
	I0819 04:16:54.223889   16496 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/functional-916000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19479-15750/.minikube/machines/functional-916000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/functional-916000/qemu.pid -device virtio-net-pci,netdev=net0,mac=3e:e7:fe:ee:76:d1 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/functional-916000/disk.qcow2
	I0819 04:16:54.225958   16496 main.go:141] libmachine: STDOUT: 
	I0819 04:16:54.225987   16496 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0819 04:16:54.226019   16496 fix.go:56] duration metric: took 13.306958ms for fixHost
	I0819 04:16:54.226023   16496 start.go:83] releasing machines lock for "functional-916000", held for 13.32175ms
	W0819 04:16:54.226029   16496 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0819 04:16:54.226067   16496 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0819 04:16:54.226072   16496 start.go:729] Will try again in 5 seconds ...
	I0819 04:16:59.228138   16496 start.go:360] acquireMachinesLock for functional-916000: {Name:mkd71805c0324a991576936a19be749d2702d472 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 04:16:59.228532   16496 start.go:364] duration metric: took 306.958µs to acquireMachinesLock for "functional-916000"
	I0819 04:16:59.228666   16496 start.go:96] Skipping create...Using existing machine configuration
	I0819 04:16:59.228687   16496 fix.go:54] fixHost starting: 
	I0819 04:16:59.229411   16496 fix.go:112] recreateIfNeeded on functional-916000: state=Stopped err=<nil>
	W0819 04:16:59.229441   16496 fix.go:138] unexpected machine state, will restart: <nil>
	I0819 04:16:59.237841   16496 out.go:177] * Restarting existing qemu2 VM for "functional-916000" ...
	I0819 04:16:59.241850   16496 qemu.go:418] Using hvf for hardware acceleration
	I0819 04:16:59.241997   16496 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/functional-916000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19479-15750/.minikube/machines/functional-916000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/functional-916000/qemu.pid -device virtio-net-pci,netdev=net0,mac=3e:e7:fe:ee:76:d1 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/functional-916000/disk.qcow2
	I0819 04:16:59.251446   16496 main.go:141] libmachine: STDOUT: 
	I0819 04:16:59.251512   16496 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0819 04:16:59.251595   16496 fix.go:56] duration metric: took 22.90575ms for fixHost
	I0819 04:16:59.251612   16496 start.go:83] releasing machines lock for "functional-916000", held for 23.060667ms
	W0819 04:16:59.251829   16496 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p functional-916000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p functional-916000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0819 04:16:59.259815   16496 out.go:201] 
	W0819 04:16:59.263914   16496 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0819 04:16:59.263933   16496 out.go:270] * 
	* 
	W0819 04:16:59.266113   16496 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0819 04:16:59.275779   16496 out.go:201] 

** /stderr **
functional_test.go:661: failed to soft start minikube. args "out/minikube-darwin-arm64 start -p functional-916000 --alsologtostderr -v=8": exit status 80
functional_test.go:663: soft start took 5.19234675s for "functional-916000" cluster.
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-916000 -n functional-916000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-916000 -n functional-916000: exit status 7 (67.505625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-916000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/SoftStart (5.26s)
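Unlike the fresh-start failures above, SoftStart finds the existing stopped profile, so execution goes through the fixHost/"Restarting existing qemu2 VM" path rather than createHost: no disk image is rebuilt and nothing is deleted, there is just a single retry five seconds later against the same dead socket. The restart path can be exercised directly against the leftover profile:

    # Reuses the existing machine config instead of creating a new VM:
    out/minikube-darwin-arm64 start -p functional-916000 --alsologtostderr -v=8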

TestFunctional/serial/KubeContext (0.06s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:681: (dbg) Run:  kubectl config current-context
functional_test.go:681: (dbg) Non-zero exit: kubectl config current-context: exit status 1 (29.185125ms)

** stderr ** 
	error: current-context is not set

** /stderr **
functional_test.go:683: failed to get current-context. args "kubectl config current-context" : exit status 1
functional_test.go:687: expected current-context = "functional-916000", but got *""*
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-916000 -n functional-916000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-916000 -n functional-916000: exit status 7 (31.072375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-916000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/KubeContext (0.06s)
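Because no `start` ever completed, a functional-916000 entry was never written to the kubeconfig; that missing context is what this failure and the next one actually detect. It can be confirmed directly against the kubeconfig the harness points at:

    # Lists contexts from the harness kubeconfig; functional-916000 is absent,
    # so current-context is unset and --context lookups fail:
    KUBECONFIG=/Users/jenkins/minikube-integration/19479-15750/kubeconfig kubectl config get-contexts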

TestFunctional/serial/KubectlGetPods (0.06s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:696: (dbg) Run:  kubectl --context functional-916000 get po -A
functional_test.go:696: (dbg) Non-zero exit: kubectl --context functional-916000 get po -A: exit status 1 (26.156916ms)

** stderr ** 
	Error in configuration: context was not found for specified context: functional-916000

** /stderr **
functional_test.go:698: failed to get kubectl pods: args "kubectl --context functional-916000 get po -A" : exit status 1
functional_test.go:702: expected stderr to be empty but got *"Error in configuration: context was not found for specified context: functional-916000\n"*: args "kubectl --context functional-916000 get po -A"
functional_test.go:705: expected stdout to include *kube-system* but got *""*. args: "kubectl --context functional-916000 get po -A"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-916000 -n functional-916000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-916000 -n functional-916000: exit status 7 (30.465167ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-916000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/KubectlGetPods (0.06s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.04s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1124: (dbg) Run:  out/minikube-darwin-arm64 -p functional-916000 ssh sudo crictl images
functional_test.go:1124: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-916000 ssh sudo crictl images: exit status 83 (40.848459ms)

-- stdout --
	* The control-plane node functional-916000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-916000"

-- /stdout --
functional_test.go:1126: failed to get images by "out/minikube-darwin-arm64 -p functional-916000 ssh sudo crictl images" ssh exit status 83
functional_test.go:1130: expected sha for pause:3.3 "3d18732f8686c" to be in the output but got *
-- stdout --
	* The control-plane node functional-916000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-916000"

-- /stdout --*
--- FAIL: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.04s)
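This check normally ssh'es into the node and looks for the pause:3.3 image (sha prefix 3d18732f8686c, per the assertion above) in the `crictl images` listing; with the host stopped, the ssh subcommand short-circuits with exit 83 before crictl ever runs. Against a running cluster, roughly the same check done by hand would be:

    out/minikube-darwin-arm64 -p functional-916000 ssh sudo crictl images | grep 3d18732f8686c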

TestFunctional/serial/CacheCmd/cache/cache_reload (0.16s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1147: (dbg) Run:  out/minikube-darwin-arm64 -p functional-916000 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1147: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-916000 ssh sudo docker rmi registry.k8s.io/pause:latest: exit status 83 (42.843083ms)

-- stdout --
	* The control-plane node functional-916000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-916000"

-- /stdout --
functional_test.go:1150: failed to manually delete image "out/minikube-darwin-arm64 -p functional-916000 ssh sudo docker rmi registry.k8s.io/pause:latest" : exit status 83
functional_test.go:1153: (dbg) Run:  out/minikube-darwin-arm64 -p functional-916000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-916000 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 83 (42.842042ms)

-- stdout --
	* The control-plane node functional-916000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-916000"

-- /stdout --
functional_test.go:1158: (dbg) Run:  out/minikube-darwin-arm64 -p functional-916000 cache reload
functional_test.go:1163: (dbg) Run:  out/minikube-darwin-arm64 -p functional-916000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1163: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-916000 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 83 (41.899625ms)

-- stdout --
	* The control-plane node functional-916000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-916000"

-- /stdout --
functional_test.go:1165: expected "out/minikube-darwin-arm64 -p functional-916000 ssh sudo crictl inspecti registry.k8s.io/pause:latest" to run successfully but got error: exit status 83
--- FAIL: TestFunctional/serial/CacheCmd/cache/cache_reload (0.16s)
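Condensed, the sequence this test exercises is: delete the cached pause image from the node's docker, confirm crictl no longer finds it, run `cache reload`, then confirm crictl finds it again. All four commands appear verbatim above; only `cache reload` exited zero here, while each ssh step needs the running guest and bailed with exit 83:

    out/minikube-darwin-arm64 -p functional-916000 ssh sudo docker rmi registry.k8s.io/pause:latest
    out/minikube-darwin-arm64 -p functional-916000 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # should fail: image gone
    out/minikube-darwin-arm64 -p functional-916000 cache reload
    out/minikube-darwin-arm64 -p functional-916000 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # should now succeed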

TestFunctional/serial/MinikubeKubectlCmd (0.78s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:716: (dbg) Run:  out/minikube-darwin-arm64 -p functional-916000 kubectl -- --context functional-916000 get pods
functional_test.go:716: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-916000 kubectl -- --context functional-916000 get pods: exit status 1 (746.452958ms)
** stderr ** 
	Error in configuration: 
	* context was not found for specified context: functional-916000
	* no server found for cluster "functional-916000"
** /stderr **
functional_test.go:719: failed to get pods. args "out/minikube-darwin-arm64 -p functional-916000 kubectl -- --context functional-916000 get pods": exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-916000 -n functional-916000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-916000 -n functional-916000: exit status 7 (32.575208ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-916000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/MinikubeKubectlCmd (0.78s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (1.05s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:741: (dbg) Run:  out/kubectl --context functional-916000 get pods
functional_test.go:741: (dbg) Non-zero exit: out/kubectl --context functional-916000 get pods: exit status 1 (1.016092542s)
** stderr ** 
	Error in configuration: 
	* context was not found for specified context: functional-916000
	* no server found for cluster "functional-916000"
** /stderr **
functional_test.go:744: failed to run kubectl directly. args "out/kubectl --context functional-916000 get pods": exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-916000 -n functional-916000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-916000 -n functional-916000: exit status 7 (29.952417ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-916000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/MinikubeKubectlCmdDirectly (1.05s)
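
Note: both kubectl variants (MinikubeKubectlCmd above and MinikubeKubectlCmdDirectly here) fail with "context was not found" for the same reason: the earlier start never completed, so minikube never wrote a functional-916000 entry into the kubeconfig. A quick host-side confirmation, assuming kubectl is on PATH:

	kubectl config get-contexts                                  # functional-916000 should be listed; in this run it is absent
	kubectl config view -o jsonpath='{.contexts[*].name}'; echo  # same check, script-friendly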

TestFunctional/serial/ExtraConfig (5.25s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:757: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-916000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:757: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-916000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: exit status 80 (5.181481542s)
-- stdout --
	* [functional-916000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19479
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19479-15750/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19479-15750/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "functional-916000" primary control-plane node in "functional-916000" cluster
	* Restarting existing qemu2 VM for "functional-916000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "functional-916000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p functional-916000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
functional_test.go:759: failed to restart minikube. args "out/minikube-darwin-arm64 start -p functional-916000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all": exit status 80
functional_test.go:761: restart took 5.182111625s for "functional-916000" cluster.
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-916000 -n functional-916000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-916000 -n functional-916000: exit status 7 (70.548166ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-916000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/ExtraConfig (5.25s)
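
Note: this restart fails before Kubernetes is involved. The qemu2 driver launches the VM through /opt/socket_vmnet/bin/socket_vmnet_client (visible in the libmachine command line in the LogsCmd output below), and that client cannot dial /var/run/socket_vmnet. Two host-side checks, assuming the socket_vmnet layout shown in the log:

	ls -l /var/run/socket_vmnet    # the Unix socket the client dials; missing or stale when the daemon is down
	pgrep -fl socket_vmnet         # check whether the socket_vmnet daemon process is running at all

Until the socket_vmnet daemon is restarted (via whatever service manager installed it), every qemu2 start on this agent will presumably exit 80 the same way.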

TestFunctional/serial/ComponentHealth (0.06s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:810: (dbg) Run:  kubectl --context functional-916000 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:810: (dbg) Non-zero exit: kubectl --context functional-916000 get po -l tier=control-plane -n kube-system -o=json: exit status 1 (29.310209ms)
** stderr ** 
	error: context "functional-916000" does not exist
** /stderr **
functional_test.go:812: failed to get components. args "kubectl --context functional-916000 get po -l tier=control-plane -n kube-system -o=json": exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-916000 -n functional-916000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-916000 -n functional-916000: exit status 7 (30.685959ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-916000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/ComponentHealth (0.06s)

TestFunctional/serial/LogsCmd (0.08s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1236: (dbg) Run:  out/minikube-darwin-arm64 -p functional-916000 logs
functional_test.go:1236: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-916000 logs: exit status 83 (81.271709ms)
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                   Args                                   |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only                                                  | download-only-648000 | jenkins | v1.33.1 | 19 Aug 24 04:15 PDT |                     |
	|         | -p download-only-648000                                                  |                      |         |         |                     |                     |
	|         | --force --alsologtostderr                                                |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                                             |                      |         |         |                     |                     |
	|         | --container-runtime=docker                                               |                      |         |         |                     |                     |
	|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
	| delete  | --all                                                                    | minikube             | jenkins | v1.33.1 | 19 Aug 24 04:16 PDT | 19 Aug 24 04:16 PDT |
	| delete  | -p download-only-648000                                                  | download-only-648000 | jenkins | v1.33.1 | 19 Aug 24 04:16 PDT | 19 Aug 24 04:16 PDT |
	| start   | -o=json --download-only                                                  | download-only-956000 | jenkins | v1.33.1 | 19 Aug 24 04:16 PDT |                     |
	|         | -p download-only-956000                                                  |                      |         |         |                     |                     |
	|         | --force --alsologtostderr                                                |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                                             |                      |         |         |                     |                     |
	|         | --container-runtime=docker                                               |                      |         |         |                     |                     |
	|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
	| delete  | --all                                                                    | minikube             | jenkins | v1.33.1 | 19 Aug 24 04:16 PDT | 19 Aug 24 04:16 PDT |
	| delete  | -p download-only-956000                                                  | download-only-956000 | jenkins | v1.33.1 | 19 Aug 24 04:16 PDT | 19 Aug 24 04:16 PDT |
	| delete  | -p download-only-648000                                                  | download-only-648000 | jenkins | v1.33.1 | 19 Aug 24 04:16 PDT | 19 Aug 24 04:16 PDT |
	| delete  | -p download-only-956000                                                  | download-only-956000 | jenkins | v1.33.1 | 19 Aug 24 04:16 PDT | 19 Aug 24 04:16 PDT |
	| start   | --download-only -p                                                       | binary-mirror-577000 | jenkins | v1.33.1 | 19 Aug 24 04:16 PDT |                     |
	|         | binary-mirror-577000                                                     |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                        |                      |         |         |                     |                     |
	|         | --binary-mirror                                                          |                      |         |         |                     |                     |
	|         | http://127.0.0.1:52958                                                   |                      |         |         |                     |                     |
	|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-577000                                                  | binary-mirror-577000 | jenkins | v1.33.1 | 19 Aug 24 04:16 PDT | 19 Aug 24 04:16 PDT |
	| addons  | enable dashboard -p                                                      | addons-939000        | jenkins | v1.33.1 | 19 Aug 24 04:16 PDT |                     |
	|         | addons-939000                                                            |                      |         |         |                     |                     |
	| addons  | disable dashboard -p                                                     | addons-939000        | jenkins | v1.33.1 | 19 Aug 24 04:16 PDT |                     |
	|         | addons-939000                                                            |                      |         |         |                     |                     |
	| start   | -p addons-939000 --wait=true                                             | addons-939000        | jenkins | v1.33.1 | 19 Aug 24 04:16 PDT |                     |
	|         | --memory=4000 --alsologtostderr                                          |                      |         |         |                     |                     |
	|         | --addons=registry                                                        |                      |         |         |                     |                     |
	|         | --addons=metrics-server                                                  |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                 |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                             |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                                                        |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                   |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                     |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                            |                      |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                           |                      |         |         |                     |                     |
	|         | --driver=qemu2  --addons=ingress                                         |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                                                     |                      |         |         |                     |                     |
	| delete  | -p addons-939000                                                         | addons-939000        | jenkins | v1.33.1 | 19 Aug 24 04:16 PDT | 19 Aug 24 04:16 PDT |
	| start   | -p nospam-373000 -n=1 --memory=2250 --wait=false                         | nospam-373000        | jenkins | v1.33.1 | 19 Aug 24 04:16 PDT |                     |
	|         | --log_dir=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-373000 |                      |         |         |                     |                     |
	|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
	| start   | nospam-373000 --log_dir                                                  | nospam-373000        | jenkins | v1.33.1 | 19 Aug 24 04:16 PDT |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-373000           |                      |         |         |                     |                     |
	|         | start --dry-run                                                          |                      |         |         |                     |                     |
	| start   | nospam-373000 --log_dir                                                  | nospam-373000        | jenkins | v1.33.1 | 19 Aug 24 04:16 PDT |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-373000           |                      |         |         |                     |                     |
	|         | start --dry-run                                                          |                      |         |         |                     |                     |
	| start   | nospam-373000 --log_dir                                                  | nospam-373000        | jenkins | v1.33.1 | 19 Aug 24 04:16 PDT |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-373000           |                      |         |         |                     |                     |
	|         | start --dry-run                                                          |                      |         |         |                     |                     |
	| pause   | nospam-373000 --log_dir                                                  | nospam-373000        | jenkins | v1.33.1 | 19 Aug 24 04:16 PDT |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-373000           |                      |         |         |                     |                     |
	|         | pause                                                                    |                      |         |         |                     |                     |
	| pause   | nospam-373000 --log_dir                                                  | nospam-373000        | jenkins | v1.33.1 | 19 Aug 24 04:16 PDT |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-373000           |                      |         |         |                     |                     |
	|         | pause                                                                    |                      |         |         |                     |                     |
	| pause   | nospam-373000 --log_dir                                                  | nospam-373000        | jenkins | v1.33.1 | 19 Aug 24 04:16 PDT |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-373000           |                      |         |         |                     |                     |
	|         | pause                                                                    |                      |         |         |                     |                     |
	| unpause | nospam-373000 --log_dir                                                  | nospam-373000        | jenkins | v1.33.1 | 19 Aug 24 04:16 PDT |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-373000           |                      |         |         |                     |                     |
	|         | unpause                                                                  |                      |         |         |                     |                     |
	| unpause | nospam-373000 --log_dir                                                  | nospam-373000        | jenkins | v1.33.1 | 19 Aug 24 04:16 PDT |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-373000           |                      |         |         |                     |                     |
	|         | unpause                                                                  |                      |         |         |                     |                     |
	| unpause | nospam-373000 --log_dir                                                  | nospam-373000        | jenkins | v1.33.1 | 19 Aug 24 04:16 PDT |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-373000           |                      |         |         |                     |                     |
	|         | unpause                                                                  |                      |         |         |                     |                     |
	| stop    | nospam-373000 --log_dir                                                  | nospam-373000        | jenkins | v1.33.1 | 19 Aug 24 04:16 PDT | 19 Aug 24 04:16 PDT |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-373000           |                      |         |         |                     |                     |
	|         | stop                                                                     |                      |         |         |                     |                     |
	| stop    | nospam-373000 --log_dir                                                  | nospam-373000        | jenkins | v1.33.1 | 19 Aug 24 04:16 PDT | 19 Aug 24 04:16 PDT |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-373000           |                      |         |         |                     |                     |
	|         | stop                                                                     |                      |         |         |                     |                     |
	| stop    | nospam-373000 --log_dir                                                  | nospam-373000        | jenkins | v1.33.1 | 19 Aug 24 04:16 PDT | 19 Aug 24 04:16 PDT |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-373000           |                      |         |         |                     |                     |
	|         | stop                                                                     |                      |         |         |                     |                     |
	| delete  | -p nospam-373000                                                         | nospam-373000        | jenkins | v1.33.1 | 19 Aug 24 04:16 PDT | 19 Aug 24 04:16 PDT |
	| start   | -p functional-916000                                                     | functional-916000    | jenkins | v1.33.1 | 19 Aug 24 04:16 PDT |                     |
	|         | --memory=4000                                                            |                      |         |         |                     |                     |
	|         | --apiserver-port=8441                                                    |                      |         |         |                     |                     |
	|         | --wait=all --driver=qemu2                                                |                      |         |         |                     |                     |
	| start   | -p functional-916000                                                     | functional-916000    | jenkins | v1.33.1 | 19 Aug 24 04:16 PDT |                     |
	|         | --alsologtostderr -v=8                                                   |                      |         |         |                     |                     |
	| cache   | functional-916000 cache add                                              | functional-916000    | jenkins | v1.33.1 | 19 Aug 24 04:16 PDT | 19 Aug 24 04:17 PDT |
	|         | registry.k8s.io/pause:3.1                                                |                      |         |         |                     |                     |
	| cache   | functional-916000 cache add                                              | functional-916000    | jenkins | v1.33.1 | 19 Aug 24 04:17 PDT | 19 Aug 24 04:17 PDT |
	|         | registry.k8s.io/pause:3.3                                                |                      |         |         |                     |                     |
	| cache   | functional-916000 cache add                                              | functional-916000    | jenkins | v1.33.1 | 19 Aug 24 04:17 PDT | 19 Aug 24 04:17 PDT |
	|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
	| cache   | functional-916000 cache add                                              | functional-916000    | jenkins | v1.33.1 | 19 Aug 24 04:17 PDT | 19 Aug 24 04:17 PDT |
	|         | minikube-local-cache-test:functional-916000                              |                      |         |         |                     |                     |
	| cache   | functional-916000 cache delete                                           | functional-916000    | jenkins | v1.33.1 | 19 Aug 24 04:17 PDT | 19 Aug 24 04:17 PDT |
	|         | minikube-local-cache-test:functional-916000                              |                      |         |         |                     |                     |
	| cache   | delete                                                                   | minikube             | jenkins | v1.33.1 | 19 Aug 24 04:17 PDT | 19 Aug 24 04:17 PDT |
	|         | registry.k8s.io/pause:3.3                                                |                      |         |         |                     |                     |
	| cache   | list                                                                     | minikube             | jenkins | v1.33.1 | 19 Aug 24 04:17 PDT | 19 Aug 24 04:17 PDT |
	| ssh     | functional-916000 ssh sudo                                               | functional-916000    | jenkins | v1.33.1 | 19 Aug 24 04:17 PDT |                     |
	|         | crictl images                                                            |                      |         |         |                     |                     |
	| ssh     | functional-916000                                                        | functional-916000    | jenkins | v1.33.1 | 19 Aug 24 04:17 PDT |                     |
	|         | ssh sudo docker rmi                                                      |                      |         |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
	| ssh     | functional-916000 ssh                                                    | functional-916000    | jenkins | v1.33.1 | 19 Aug 24 04:17 PDT |                     |
	|         | sudo crictl inspecti                                                     |                      |         |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
	| cache   | functional-916000 cache reload                                           | functional-916000    | jenkins | v1.33.1 | 19 Aug 24 04:17 PDT | 19 Aug 24 04:17 PDT |
	| ssh     | functional-916000 ssh                                                    | functional-916000    | jenkins | v1.33.1 | 19 Aug 24 04:17 PDT |                     |
	|         | sudo crictl inspecti                                                     |                      |         |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
	| cache   | delete                                                                   | minikube             | jenkins | v1.33.1 | 19 Aug 24 04:17 PDT | 19 Aug 24 04:17 PDT |
	|         | registry.k8s.io/pause:3.1                                                |                      |         |         |                     |                     |
	| cache   | delete                                                                   | minikube             | jenkins | v1.33.1 | 19 Aug 24 04:17 PDT | 19 Aug 24 04:17 PDT |
	|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
	| kubectl | functional-916000 kubectl --                                             | functional-916000    | jenkins | v1.33.1 | 19 Aug 24 04:17 PDT |                     |
	|         | --context functional-916000                                              |                      |         |         |                     |                     |
	|         | get pods                                                                 |                      |         |         |                     |                     |
	| start   | -p functional-916000                                                     | functional-916000    | jenkins | v1.33.1 | 19 Aug 24 04:17 PDT |                     |
	|         | --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision |                      |         |         |                     |                     |
	|         | --wait=all                                                               |                      |         |         |                     |                     |
	|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/19 04:17:04
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.22.5 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0819 04:17:04.449632   16577 out.go:345] Setting OutFile to fd 1 ...
	I0819 04:17:04.449752   16577 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 04:17:04.449753   16577 out.go:358] Setting ErrFile to fd 2...
	I0819 04:17:04.449755   16577 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 04:17:04.449867   16577 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19479-15750/.minikube/bin
	I0819 04:17:04.450884   16577 out.go:352] Setting JSON to false
	I0819 04:17:04.466754   16577 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":8192,"bootTime":1724058032,"procs":456,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0819 04:17:04.466812   16577 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0819 04:17:04.471335   16577 out.go:177] * [functional-916000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0819 04:17:04.476243   16577 out.go:177]   - MINIKUBE_LOCATION=19479
	I0819 04:17:04.476282   16577 notify.go:220] Checking for updates...
	I0819 04:17:04.485275   16577 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19479-15750/kubeconfig
	I0819 04:17:04.489255   16577 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0819 04:17:04.492244   16577 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0819 04:17:04.495351   16577 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19479-15750/.minikube
	I0819 04:17:04.498253   16577 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0819 04:17:04.501587   16577 config.go:182] Loaded profile config "functional-916000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0819 04:17:04.501641   16577 driver.go:392] Setting default libvirt URI to qemu:///system
	I0819 04:17:04.506288   16577 out.go:177] * Using the qemu2 driver based on existing profile
	I0819 04:17:04.513331   16577 start.go:297] selected driver: qemu2
	I0819 04:17:04.513336   16577 start.go:901] validating driver "qemu2" against &{Name:functional-916000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kub
ernetesVersion:v1.31.0 ClusterName:functional-916000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[
] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 04:17:04.513408   16577 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0819 04:17:04.515748   16577 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0819 04:17:04.515769   16577 cni.go:84] Creating CNI manager for ""
	I0819 04:17:04.515783   16577 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0819 04:17:04.515828   16577 start.go:340] cluster config:
	{Name:functional-916000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:functional-916000 Namespace:default APIServe
rHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[
] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 04:17:04.519429   16577 iso.go:125] acquiring lock: {Name:mk82d926f4bb778c7e93ba3bc4244459b219e238 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 04:17:04.527231   16577 out.go:177] * Starting "functional-916000" primary control-plane node in "functional-916000" cluster
	I0819 04:17:04.531283   16577 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0819 04:17:04.531299   16577 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19479-15750/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0819 04:17:04.531307   16577 cache.go:56] Caching tarball of preloaded images
	I0819 04:17:04.531377   16577 preload.go:172] Found /Users/jenkins/minikube-integration/19479-15750/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0819 04:17:04.531381   16577 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0819 04:17:04.531440   16577 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19479-15750/.minikube/profiles/functional-916000/config.json ...
	I0819 04:17:04.531891   16577 start.go:360] acquireMachinesLock for functional-916000: {Name:mkd71805c0324a991576936a19be749d2702d472 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 04:17:04.531925   16577 start.go:364] duration metric: took 29µs to acquireMachinesLock for "functional-916000"
	I0819 04:17:04.531933   16577 start.go:96] Skipping create...Using existing machine configuration
	I0819 04:17:04.531939   16577 fix.go:54] fixHost starting: 
	I0819 04:17:04.532065   16577 fix.go:112] recreateIfNeeded on functional-916000: state=Stopped err=<nil>
	W0819 04:17:04.532071   16577 fix.go:138] unexpected machine state, will restart: <nil>
	I0819 04:17:04.539253   16577 out.go:177] * Restarting existing qemu2 VM for "functional-916000" ...
	I0819 04:17:04.543099   16577 qemu.go:418] Using hvf for hardware acceleration
	I0819 04:17:04.543138   16577 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/functional-916000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19479-15750/.minikube/machines/functional-916000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/functional-916000/qemu.pid -device virtio-net-pci,netdev=net0,mac=3e:e7:fe:ee:76:d1 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/functional-916000/disk.qcow2
	I0819 04:17:04.545219   16577 main.go:141] libmachine: STDOUT: 
	I0819 04:17:04.545237   16577 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0819 04:17:04.545266   16577 fix.go:56] duration metric: took 13.328042ms for fixHost
	I0819 04:17:04.545269   16577 start.go:83] releasing machines lock for "functional-916000", held for 13.342ms
	W0819 04:17:04.545275   16577 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0819 04:17:04.545300   16577 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0819 04:17:04.545304   16577 start.go:729] Will try again in 5 seconds ...
	I0819 04:17:09.547358   16577 start.go:360] acquireMachinesLock for functional-916000: {Name:mkd71805c0324a991576936a19be749d2702d472 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 04:17:09.547704   16577 start.go:364] duration metric: took 289.167µs to acquireMachinesLock for "functional-916000"
	I0819 04:17:09.547849   16577 start.go:96] Skipping create...Using existing machine configuration
	I0819 04:17:09.547863   16577 fix.go:54] fixHost starting: 
	I0819 04:17:09.548634   16577 fix.go:112] recreateIfNeeded on functional-916000: state=Stopped err=<nil>
	W0819 04:17:09.548654   16577 fix.go:138] unexpected machine state, will restart: <nil>
	I0819 04:17:09.553013   16577 out.go:177] * Restarting existing qemu2 VM for "functional-916000" ...
	I0819 04:17:09.557057   16577 qemu.go:418] Using hvf for hardware acceleration
	I0819 04:17:09.557365   16577 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/functional-916000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19479-15750/.minikube/machines/functional-916000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/functional-916000/qemu.pid -device virtio-net-pci,netdev=net0,mac=3e:e7:fe:ee:76:d1 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/functional-916000/disk.qcow2
	I0819 04:17:09.566359   16577 main.go:141] libmachine: STDOUT: 
	I0819 04:17:09.566404   16577 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0819 04:17:09.566493   16577 fix.go:56] duration metric: took 18.634875ms for fixHost
	I0819 04:17:09.566508   16577 start.go:83] releasing machines lock for "functional-916000", held for 18.791292ms
	W0819 04:17:09.566710   16577 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p functional-916000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0819 04:17:09.575012   16577 out.go:201] 
	W0819 04:17:09.579238   16577 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0819 04:17:09.579280   16577 out.go:270] * 
	W0819 04:17:09.582814   16577 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0819 04:17:09.592016   16577 out.go:201] 
	
	
	* The control-plane node functional-916000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-916000"
-- /stdout --
functional_test.go:1238: out/minikube-darwin-arm64 -p functional-916000 logs failed: exit status 83
functional_test.go:1228: expected minikube logs to include word: -"Linux"- but got 
***
==> Audit <==
|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
| Command |                                   Args                                   |       Profile        |  User   | Version |     Start Time      |      End Time       |
|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
| start   | -o=json --download-only                                                  | download-only-648000 | jenkins | v1.33.1 | 19 Aug 24 04:15 PDT |                     |
|         | -p download-only-648000                                                  |                      |         |         |                     |                     |
|         | --force --alsologtostderr                                                |                      |         |         |                     |                     |
|         | --kubernetes-version=v1.20.0                                             |                      |         |         |                     |                     |
|         | --container-runtime=docker                                               |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| delete  | --all                                                                    | minikube             | jenkins | v1.33.1 | 19 Aug 24 04:16 PDT | 19 Aug 24 04:16 PDT |
| delete  | -p download-only-648000                                                  | download-only-648000 | jenkins | v1.33.1 | 19 Aug 24 04:16 PDT | 19 Aug 24 04:16 PDT |
| start   | -o=json --download-only                                                  | download-only-956000 | jenkins | v1.33.1 | 19 Aug 24 04:16 PDT |                     |
|         | -p download-only-956000                                                  |                      |         |         |                     |                     |
|         | --force --alsologtostderr                                                |                      |         |         |                     |                     |
|         | --kubernetes-version=v1.31.0                                             |                      |         |         |                     |                     |
|         | --container-runtime=docker                                               |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| delete  | --all                                                                    | minikube             | jenkins | v1.33.1 | 19 Aug 24 04:16 PDT | 19 Aug 24 04:16 PDT |
| delete  | -p download-only-956000                                                  | download-only-956000 | jenkins | v1.33.1 | 19 Aug 24 04:16 PDT | 19 Aug 24 04:16 PDT |
| delete  | -p download-only-648000                                                  | download-only-648000 | jenkins | v1.33.1 | 19 Aug 24 04:16 PDT | 19 Aug 24 04:16 PDT |
| delete  | -p download-only-956000                                                  | download-only-956000 | jenkins | v1.33.1 | 19 Aug 24 04:16 PDT | 19 Aug 24 04:16 PDT |
| start   | --download-only -p                                                       | binary-mirror-577000 | jenkins | v1.33.1 | 19 Aug 24 04:16 PDT |                     |
|         | binary-mirror-577000                                                     |                      |         |         |                     |                     |
|         | --alsologtostderr                                                        |                      |         |         |                     |                     |
|         | --binary-mirror                                                          |                      |         |         |                     |                     |
|         | http://127.0.0.1:52958                                                   |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| delete  | -p binary-mirror-577000                                                  | binary-mirror-577000 | jenkins | v1.33.1 | 19 Aug 24 04:16 PDT | 19 Aug 24 04:16 PDT |
| addons  | enable dashboard -p                                                      | addons-939000        | jenkins | v1.33.1 | 19 Aug 24 04:16 PDT |                     |
|         | addons-939000                                                            |                      |         |         |                     |                     |
| addons  | disable dashboard -p                                                     | addons-939000        | jenkins | v1.33.1 | 19 Aug 24 04:16 PDT |                     |
|         | addons-939000                                                            |                      |         |         |                     |                     |
| start   | -p addons-939000 --wait=true                                             | addons-939000        | jenkins | v1.33.1 | 19 Aug 24 04:16 PDT |                     |
|         | --memory=4000 --alsologtostderr                                          |                      |         |         |                     |                     |
|         | --addons=registry                                                        |                      |         |         |                     |                     |
|         | --addons=metrics-server                                                  |                      |         |         |                     |                     |
|         | --addons=volumesnapshots                                                 |                      |         |         |                     |                     |
|         | --addons=csi-hostpath-driver                                             |                      |         |         |                     |                     |
|         | --addons=gcp-auth                                                        |                      |         |         |                     |                     |
|         | --addons=cloud-spanner                                                   |                      |         |         |                     |                     |
|         | --addons=inspektor-gadget                                                |                      |         |         |                     |                     |
|         | --addons=storage-provisioner-rancher                                     |                      |         |         |                     |                     |
|         | --addons=nvidia-device-plugin                                            |                      |         |         |                     |                     |
|         | --addons=yakd --addons=volcano                                           |                      |         |         |                     |                     |
|         | --driver=qemu2  --addons=ingress                                         |                      |         |         |                     |                     |
|         | --addons=ingress-dns                                                     |                      |         |         |                     |                     |
| delete  | -p addons-939000                                                         | addons-939000        | jenkins | v1.33.1 | 19 Aug 24 04:16 PDT | 19 Aug 24 04:16 PDT |
| start   | -p nospam-373000 -n=1 --memory=2250 --wait=false                         | nospam-373000        | jenkins | v1.33.1 | 19 Aug 24 04:16 PDT |                     |
|         | --log_dir=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-373000 |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| start   | nospam-373000 --log_dir                                                  | nospam-373000        | jenkins | v1.33.1 | 19 Aug 24 04:16 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-373000           |                      |         |         |                     |                     |
|         | start --dry-run                                                          |                      |         |         |                     |                     |
| start   | nospam-373000 --log_dir                                                  | nospam-373000        | jenkins | v1.33.1 | 19 Aug 24 04:16 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-373000           |                      |         |         |                     |                     |
|         | start --dry-run                                                          |                      |         |         |                     |                     |
| start   | nospam-373000 --log_dir                                                  | nospam-373000        | jenkins | v1.33.1 | 19 Aug 24 04:16 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-373000           |                      |         |         |                     |                     |
|         | start --dry-run                                                          |                      |         |         |                     |                     |
| pause   | nospam-373000 --log_dir                                                  | nospam-373000        | jenkins | v1.33.1 | 19 Aug 24 04:16 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-373000           |                      |         |         |                     |                     |
|         | pause                                                                    |                      |         |         |                     |                     |
| pause   | nospam-373000 --log_dir                                                  | nospam-373000        | jenkins | v1.33.1 | 19 Aug 24 04:16 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-373000           |                      |         |         |                     |                     |
|         | pause                                                                    |                      |         |         |                     |                     |
| pause   | nospam-373000 --log_dir                                                  | nospam-373000        | jenkins | v1.33.1 | 19 Aug 24 04:16 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-373000           |                      |         |         |                     |                     |
|         | pause                                                                    |                      |         |         |                     |                     |
| unpause | nospam-373000 --log_dir                                                  | nospam-373000        | jenkins | v1.33.1 | 19 Aug 24 04:16 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-373000           |                      |         |         |                     |                     |
|         | unpause                                                                  |                      |         |         |                     |                     |
| unpause | nospam-373000 --log_dir                                                  | nospam-373000        | jenkins | v1.33.1 | 19 Aug 24 04:16 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-373000           |                      |         |         |                     |                     |
|         | unpause                                                                  |                      |         |         |                     |                     |
| unpause | nospam-373000 --log_dir                                                  | nospam-373000        | jenkins | v1.33.1 | 19 Aug 24 04:16 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-373000           |                      |         |         |                     |                     |
|         | unpause                                                                  |                      |         |         |                     |                     |
| stop    | nospam-373000 --log_dir                                                  | nospam-373000        | jenkins | v1.33.1 | 19 Aug 24 04:16 PDT | 19 Aug 24 04:16 PDT |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-373000           |                      |         |         |                     |                     |
|         | stop                                                                     |                      |         |         |                     |                     |
| stop    | nospam-373000 --log_dir                                                  | nospam-373000        | jenkins | v1.33.1 | 19 Aug 24 04:16 PDT | 19 Aug 24 04:16 PDT |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-373000           |                      |         |         |                     |                     |
|         | stop                                                                     |                      |         |         |                     |                     |
| stop    | nospam-373000 --log_dir                                                  | nospam-373000        | jenkins | v1.33.1 | 19 Aug 24 04:16 PDT | 19 Aug 24 04:16 PDT |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-373000           |                      |         |         |                     |                     |
|         | stop                                                                     |                      |         |         |                     |                     |
| delete  | -p nospam-373000                                                         | nospam-373000        | jenkins | v1.33.1 | 19 Aug 24 04:16 PDT | 19 Aug 24 04:16 PDT |
| start   | -p functional-916000                                                     | functional-916000    | jenkins | v1.33.1 | 19 Aug 24 04:16 PDT |                     |
|         | --memory=4000                                                            |                      |         |         |                     |                     |
|         | --apiserver-port=8441                                                    |                      |         |         |                     |                     |
|         | --wait=all --driver=qemu2                                                |                      |         |         |                     |                     |
| start   | -p functional-916000                                                     | functional-916000    | jenkins | v1.33.1 | 19 Aug 24 04:16 PDT |                     |
|         | --alsologtostderr -v=8                                                   |                      |         |         |                     |                     |
| cache   | functional-916000 cache add                                              | functional-916000    | jenkins | v1.33.1 | 19 Aug 24 04:16 PDT | 19 Aug 24 04:17 PDT |
|         | registry.k8s.io/pause:3.1                                                |                      |         |         |                     |                     |
| cache   | functional-916000 cache add                                              | functional-916000    | jenkins | v1.33.1 | 19 Aug 24 04:17 PDT | 19 Aug 24 04:17 PDT |
|         | registry.k8s.io/pause:3.3                                                |                      |         |         |                     |                     |
| cache   | functional-916000 cache add                                              | functional-916000    | jenkins | v1.33.1 | 19 Aug 24 04:17 PDT | 19 Aug 24 04:17 PDT |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| cache   | functional-916000 cache add                                              | functional-916000    | jenkins | v1.33.1 | 19 Aug 24 04:17 PDT | 19 Aug 24 04:17 PDT |
|         | minikube-local-cache-test:functional-916000                              |                      |         |         |                     |                     |
| cache   | functional-916000 cache delete                                           | functional-916000    | jenkins | v1.33.1 | 19 Aug 24 04:17 PDT | 19 Aug 24 04:17 PDT |
|         | minikube-local-cache-test:functional-916000                              |                      |         |         |                     |                     |
| cache   | delete                                                                   | minikube             | jenkins | v1.33.1 | 19 Aug 24 04:17 PDT | 19 Aug 24 04:17 PDT |
|         | registry.k8s.io/pause:3.3                                                |                      |         |         |                     |                     |
| cache   | list                                                                     | minikube             | jenkins | v1.33.1 | 19 Aug 24 04:17 PDT | 19 Aug 24 04:17 PDT |
| ssh     | functional-916000 ssh sudo                                               | functional-916000    | jenkins | v1.33.1 | 19 Aug 24 04:17 PDT |                     |
|         | crictl images                                                            |                      |         |         |                     |                     |
| ssh     | functional-916000                                                        | functional-916000    | jenkins | v1.33.1 | 19 Aug 24 04:17 PDT |                     |
|         | ssh sudo docker rmi                                                      |                      |         |         |                     |                     |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| ssh     | functional-916000 ssh                                                    | functional-916000    | jenkins | v1.33.1 | 19 Aug 24 04:17 PDT |                     |
|         | sudo crictl inspecti                                                     |                      |         |         |                     |                     |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| cache   | functional-916000 cache reload                                           | functional-916000    | jenkins | v1.33.1 | 19 Aug 24 04:17 PDT | 19 Aug 24 04:17 PDT |
| ssh     | functional-916000 ssh                                                    | functional-916000    | jenkins | v1.33.1 | 19 Aug 24 04:17 PDT |                     |
|         | sudo crictl inspecti                                                     |                      |         |         |                     |                     |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| cache   | delete                                                                   | minikube             | jenkins | v1.33.1 | 19 Aug 24 04:17 PDT | 19 Aug 24 04:17 PDT |
|         | registry.k8s.io/pause:3.1                                                |                      |         |         |                     |                     |
| cache   | delete                                                                   | minikube             | jenkins | v1.33.1 | 19 Aug 24 04:17 PDT | 19 Aug 24 04:17 PDT |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| kubectl | functional-916000 kubectl --                                             | functional-916000    | jenkins | v1.33.1 | 19 Aug 24 04:17 PDT |                     |
|         | --context functional-916000                                              |                      |         |         |                     |                     |
|         | get pods                                                                 |                      |         |         |                     |                     |
| start   | -p functional-916000                                                     | functional-916000    | jenkins | v1.33.1 | 19 Aug 24 04:17 PDT |                     |
|         | --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision |                      |         |         |                     |                     |
|         | --wait=all                                                               |                      |         |         |                     |                     |
|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|

==> Last Start <==
Log file created at: 2024/08/19 04:17:04
Running on machine: MacOS-M1-Agent-1
Binary: Built with gc go1.22.5 for darwin/arm64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I0819 04:17:04.449632   16577 out.go:345] Setting OutFile to fd 1 ...
I0819 04:17:04.449752   16577 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0819 04:17:04.449753   16577 out.go:358] Setting ErrFile to fd 2...
I0819 04:17:04.449755   16577 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0819 04:17:04.449867   16577 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19479-15750/.minikube/bin
I0819 04:17:04.450884   16577 out.go:352] Setting JSON to false
I0819 04:17:04.466754   16577 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":8192,"bootTime":1724058032,"procs":456,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
W0819 04:17:04.466812   16577 start.go:137] gopshost.Virtualization returned error: not implemented yet
I0819 04:17:04.471335   16577 out.go:177] * [functional-916000] minikube v1.33.1 on Darwin 14.5 (arm64)
I0819 04:17:04.476243   16577 out.go:177]   - MINIKUBE_LOCATION=19479
I0819 04:17:04.476282   16577 notify.go:220] Checking for updates...
I0819 04:17:04.485275   16577 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19479-15750/kubeconfig
I0819 04:17:04.489255   16577 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
I0819 04:17:04.492244   16577 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I0819 04:17:04.495351   16577 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19479-15750/.minikube
I0819 04:17:04.498253   16577 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
I0819 04:17:04.501587   16577 config.go:182] Loaded profile config "functional-916000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0819 04:17:04.501641   16577 driver.go:392] Setting default libvirt URI to qemu:///system
I0819 04:17:04.506288   16577 out.go:177] * Using the qemu2 driver based on existing profile
I0819 04:17:04.513331   16577 start.go:297] selected driver: qemu2
I0819 04:17:04.513336   16577 start.go:901] validating driver "qemu2" against &{Name:functional-916000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:functional-916000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0819 04:17:04.513408   16577 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I0819 04:17:04.515748   16577 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I0819 04:17:04.515769   16577 cni.go:84] Creating CNI manager for ""
I0819 04:17:04.515783   16577 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
I0819 04:17:04.515828   16577 start.go:340] cluster config:
{Name:functional-916000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:functional-916000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0819 04:17:04.519429   16577 iso.go:125] acquiring lock: {Name:mk82d926f4bb778c7e93ba3bc4244459b219e238 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0819 04:17:04.527231   16577 out.go:177] * Starting "functional-916000" primary control-plane node in "functional-916000" cluster
I0819 04:17:04.531283   16577 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
I0819 04:17:04.531299   16577 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19479-15750/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
I0819 04:17:04.531307   16577 cache.go:56] Caching tarball of preloaded images
I0819 04:17:04.531377   16577 preload.go:172] Found /Users/jenkins/minikube-integration/19479-15750/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
I0819 04:17:04.531381   16577 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
I0819 04:17:04.531440   16577 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19479-15750/.minikube/profiles/functional-916000/config.json ...
I0819 04:17:04.531891   16577 start.go:360] acquireMachinesLock for functional-916000: {Name:mkd71805c0324a991576936a19be749d2702d472 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0819 04:17:04.531925   16577 start.go:364] duration metric: took 29µs to acquireMachinesLock for "functional-916000"
I0819 04:17:04.531933   16577 start.go:96] Skipping create...Using existing machine configuration
I0819 04:17:04.531939   16577 fix.go:54] fixHost starting: 
I0819 04:17:04.532065   16577 fix.go:112] recreateIfNeeded on functional-916000: state=Stopped err=<nil>
W0819 04:17:04.532071   16577 fix.go:138] unexpected machine state, will restart: <nil>
I0819 04:17:04.539253   16577 out.go:177] * Restarting existing qemu2 VM for "functional-916000" ...
I0819 04:17:04.543099   16577 qemu.go:418] Using hvf for hardware acceleration
I0819 04:17:04.543138   16577 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/functional-916000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19479-15750/.minikube/machines/functional-916000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/functional-916000/qemu.pid -device virtio-net-pci,netdev=net0,mac=3e:e7:fe:ee:76:d1 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/functional-916000/disk.qcow2
I0819 04:17:04.545219   16577 main.go:141] libmachine: STDOUT: 
I0819 04:17:04.545237   16577 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused

I0819 04:17:04.545266   16577 fix.go:56] duration metric: took 13.328042ms for fixHost
I0819 04:17:04.545269   16577 start.go:83] releasing machines lock for "functional-916000", held for 13.342ms
W0819 04:17:04.545275   16577 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
W0819 04:17:04.545300   16577 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
I0819 04:17:04.545304   16577 start.go:729] Will try again in 5 seconds ...
I0819 04:17:09.547358   16577 start.go:360] acquireMachinesLock for functional-916000: {Name:mkd71805c0324a991576936a19be749d2702d472 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0819 04:17:09.547704   16577 start.go:364] duration metric: took 289.167µs to acquireMachinesLock for "functional-916000"
I0819 04:17:09.547849   16577 start.go:96] Skipping create...Using existing machine configuration
I0819 04:17:09.547863   16577 fix.go:54] fixHost starting: 
I0819 04:17:09.548634   16577 fix.go:112] recreateIfNeeded on functional-916000: state=Stopped err=<nil>
W0819 04:17:09.548654   16577 fix.go:138] unexpected machine state, will restart: <nil>
I0819 04:17:09.553013   16577 out.go:177] * Restarting existing qemu2 VM for "functional-916000" ...
I0819 04:17:09.557057   16577 qemu.go:418] Using hvf for hardware acceleration
I0819 04:17:09.557365   16577 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/functional-916000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19479-15750/.minikube/machines/functional-916000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/functional-916000/qemu.pid -device virtio-net-pci,netdev=net0,mac=3e:e7:fe:ee:76:d1 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/functional-916000/disk.qcow2
I0819 04:17:09.566359   16577 main.go:141] libmachine: STDOUT: 
I0819 04:17:09.566404   16577 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused

I0819 04:17:09.566493   16577 fix.go:56] duration metric: took 18.634875ms for fixHost
I0819 04:17:09.566508   16577 start.go:83] releasing machines lock for "functional-916000", held for 18.791292ms
W0819 04:17:09.566710   16577 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p functional-916000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
I0819 04:17:09.575012   16577 out.go:201] 
W0819 04:17:09.579238   16577 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
W0819 04:17:09.579280   16577 out.go:270] * 
W0819 04:17:09.582814   16577 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                             │
│    * If the above advice does not help, please let us know:                                 │
│      https://github.com/kubernetes/minikube/issues/new/choose                               │
│                                                                                             │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
│                                                                                             │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
I0819 04:17:09.592016   16577 out.go:201] 

* The control-plane node functional-916000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-916000"
***
--- FAIL: TestFunctional/serial/LogsCmd (0.08s)
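
Both logs tests fail for the same underlying reason as every start above: libmachine cannot reach the socket_vmnet control socket ("Failed to connect to \"/var/run/socket_vmnet\": Connection refused"), so the qemu2 VM never boots and there is no Linux guest for minikube logs to read. Below is a minimal, hypothetical Go sketch of a pre-flight probe for the CI host; it is not part of minikube or its test suite, and the socket path is the SocketVMnetPath value from the cluster config dumped above.

// probe_socket_vmnet.go: hypothetical pre-flight check for the CI host.
// Verifies that the socket_vmnet daemon is listening on the unix socket
// the qemu2 driver dials before any VM is started.
package main

import (
	"fmt"
	"net"
	"os"
	"time"
)

func main() {
	// SocketVMnetPath from the cluster config logged above.
	const path = "/var/run/socket_vmnet"

	// A missing socket file means the daemon was never started; a
	// "connection refused" on an existing file means it has died.
	if _, err := os.Stat(path); err != nil {
		fmt.Fprintf(os.Stderr, "socket missing: %v\n", err)
		os.Exit(1)
	}
	conn, err := net.DialTimeout("unix", path, 2*time.Second)
	if err != nil {
		fmt.Fprintf(os.Stderr, "dial failed (daemon down?): %v\n", err)
		os.Exit(1)
	}
	conn.Close()
	fmt.Println("socket_vmnet is accepting connections")
}

If such a probe fails, restarting the daemon on the agent (for a Homebrew install of socket_vmnet, sudo brew services start socket_vmnet) before rerunning the suite would distinguish this environment problem from a genuine product regression.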

TestFunctional/serial/LogsFileCmd (0.08s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1250: (dbg) Run:  out/minikube-darwin-arm64 -p functional-916000 logs --file /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalserialLogsFileCmd380788315/001/logs.txt
functional_test.go:1228: expected minikube logs to include word: -"Linux"- but got 
***

==> Last Start <==
Log file created at: 2024/08/19 04:17:04
Running on machine: MacOS-M1-Agent-1
Binary: Built with gc go1.22.5 for darwin/arm64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I0819 04:17:04.449632   16577 out.go:345] Setting OutFile to fd 1 ...
I0819 04:17:04.449752   16577 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0819 04:17:04.449753   16577 out.go:358] Setting ErrFile to fd 2...
I0819 04:17:04.449755   16577 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0819 04:17:04.449867   16577 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19479-15750/.minikube/bin
I0819 04:17:04.450884   16577 out.go:352] Setting JSON to false
I0819 04:17:04.466754   16577 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":8192,"bootTime":1724058032,"procs":456,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
W0819 04:17:04.466812   16577 start.go:137] gopshost.Virtualization returned error: not implemented yet
I0819 04:17:04.471335   16577 out.go:177] * [functional-916000] minikube v1.33.1 on Darwin 14.5 (arm64)
I0819 04:17:04.476243   16577 out.go:177]   - MINIKUBE_LOCATION=19479
I0819 04:17:04.476282   16577 notify.go:220] Checking for updates...
I0819 04:17:04.485275   16577 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19479-15750/kubeconfig
I0819 04:17:04.489255   16577 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
I0819 04:17:04.492244   16577 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I0819 04:17:04.495351   16577 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19479-15750/.minikube
I0819 04:17:04.498253   16577 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
I0819 04:17:04.501587   16577 config.go:182] Loaded profile config "functional-916000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0819 04:17:04.501641   16577 driver.go:392] Setting default libvirt URI to qemu:///system
I0819 04:17:04.506288   16577 out.go:177] * Using the qemu2 driver based on existing profile
I0819 04:17:04.513331   16577 start.go:297] selected driver: qemu2
I0819 04:17:04.513336   16577 start.go:901] validating driver "qemu2" against &{Name:functional-916000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:functional-916000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0819 04:17:04.513408   16577 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I0819 04:17:04.515748   16577 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I0819 04:17:04.515769   16577 cni.go:84] Creating CNI manager for ""
I0819 04:17:04.515783   16577 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
I0819 04:17:04.515828   16577 start.go:340] cluster config:
{Name:functional-916000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:functional-916000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
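Note: the only material difference between the cluster config above and the profile validated earlier is ExtraOptions, which carries the admission-plugin setting exercised by the earlier ExtraConfig test. As a sketch (the flag form is an assumption based on minikube's --extra-config convention, not read from this log), that entry corresponds to a start invocation like:

    # Hypothetical reconstruction: --extra-config forwards a component flag, producing
    # ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}].
    out/minikube-darwin-arm64 start -p functional-916000 \
      --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision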
I0819 04:17:04.519429   16577 iso.go:125] acquiring lock: {Name:mk82d926f4bb778c7e93ba3bc4244459b219e238 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0819 04:17:04.527231   16577 out.go:177] * Starting "functional-916000" primary control-plane node in "functional-916000" cluster
I0819 04:17:04.531283   16577 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
I0819 04:17:04.531299   16577 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19479-15750/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
I0819 04:17:04.531307   16577 cache.go:56] Caching tarball of preloaded images
I0819 04:17:04.531377   16577 preload.go:172] Found /Users/jenkins/minikube-integration/19479-15750/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
I0819 04:17:04.531381   16577 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
I0819 04:17:04.531440   16577 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19479-15750/.minikube/profiles/functional-916000/config.json ...
I0819 04:17:04.531891   16577 start.go:360] acquireMachinesLock for functional-916000: {Name:mkd71805c0324a991576936a19be749d2702d472 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0819 04:17:04.531925   16577 start.go:364] duration metric: took 29µs to acquireMachinesLock for "functional-916000"
I0819 04:17:04.531933   16577 start.go:96] Skipping create...Using existing machine configuration
I0819 04:17:04.531939   16577 fix.go:54] fixHost starting: 
I0819 04:17:04.532065   16577 fix.go:112] recreateIfNeeded on functional-916000: state=Stopped err=<nil>
W0819 04:17:04.532071   16577 fix.go:138] unexpected machine state, will restart: <nil>
I0819 04:17:04.539253   16577 out.go:177] * Restarting existing qemu2 VM for "functional-916000" ...
I0819 04:17:04.543099   16577 qemu.go:418] Using hvf for hardware acceleration
I0819 04:17:04.543138   16577 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/functional-916000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19479-15750/.minikube/machines/functional-916000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/functional-916000/qemu.pid -device virtio-net-pci,netdev=net0,mac=3e:e7:fe:ee:76:d1 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/functional-916000/disk.qcow2
I0819 04:17:04.545219   16577 main.go:141] libmachine: STDOUT: 
I0819 04:17:04.545237   16577 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
I0819 04:17:04.545266   16577 fix.go:56] duration metric: took 13.328042ms for fixHost
I0819 04:17:04.545269   16577 start.go:83] releasing machines lock for "functional-916000", held for 13.342ms
W0819 04:17:04.545275   16577 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
W0819 04:17:04.545300   16577 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
I0819 04:17:04.545304   16577 start.go:729] Will try again in 5 seconds ...
I0819 04:17:09.547358   16577 start.go:360] acquireMachinesLock for functional-916000: {Name:mkd71805c0324a991576936a19be749d2702d472 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0819 04:17:09.547704   16577 start.go:364] duration metric: took 289.167µs to acquireMachinesLock for "functional-916000"
I0819 04:17:09.547849   16577 start.go:96] Skipping create...Using existing machine configuration
I0819 04:17:09.547863   16577 fix.go:54] fixHost starting: 
I0819 04:17:09.548634   16577 fix.go:112] recreateIfNeeded on functional-916000: state=Stopped err=<nil>
W0819 04:17:09.548654   16577 fix.go:138] unexpected machine state, will restart: <nil>
I0819 04:17:09.553013   16577 out.go:177] * Restarting existing qemu2 VM for "functional-916000" ...
I0819 04:17:09.557057   16577 qemu.go:418] Using hvf for hardware acceleration
I0819 04:17:09.557365   16577 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/functional-916000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19479-15750/.minikube/machines/functional-916000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/functional-916000/qemu.pid -device virtio-net-pci,netdev=net0,mac=3e:e7:fe:ee:76:d1 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/functional-916000/disk.qcow2
I0819 04:17:09.566359   16577 main.go:141] libmachine: STDOUT: 
I0819 04:17:09.566404   16577 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
I0819 04:17:09.566493   16577 fix.go:56] duration metric: took 18.634875ms for fixHost
I0819 04:17:09.566508   16577 start.go:83] releasing machines lock for "functional-916000", held for 18.791292ms
W0819 04:17:09.566710   16577 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p functional-916000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
I0819 04:17:09.575012   16577 out.go:201] 
W0819 04:17:09.579238   16577 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
W0819 04:17:09.579280   16577 out.go:270] * 
W0819 04:17:09.582814   16577 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                             │
│    * If the above advice does not help, please let us know:                                 │
│      https://github.com/kubernetes/minikube/issues/new/choose                               │
│                                                                                             │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
│                                                                                             │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
I0819 04:17:09.592016   16577 out.go:201]
***
--- FAIL: TestFunctional/serial/LogsFileCmd (0.08s)
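Note: both restart attempts above die at the same step: socket_vmnet_client cannot dial the socket_vmnet daemon, so QEMU never receives its network file descriptor and the VM cannot boot. A minimal triage sketch, assuming socket_vmnet was installed via Homebrew as on these runners (the service name is an assumption, not read from this log):

    # Confirm the unix socket the qemu2 driver dials actually exists.
    ls -l /var/run/socket_vmnet

    # If the daemon is down, restart it; socket_vmnet must run as root.
    sudo brew services restart socket_vmnet

    # Then retry the profile.
    out/minikube-darwin-arm64 start -p functional-916000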
TestFunctional/serial/InvalidService (0.03s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2321: (dbg) Run:  kubectl --context functional-916000 apply -f testdata/invalidsvc.yaml
functional_test.go:2321: (dbg) Non-zero exit: kubectl --context functional-916000 apply -f testdata/invalidsvc.yaml: exit status 1 (27.992958ms)

** stderr **
	error: context "functional-916000" does not exist

** /stderr **
functional_test.go:2323: kubectl --context functional-916000 apply -f testdata/invalidsvc.yaml failed: exit status 1
--- FAIL: TestFunctional/serial/InvalidService (0.03s)
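Note: the context "functional-916000" does not exist errors are downstream of the start failure above: minikube writes a kubeconfig context only after the VM provisions, which never happened here. A quick confirmation sketch using stock kubectl and the KUBECONFIG path from the start logs:

    # List the contexts actually present in the test run's kubeconfig;
    # functional-916000 will be absent because provisioning aborted.
    KUBECONFIG=/Users/jenkins/minikube-integration/19479-15750/kubeconfig \
      kubectl config get-contexts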
TestFunctional/parallel/DashboardCmd (0.2s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:905: (dbg) daemon: [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-916000 --alsologtostderr -v=1]
functional_test.go:918: output didn't produce a URL
functional_test.go:910: (dbg) stopping [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-916000 --alsologtostderr -v=1] ...
functional_test.go:910: (dbg) [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-916000 --alsologtostderr -v=1] stdout:
functional_test.go:910: (dbg) [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-916000 --alsologtostderr -v=1] stderr:
I0819 04:17:49.628082   16907 out.go:345] Setting OutFile to fd 1 ...
I0819 04:17:49.628513   16907 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0819 04:17:49.628518   16907 out.go:358] Setting ErrFile to fd 2...
I0819 04:17:49.628520   16907 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0819 04:17:49.628676   16907 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19479-15750/.minikube/bin
I0819 04:17:49.628885   16907 mustload.go:65] Loading cluster: functional-916000
I0819 04:17:49.629098   16907 config.go:182] Loaded profile config "functional-916000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0819 04:17:49.632905   16907 out.go:177] * The control-plane node functional-916000 host is not running: state=Stopped
I0819 04:17:49.636921   16907 out.go:177]   To start a cluster, run: "minikube start -p functional-916000"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-916000 -n functional-916000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-916000 -n functional-916000: exit status 7 (41.378459ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-916000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/DashboardCmd (0.20s)
TestFunctional/parallel/StatusCmd (0.12s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:854: (dbg) Run:  out/minikube-darwin-arm64 -p functional-916000 status
functional_test.go:854: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-916000 status: exit status 7 (29.742291ms)

-- stdout --
	functional-916000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped

-- /stdout --
functional_test.go:856: failed to run minikube status. args "out/minikube-darwin-arm64 -p functional-916000 status" : exit status 7
functional_test.go:860: (dbg) Run:  out/minikube-darwin-arm64 -p functional-916000 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:860: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-916000 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}: exit status 7 (29.9925ms)

-- stdout --
	host:Stopped,kublet:Stopped,apiserver:Stopped,kubeconfig:Stopped

-- /stdout --
functional_test.go:862: failed to run minikube status with custom format: args "out/minikube-darwin-arm64 -p functional-916000 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}": exit status 7
functional_test.go:872: (dbg) Run:  out/minikube-darwin-arm64 -p functional-916000 status -o json
functional_test.go:872: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-916000 status -o json: exit status 7 (28.661791ms)

-- stdout --
	{"Name":"functional-916000","Host":"Stopped","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Stopped","Worker":false}

-- /stdout --
functional_test.go:874: failed to run minikube status with json output. args "out/minikube-darwin-arm64 -p functional-916000 status -o json" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-916000 -n functional-916000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-916000 -n functional-916000: exit status 7 (29.613542ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-916000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/StatusCmd (0.12s)
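Note: the three status invocations above return the same data in three encodings (plain table, Go template, JSON). For scripting against a run like this one, the JSON form composes cleanly; a sketch, assuming jq is available on the runner:

    # Prints "Stopped" for this run, matching the table and template output.
    out/minikube-darwin-arm64 -p functional-916000 status -o json | jq -r .Host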
TestFunctional/parallel/ServiceCmdConnect (0.13s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1627: (dbg) Run:  kubectl --context functional-916000 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1627: (dbg) Non-zero exit: kubectl --context functional-916000 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8: exit status 1 (26.284417ms)

** stderr **
	error: context "functional-916000" does not exist

** /stderr **
functional_test.go:1633: failed to create hello-node deployment with this command "kubectl --context functional-916000 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8": exit status 1.
functional_test.go:1598: service test failed - dumping debug information
functional_test.go:1599: -----------------------service failure post-mortem--------------------------------
functional_test.go:1602: (dbg) Run:  kubectl --context functional-916000 describe po hello-node-connect
functional_test.go:1602: (dbg) Non-zero exit: kubectl --context functional-916000 describe po hello-node-connect: exit status 1 (25.978542ms)

** stderr **
	Error in configuration: context was not found for specified context: functional-916000

** /stderr **
functional_test.go:1604: "kubectl --context functional-916000 describe po hello-node-connect" failed: exit status 1
functional_test.go:1606: hello-node pod describe:
functional_test.go:1608: (dbg) Run:  kubectl --context functional-916000 logs -l app=hello-node-connect
functional_test.go:1608: (dbg) Non-zero exit: kubectl --context functional-916000 logs -l app=hello-node-connect: exit status 1 (26.300166ms)

** stderr **
	Error in configuration: context was not found for specified context: functional-916000

** /stderr **
functional_test.go:1610: "kubectl --context functional-916000 logs -l app=hello-node-connect" failed: exit status 1
functional_test.go:1612: hello-node logs:
functional_test.go:1614: (dbg) Run:  kubectl --context functional-916000 describe svc hello-node-connect
functional_test.go:1614: (dbg) Non-zero exit: kubectl --context functional-916000 describe svc hello-node-connect: exit status 1 (25.484375ms)

** stderr **
	Error in configuration: context was not found for specified context: functional-916000

** /stderr **
functional_test.go:1616: "kubectl --context functional-916000 describe svc hello-node-connect" failed: exit status 1
functional_test.go:1618: hello-node svc describe:
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-916000 -n functional-916000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-916000 -n functional-916000: exit status 7 (29.302584ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-916000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/ServiceCmdConnect (0.13s)
TestFunctional/parallel/PersistentVolumeClaim (0.03s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:45: failed waiting for storage-provisioner: client config: context "functional-916000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-916000 -n functional-916000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-916000 -n functional-916000: exit status 7 (31.031125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-916000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/PersistentVolumeClaim (0.03s)
TestFunctional/parallel/SSHCmd (0.12s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1725: (dbg) Run:  out/minikube-darwin-arm64 -p functional-916000 ssh "echo hello"
functional_test.go:1725: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-916000 ssh "echo hello": exit status 83 (44.4685ms)

-- stdout --
	* The control-plane node functional-916000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-916000"

-- /stdout --
functional_test.go:1730: failed to run an ssh command. args "out/minikube-darwin-arm64 -p functional-916000 ssh \"echo hello\"" : exit status 83
functional_test.go:1734: expected minikube ssh command output to be -"hello"- but got *"* The control-plane node functional-916000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-916000\"\n"*. args "out/minikube-darwin-arm64 -p functional-916000 ssh \"echo hello\""
functional_test.go:1742: (dbg) Run:  out/minikube-darwin-arm64 -p functional-916000 ssh "cat /etc/hostname"
functional_test.go:1742: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-916000 ssh "cat /etc/hostname": exit status 83 (44.701625ms)

-- stdout --
	* The control-plane node functional-916000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-916000"

-- /stdout --
functional_test.go:1748: failed to run an ssh command. args "out/minikube-darwin-arm64 -p functional-916000 ssh \"cat /etc/hostname\"" : exit status 83
functional_test.go:1752: expected minikube ssh command output to be -"functional-916000"- but got *"* The control-plane node functional-916000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-916000\"\n"*. args "out/minikube-darwin-arm64 -p functional-916000 ssh \"cat /etc/hostname\""
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-916000 -n functional-916000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-916000 -n functional-916000: exit status 7 (32.339833ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-916000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/SSHCmd (0.12s)
TestFunctional/parallel/CpCmd (0.27s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-916000 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:556: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-916000 cp testdata/cp-test.txt /home/docker/cp-test.txt: exit status 83 (54.131791ms)

-- stdout --
	* The control-plane node functional-916000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-916000"

-- /stdout --
helpers_test.go:561: failed to run an cp command. args "out/minikube-darwin-arm64 -p functional-916000 cp testdata/cp-test.txt /home/docker/cp-test.txt" : exit status 83
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-916000 ssh -n functional-916000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-916000 ssh -n functional-916000 "sudo cat /home/docker/cp-test.txt": exit status 83 (41.096584ms)

-- stdout --
	* The control-plane node functional-916000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-916000"

-- /stdout --
helpers_test.go:539: failed to run an cp command. args "out/minikube-darwin-arm64 -p functional-916000 ssh -n functional-916000 \"sudo cat /home/docker/cp-test.txt\"" : exit status 83
helpers_test.go:573: /testdata/cp-test.txt content mismatch (-want +got):
strings.Join({
+ 	"* ",
	"T",
- 	"est file for checking file cp process",
+ 	"he control-plane node functional-916000 host is not running: sta",
+ 	"te=Stopped\n  To start a cluster, run: \"minikube start -p functio",
+ 	"nal-916000\"\n",
}, "")
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-916000 cp functional-916000:/home/docker/cp-test.txt /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelCpCmd525837181/001/cp-test.txt
helpers_test.go:556: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-916000 cp functional-916000:/home/docker/cp-test.txt /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelCpCmd525837181/001/cp-test.txt: exit status 83 (41.439084ms)

-- stdout --
	* The control-plane node functional-916000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-916000"

-- /stdout --
helpers_test.go:561: failed to run an cp command. args "out/minikube-darwin-arm64 -p functional-916000 cp functional-916000:/home/docker/cp-test.txt /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelCpCmd525837181/001/cp-test.txt" : exit status 83
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-916000 ssh -n functional-916000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-916000 ssh -n functional-916000 "sudo cat /home/docker/cp-test.txt": exit status 83 (43.060333ms)

-- stdout --
	* The control-plane node functional-916000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-916000"

-- /stdout --
helpers_test.go:539: failed to run an cp command. args "out/minikube-darwin-arm64 -p functional-916000 ssh -n functional-916000 \"sudo cat /home/docker/cp-test.txt\"" : exit status 83
helpers_test.go:528: failed to read test file 'testdata/cp-test.txt' : open /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelCpCmd525837181/001/cp-test.txt: no such file or directory
helpers_test.go:573: /testdata/cp-test.txt content mismatch (-want +got):
string(
- 	"* The control-plane node functional-916000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-916000\"\n",
+ 	"",
)
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-916000 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:556: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-916000 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt: exit status 83 (48.669209ms)

-- stdout --
	* The control-plane node functional-916000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-916000"

-- /stdout --
helpers_test.go:561: failed to run an cp command. args "out/minikube-darwin-arm64 -p functional-916000 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt" : exit status 83
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-916000 ssh -n functional-916000 "sudo cat /tmp/does/not/exist/cp-test.txt"
helpers_test.go:534: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-916000 ssh -n functional-916000 "sudo cat /tmp/does/not/exist/cp-test.txt": exit status 83 (41.792875ms)

-- stdout --
	* The control-plane node functional-916000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-916000"

-- /stdout --
helpers_test.go:539: failed to run an cp command. args "out/minikube-darwin-arm64 -p functional-916000 ssh -n functional-916000 \"sudo cat /tmp/does/not/exist/cp-test.txt\"" : exit status 83
helpers_test.go:573: /testdata/cp-test.txt content mismatch (-want +got):
strings.Join({
+ 	"* ",
	"T",
- 	"est file for checking file cp process",
+ 	"he control-plane node functional-916000 host is not running: sta",
+ 	"te=Stopped\n  To start a cluster, run: \"minikube start -p functio",
+ 	"nal-916000\"\n",
}, "")
--- FAIL: TestFunctional/parallel/CpCmd (0.27s)
TestFunctional/parallel/FileSync (0.07s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1929: Checking for existence of /etc/test/nested/copy/16240/hosts within VM
functional_test.go:1931: (dbg) Run:  out/minikube-darwin-arm64 -p functional-916000 ssh "sudo cat /etc/test/nested/copy/16240/hosts"
functional_test.go:1931: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-916000 ssh "sudo cat /etc/test/nested/copy/16240/hosts": exit status 83 (39.772625ms)

-- stdout --
	* The control-plane node functional-916000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-916000"

-- /stdout --
functional_test.go:1933: out/minikube-darwin-arm64 -p functional-916000 ssh "sudo cat /etc/test/nested/copy/16240/hosts" failed: exit status 83
functional_test.go:1936: file sync test content: * The control-plane node functional-916000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-916000"
functional_test.go:1946: /etc/sync.test content mismatch (-want +got):
strings.Join({
+ 	"* ",
	"T",
- 	"est file for checking file sync process",
+ 	"he control-plane node functional-916000 host is not running: sta",
+ 	"te=Stopped\n  To start a cluster, run: \"minikube start -p functio",
+ 	"nal-916000\"\n",
}, "")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-916000 -n functional-916000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-916000 -n functional-916000: exit status 7 (30.34325ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-916000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/FileSync (0.07s)
TestFunctional/parallel/CertSync (0.29s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1972: Checking for existence of /etc/ssl/certs/16240.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-darwin-arm64 -p functional-916000 ssh "sudo cat /etc/ssl/certs/16240.pem"
functional_test.go:1973: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-916000 ssh "sudo cat /etc/ssl/certs/16240.pem": exit status 83 (41.4455ms)

-- stdout --
	* The control-plane node functional-916000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-916000"

-- /stdout --
functional_test.go:1975: failed to check existence of "/etc/ssl/certs/16240.pem" inside minikube. args "out/minikube-darwin-arm64 -p functional-916000 ssh \"sudo cat /etc/ssl/certs/16240.pem\"": exit status 83
functional_test.go:1981: failed verify pem file. minikube_test.pem -> /etc/ssl/certs/16240.pem mismatch (-want +got):
(
	"""
- 	-----BEGIN CERTIFICATE-----
- 	MIIDsDCCApgCCQD5n0OIsOYIjDANBgkqhkiG9w0BAQsFADCBmTELMAkGA1UEBhMC
- 	VVMxEzARBgNVBAgMCkNhbGlmb3JuaWExFjAUBgNVBAcMDVNhbiBGcmFuY2lzY28x
- 	ETAPBgNVBAoMCG1pbmlrdWJlMRYwFAYDVQQLDA1QYXJ0eSBQYXJyb3RzMREwDwYD
- 	VQQDDAhtaW5pa3ViZTEfMB0GCSqGSIb3DQEJARYQbWluaWt1YmVAY25jZi5pbzAe
- 	Fw0yMDAzMDQyMTU2MjZaFw0yMTAzMDQyMTU2MjZaMIGZMQswCQYDVQQGEwJVUzET
- 	MBEGA1UECAwKQ2FsaWZvcm5pYTEWMBQGA1UEBwwNU2FuIEZyYW5jaXNjbzERMA8G
- 	A1UECgwIbWluaWt1YmUxFjAUBgNVBAsMDVBhcnR5IFBhcnJvdHMxETAPBgNVBAMM
- 	CG1pbmlrdWJlMR8wHQYJKoZIhvcNAQkBFhBtaW5pa3ViZUBjbmNmLmlvMIIBIjAN
- 	BgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA/qVMQ/234ul5yWI1yaHvV4pZ5Ffy
- 	M0bSMjzZUwlsvzerXzF3WrdpeZs5GzBNBWL/Db9KziGHCtfX9j5okJqPvB2lxdL5
- 	d5hFIYSORSemYLX2kdlnlykY5fzmFLKIUO9xXs0YNF4JUMEBgGK6n/BdLvXDUULZ
- 	26QOKs6+iH7TAL4RtozxQ8YXKQArdmpeAvxy2PSZGvVk1htKtyuKQsiFqH3oRleK
- 	3mljXfC5LsoIJHqd/8lAsckH87+IfwYnJ1CNJM2gueaCf+HmudVrvXfHaszh1Wh1
- 	9HKPE95Azi6CKoBGlRGFxt8UR72YIcTjC/lYxzbHeCpU7RCiXfsC0iMTlQIDAQAB
- 	MA0GCSqGSIb3DQEBCwUAA4IBAQBhsKnghyBki4NOnK5gHm7ow+7S+xvkjJhXBQ6i
- 	/xQD4/GCZ1tH5iFHXmo+bt4jB9hvKLyN5M5a8TlDwwqTLIoPDQJh37UpSCwbY/6z
- 	nE2aP3N2ue1/DeY60tgAh1c1uJDMeTiFbalJqSkneaHpNfvEQhUORFoN4yQSVEYg
- 	+T9mzTAWQ55TeBgbRevmA25nXHdPAae1MvJWDeG+aJfhq1I2hCwaitJ3iSwgn2ew
- 	637It/aBkMLhsCKTHxlXDGUX401ddbc0ZiC308cyMbis3iBeh4RBjkFxP8eIWFmK
- 	sos/dyqdua742L1cOKYFbLJfjA1VyxJQUxQvWKkbaq0xi7ao
- 	-----END CERTIFICATE-----
+ 	* The control-plane node functional-916000 host is not running: state=Stopped
+ 	  To start a cluster, run: "minikube start -p functional-916000"
	"""
)
functional_test.go:1972: Checking for existence of /usr/share/ca-certificates/16240.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-darwin-arm64 -p functional-916000 ssh "sudo cat /usr/share/ca-certificates/16240.pem"
functional_test.go:1973: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-916000 ssh "sudo cat /usr/share/ca-certificates/16240.pem": exit status 83 (39.574709ms)

-- stdout --
	* The control-plane node functional-916000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-916000"

-- /stdout --
functional_test.go:1975: failed to check existence of "/usr/share/ca-certificates/16240.pem" inside minikube. args "out/minikube-darwin-arm64 -p functional-916000 ssh \"sudo cat /usr/share/ca-certificates/16240.pem\"": exit status 83
functional_test.go:1981: failed verify pem file. minikube_test.pem -> /usr/share/ca-certificates/16240.pem mismatch (-want +got):
(
	"""
- 	-----BEGIN CERTIFICATE-----
- 	MIIDsDCCApgCCQD5n0OIsOYIjDANBgkqhkiG9w0BAQsFADCBmTELMAkGA1UEBhMC
- 	VVMxEzARBgNVBAgMCkNhbGlmb3JuaWExFjAUBgNVBAcMDVNhbiBGcmFuY2lzY28x
- 	ETAPBgNVBAoMCG1pbmlrdWJlMRYwFAYDVQQLDA1QYXJ0eSBQYXJyb3RzMREwDwYD
- 	VQQDDAhtaW5pa3ViZTEfMB0GCSqGSIb3DQEJARYQbWluaWt1YmVAY25jZi5pbzAe
- 	Fw0yMDAzMDQyMTU2MjZaFw0yMTAzMDQyMTU2MjZaMIGZMQswCQYDVQQGEwJVUzET
- 	MBEGA1UECAwKQ2FsaWZvcm5pYTEWMBQGA1UEBwwNU2FuIEZyYW5jaXNjbzERMA8G
- 	A1UECgwIbWluaWt1YmUxFjAUBgNVBAsMDVBhcnR5IFBhcnJvdHMxETAPBgNVBAMM
- 	CG1pbmlrdWJlMR8wHQYJKoZIhvcNAQkBFhBtaW5pa3ViZUBjbmNmLmlvMIIBIjAN
- 	BgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA/qVMQ/234ul5yWI1yaHvV4pZ5Ffy
- 	M0bSMjzZUwlsvzerXzF3WrdpeZs5GzBNBWL/Db9KziGHCtfX9j5okJqPvB2lxdL5
- 	d5hFIYSORSemYLX2kdlnlykY5fzmFLKIUO9xXs0YNF4JUMEBgGK6n/BdLvXDUULZ
- 	26QOKs6+iH7TAL4RtozxQ8YXKQArdmpeAvxy2PSZGvVk1htKtyuKQsiFqH3oRleK
- 	3mljXfC5LsoIJHqd/8lAsckH87+IfwYnJ1CNJM2gueaCf+HmudVrvXfHaszh1Wh1
- 	9HKPE95Azi6CKoBGlRGFxt8UR72YIcTjC/lYxzbHeCpU7RCiXfsC0iMTlQIDAQAB
- 	MA0GCSqGSIb3DQEBCwUAA4IBAQBhsKnghyBki4NOnK5gHm7ow+7S+xvkjJhXBQ6i
- 	/xQD4/GCZ1tH5iFHXmo+bt4jB9hvKLyN5M5a8TlDwwqTLIoPDQJh37UpSCwbY/6z
- 	nE2aP3N2ue1/DeY60tgAh1c1uJDMeTiFbalJqSkneaHpNfvEQhUORFoN4yQSVEYg
- 	+T9mzTAWQ55TeBgbRevmA25nXHdPAae1MvJWDeG+aJfhq1I2hCwaitJ3iSwgn2ew
- 	637It/aBkMLhsCKTHxlXDGUX401ddbc0ZiC308cyMbis3iBeh4RBjkFxP8eIWFmK
- 	sos/dyqdua742L1cOKYFbLJfjA1VyxJQUxQvWKkbaq0xi7ao
- 	-----END CERTIFICATE-----
+ 	* The control-plane node functional-916000 host is not running: state=Stopped
+ 	  To start a cluster, run: "minikube start -p functional-916000"
	"""
)
functional_test.go:1972: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1973: (dbg) Run:  out/minikube-darwin-arm64 -p functional-916000 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1973: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-916000 ssh "sudo cat /etc/ssl/certs/51391683.0": exit status 83 (46.813792ms)

-- stdout --
	* The control-plane node functional-916000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-916000"

-- /stdout --
functional_test.go:1975: failed to check existence of "/etc/ssl/certs/51391683.0" inside minikube. args "out/minikube-darwin-arm64 -p functional-916000 ssh \"sudo cat /etc/ssl/certs/51391683.0\"": exit status 83
functional_test.go:1981: failed verify pem file. minikube_test.pem -> /etc/ssl/certs/51391683.0 mismatch (-want +got):
(
	"""
- 	-----BEGIN CERTIFICATE-----
- 	MIIDsDCCApgCCQD5n0OIsOYIjDANBgkqhkiG9w0BAQsFADCBmTELMAkGA1UEBhMC
- 	VVMxEzARBgNVBAgMCkNhbGlmb3JuaWExFjAUBgNVBAcMDVNhbiBGcmFuY2lzY28x
- 	ETAPBgNVBAoMCG1pbmlrdWJlMRYwFAYDVQQLDA1QYXJ0eSBQYXJyb3RzMREwDwYD
- 	VQQDDAhtaW5pa3ViZTEfMB0GCSqGSIb3DQEJARYQbWluaWt1YmVAY25jZi5pbzAe
- 	Fw0yMDAzMDQyMTU2MjZaFw0yMTAzMDQyMTU2MjZaMIGZMQswCQYDVQQGEwJVUzET
- 	MBEGA1UECAwKQ2FsaWZvcm5pYTEWMBQGA1UEBwwNU2FuIEZyYW5jaXNjbzERMA8G
- 	A1UECgwIbWluaWt1YmUxFjAUBgNVBAsMDVBhcnR5IFBhcnJvdHMxETAPBgNVBAMM
- 	CG1pbmlrdWJlMR8wHQYJKoZIhvcNAQkBFhBtaW5pa3ViZUBjbmNmLmlvMIIBIjAN
- 	BgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA/qVMQ/234ul5yWI1yaHvV4pZ5Ffy
- 	M0bSMjzZUwlsvzerXzF3WrdpeZs5GzBNBWL/Db9KziGHCtfX9j5okJqPvB2lxdL5
- 	d5hFIYSORSemYLX2kdlnlykY5fzmFLKIUO9xXs0YNF4JUMEBgGK6n/BdLvXDUULZ
- 	26QOKs6+iH7TAL4RtozxQ8YXKQArdmpeAvxy2PSZGvVk1htKtyuKQsiFqH3oRleK
- 	3mljXfC5LsoIJHqd/8lAsckH87+IfwYnJ1CNJM2gueaCf+HmudVrvXfHaszh1Wh1
- 	9HKPE95Azi6CKoBGlRGFxt8UR72YIcTjC/lYxzbHeCpU7RCiXfsC0iMTlQIDAQAB
- 	MA0GCSqGSIb3DQEBCwUAA4IBAQBhsKnghyBki4NOnK5gHm7ow+7S+xvkjJhXBQ6i
- 	/xQD4/GCZ1tH5iFHXmo+bt4jB9hvKLyN5M5a8TlDwwqTLIoPDQJh37UpSCwbY/6z
- 	nE2aP3N2ue1/DeY60tgAh1c1uJDMeTiFbalJqSkneaHpNfvEQhUORFoN4yQSVEYg
- 	+T9mzTAWQ55TeBgbRevmA25nXHdPAae1MvJWDeG+aJfhq1I2hCwaitJ3iSwgn2ew
- 	637It/aBkMLhsCKTHxlXDGUX401ddbc0ZiC308cyMbis3iBeh4RBjkFxP8eIWFmK
- 	sos/dyqdua742L1cOKYFbLJfjA1VyxJQUxQvWKkbaq0xi7ao
- 	-----END CERTIFICATE-----
+ 	* The control-plane node functional-916000 host is not running: state=Stopped
+ 	  To start a cluster, run: "minikube start -p functional-916000"
	"""
)
functional_test.go:1999: Checking for existence of /etc/ssl/certs/162402.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-darwin-arm64 -p functional-916000 ssh "sudo cat /etc/ssl/certs/162402.pem"
functional_test.go:2000: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-916000 ssh "sudo cat /etc/ssl/certs/162402.pem": exit status 83 (42.533083ms)

-- stdout --
	* The control-plane node functional-916000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-916000"

-- /stdout --
functional_test.go:2002: failed to check existence of "/etc/ssl/certs/162402.pem" inside minikube. args "out/minikube-darwin-arm64 -p functional-916000 ssh \"sudo cat /etc/ssl/certs/162402.pem\"": exit status 83
functional_test.go:2008: failed verify pem file. minikube_test2.pem -> /etc/ssl/certs/162402.pem mismatch (-want +got):
(
	"""
- 	-----BEGIN CERTIFICATE-----
- 	MIIEwDCCAqgCCQCUeXrVemI4eTANBgkqhkiG9w0BAQsFADAiMQswCQYDVQQGEwJV
- 	UzETMBEGA1UECAwKQ2FsaWZvcm5pYTAeFw0yMTA3MjEyMDM4MDdaFw0yMTA4MjAy
- 	MDM4MDdaMCIxCzAJBgNVBAYTAlVTMRMwEQYDVQQIDApDYWxpZm9ybmlhMIICIjAN
- 	BgkqhkiG9w0BAQEFAAOCAg8AMIICCgKCAgEAu1+sCiosrYIy83a+KtPdoGsKun+O
- 	jKhETWQrum5FGxqKyQzhHN8q6iZBI94m1sZb2xIJDcKaAsHpKf1z/5pkrWQW4JeL
- 	riqJ5U0kENy6mWGoFzPLQHkKMXSYUEkRjrgJphh5zLkWDzx6uwxHIrs5yaIwn71Z
- 	enYezHL8NyMd10+up1lNvH+xGqnSJ3ge7Rj+r5XbA3avvaJ9AVdiJMU5ArMw4Snu
- 	dLDSUueWFedmXoQud083EvDbMY61blNKUR6BKBJLPopH+9NnUP7FRU25lvTsA8qT
- 	zz/KertMFhCfwvK4y7a7+GbIPD3fItpm23GORuViEGgeRMH7GGSNVfm4VtK5mI4c
- 	XK9BrCE+FXowHOWU3MTH1qsvTigd3JZc8BKTaePJGMIDT1HIAaoK7SOoZOLka7bi
- 	IiLaojrIWY6tE5IIl46cQWLg09G+kjKVfEEvEHNe0t22I9etGeUUEwlcMITHmEdE
- 	WrXytC8dSNFObSG5B2MX2Ovm+yNblcK7TI7lW/tkbxtKs56J1mNmi4LXXBM8FdF8
- 	w9MpJc+ySVj2+f+eBE08amKjC9VVbBzNSw7MRaI9fFY5AAifJ8g55F3/KCNq5aTd
- 	rBADtAa5kQkTVjfMBhG++0Ow4f55hm73oJAy/qxb09OY7Vk9ky/K3l8GfWv8ozIF
- 	w+Oq6vdsspvtVJ8CAwEAATANBgkqhkiG9w0BAQsFAAOCAgEAGKVxsf13kYGaQJ+J
- 	6eyRZXlV5Bp+9EGtMPGsuVv2HJa4oxMBn7Xc/bUhjY9Is/ZwfOpPUPO/nQtSSPmO
- 	aozQj/27p8HDTW201fwLNiZMcppBdJvIQdDzCh6e2ikg3lqsw2BoLX1vbgc9HPml
- 	P8QCHEz2lricGdTuMRtBgH5x/ZkZGLbADQBeyoPTsPaQceRt5hPYXWifqiHhcJoL
- 	2T+XgbaHJ4lEhCU0IXJG0vlLuAyxQzO3gMeHK8BlLt/h/JCiDndo63a4XCkenmY8
- 	8/6Y9Lgh+O3954YgwdXBIS33CzhY7c+tfpag1hwpDHro/zsyLwdN2JxZqWymfg8T
- 	RyIeJ5VpY+CGm1fVTx84twQbiM241eZDYaW6Ap5/ZuxPbzY++KDMZHMuJYURorAU
- 	JE/SE6WltXpKTvtzyPNKt8fgPQmFzlBosDZCf/5EiyVqbLzideF1l+Rd//5gRBg0
- 	B63fOlCdxGtDeX1gKzekV4sXQIsWKW1BSCEInJt5lOS8Ex4JTXy8crwKo7hv0zPc
- 	sOjAXbtDYlPf/jPFGKzkgFACB87Bx4ZUZMp/ShOzjMt20MLGLJvCGCQIHetUz+GG
- 	/LTwSNhgWCheYkbDwXhCmqbN249xE3fNHC6zQ/IMJ30v/UWN4RldGfFzmoVBRUUX
- 	eQ7g5kXER7H/Lh/2V9FyaRhPpZM=
- 	-----END CERTIFICATE-----
+ 	* The control-plane node functional-916000 host is not running: state=Stopped
+ 	  To start a cluster, run: "minikube start -p functional-916000"
	"""
)
functional_test.go:1999: Checking for existence of /usr/share/ca-certificates/162402.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-darwin-arm64 -p functional-916000 ssh "sudo cat /usr/share/ca-certificates/162402.pem"
functional_test.go:2000: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-916000 ssh "sudo cat /usr/share/ca-certificates/162402.pem": exit status 83 (39.809416ms)

-- stdout --
	* The control-plane node functional-916000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-916000"

-- /stdout --
functional_test.go:2002: failed to check existence of "/usr/share/ca-certificates/162402.pem" inside minikube. args "out/minikube-darwin-arm64 -p functional-916000 ssh \"sudo cat /usr/share/ca-certificates/162402.pem\"": exit status 83
functional_test.go:2008: failed verify pem file. minikube_test2.pem -> /usr/share/ca-certificates/162402.pem mismatch (-want +got):
(
	"""
- 	-----BEGIN CERTIFICATE-----
- 	MIIEwDCCAqgCCQCUeXrVemI4eTANBgkqhkiG9w0BAQsFADAiMQswCQYDVQQGEwJV
- 	UzETMBEGA1UECAwKQ2FsaWZvcm5pYTAeFw0yMTA3MjEyMDM4MDdaFw0yMTA4MjAy
- 	MDM4MDdaMCIxCzAJBgNVBAYTAlVTMRMwEQYDVQQIDApDYWxpZm9ybmlhMIICIjAN
- 	BgkqhkiG9w0BAQEFAAOCAg8AMIICCgKCAgEAu1+sCiosrYIy83a+KtPdoGsKun+O
- 	jKhETWQrum5FGxqKyQzhHN8q6iZBI94m1sZb2xIJDcKaAsHpKf1z/5pkrWQW4JeL
- 	riqJ5U0kENy6mWGoFzPLQHkKMXSYUEkRjrgJphh5zLkWDzx6uwxHIrs5yaIwn71Z
- 	enYezHL8NyMd10+up1lNvH+xGqnSJ3ge7Rj+r5XbA3avvaJ9AVdiJMU5ArMw4Snu
- 	dLDSUueWFedmXoQud083EvDbMY61blNKUR6BKBJLPopH+9NnUP7FRU25lvTsA8qT
- 	zz/KertMFhCfwvK4y7a7+GbIPD3fItpm23GORuViEGgeRMH7GGSNVfm4VtK5mI4c
- 	XK9BrCE+FXowHOWU3MTH1qsvTigd3JZc8BKTaePJGMIDT1HIAaoK7SOoZOLka7bi
- 	IiLaojrIWY6tE5IIl46cQWLg09G+kjKVfEEvEHNe0t22I9etGeUUEwlcMITHmEdE
- 	WrXytC8dSNFObSG5B2MX2Ovm+yNblcK7TI7lW/tkbxtKs56J1mNmi4LXXBM8FdF8
- 	w9MpJc+ySVj2+f+eBE08amKjC9VVbBzNSw7MRaI9fFY5AAifJ8g55F3/KCNq5aTd
- 	rBADtAa5kQkTVjfMBhG++0Ow4f55hm73oJAy/qxb09OY7Vk9ky/K3l8GfWv8ozIF
- 	w+Oq6vdsspvtVJ8CAwEAATANBgkqhkiG9w0BAQsFAAOCAgEAGKVxsf13kYGaQJ+J
- 	6eyRZXlV5Bp+9EGtMPGsuVv2HJa4oxMBn7Xc/bUhjY9Is/ZwfOpPUPO/nQtSSPmO
- 	aozQj/27p8HDTW201fwLNiZMcppBdJvIQdDzCh6e2ikg3lqsw2BoLX1vbgc9HPml
- 	P8QCHEz2lricGdTuMRtBgH5x/ZkZGLbADQBeyoPTsPaQceRt5hPYXWifqiHhcJoL
- 	2T+XgbaHJ4lEhCU0IXJG0vlLuAyxQzO3gMeHK8BlLt/h/JCiDndo63a4XCkenmY8
- 	8/6Y9Lgh+O3954YgwdXBIS33CzhY7c+tfpag1hwpDHro/zsyLwdN2JxZqWymfg8T
- 	RyIeJ5VpY+CGm1fVTx84twQbiM241eZDYaW6Ap5/ZuxPbzY++KDMZHMuJYURorAU
- 	JE/SE6WltXpKTvtzyPNKt8fgPQmFzlBosDZCf/5EiyVqbLzideF1l+Rd//5gRBg0
- 	B63fOlCdxGtDeX1gKzekV4sXQIsWKW1BSCEInJt5lOS8Ex4JTXy8crwKo7hv0zPc
- 	sOjAXbtDYlPf/jPFGKzkgFACB87Bx4ZUZMp/ShOzjMt20MLGLJvCGCQIHetUz+GG
- 	/LTwSNhgWCheYkbDwXhCmqbN249xE3fNHC6zQ/IMJ30v/UWN4RldGfFzmoVBRUUX
- 	eQ7g5kXER7H/Lh/2V9FyaRhPpZM=
- 	-----END CERTIFICATE-----
+ 	* The control-plane node functional-916000 host is not running: state=Stopped
+ 	  To start a cluster, run: "minikube start -p functional-916000"
	"""
)
functional_test.go:1999: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2000: (dbg) Run:  out/minikube-darwin-arm64 -p functional-916000 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
functional_test.go:2000: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-916000 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0": exit status 83 (43.690292ms)

-- stdout --
	* The control-plane node functional-916000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-916000"

-- /stdout --
functional_test.go:2002: failed to check existence of "/etc/ssl/certs/3ec20f2e.0" inside minikube. args "out/minikube-darwin-arm64 -p functional-916000 ssh \"sudo cat /etc/ssl/certs/3ec20f2e.0\"": exit status 83
functional_test.go:2008: failed verify pem file. minikube_test2.pem -> /etc/ssl/certs/3ec20f2e.0 mismatch (-want +got):
(
	"""
- 	-----BEGIN CERTIFICATE-----
- 	MIIEwDCCAqgCCQCUeXrVemI4eTANBgkqhkiG9w0BAQsFADAiMQswCQYDVQQGEwJV
- 	UzETMBEGA1UECAwKQ2FsaWZvcm5pYTAeFw0yMTA3MjEyMDM4MDdaFw0yMTA4MjAy
- 	MDM4MDdaMCIxCzAJBgNVBAYTAlVTMRMwEQYDVQQIDApDYWxpZm9ybmlhMIICIjAN
- 	BgkqhkiG9w0BAQEFAAOCAg8AMIICCgKCAgEAu1+sCiosrYIy83a+KtPdoGsKun+O
- 	jKhETWQrum5FGxqKyQzhHN8q6iZBI94m1sZb2xIJDcKaAsHpKf1z/5pkrWQW4JeL
- 	riqJ5U0kENy6mWGoFzPLQHkKMXSYUEkRjrgJphh5zLkWDzx6uwxHIrs5yaIwn71Z
- 	enYezHL8NyMd10+up1lNvH+xGqnSJ3ge7Rj+r5XbA3avvaJ9AVdiJMU5ArMw4Snu
- 	dLDSUueWFedmXoQud083EvDbMY61blNKUR6BKBJLPopH+9NnUP7FRU25lvTsA8qT
- 	zz/KertMFhCfwvK4y7a7+GbIPD3fItpm23GORuViEGgeRMH7GGSNVfm4VtK5mI4c
- 	XK9BrCE+FXowHOWU3MTH1qsvTigd3JZc8BKTaePJGMIDT1HIAaoK7SOoZOLka7bi
- 	IiLaojrIWY6tE5IIl46cQWLg09G+kjKVfEEvEHNe0t22I9etGeUUEwlcMITHmEdE
- 	WrXytC8dSNFObSG5B2MX2Ovm+yNblcK7TI7lW/tkbxtKs56J1mNmi4LXXBM8FdF8
- 	w9MpJc+ySVj2+f+eBE08amKjC9VVbBzNSw7MRaI9fFY5AAifJ8g55F3/KCNq5aTd
- 	rBADtAa5kQkTVjfMBhG++0Ow4f55hm73oJAy/qxb09OY7Vk9ky/K3l8GfWv8ozIF
- 	w+Oq6vdsspvtVJ8CAwEAATANBgkqhkiG9w0BAQsFAAOCAgEAGKVxsf13kYGaQJ+J
- 	6eyRZXlV5Bp+9EGtMPGsuVv2HJa4oxMBn7Xc/bUhjY9Is/ZwfOpPUPO/nQtSSPmO
- 	aozQj/27p8HDTW201fwLNiZMcppBdJvIQdDzCh6e2ikg3lqsw2BoLX1vbgc9HPml
- 	P8QCHEz2lricGdTuMRtBgH5x/ZkZGLbADQBeyoPTsPaQceRt5hPYXWifqiHhcJoL
- 	2T+XgbaHJ4lEhCU0IXJG0vlLuAyxQzO3gMeHK8BlLt/h/JCiDndo63a4XCkenmY8
- 	8/6Y9Lgh+O3954YgwdXBIS33CzhY7c+tfpag1hwpDHro/zsyLwdN2JxZqWymfg8T
- 	RyIeJ5VpY+CGm1fVTx84twQbiM241eZDYaW6Ap5/ZuxPbzY++KDMZHMuJYURorAU
- 	JE/SE6WltXpKTvtzyPNKt8fgPQmFzlBosDZCf/5EiyVqbLzideF1l+Rd//5gRBg0
- 	B63fOlCdxGtDeX1gKzekV4sXQIsWKW1BSCEInJt5lOS8Ex4JTXy8crwKo7hv0zPc
- 	sOjAXbtDYlPf/jPFGKzkgFACB87Bx4ZUZMp/ShOzjMt20MLGLJvCGCQIHetUz+GG
- 	/LTwSNhgWCheYkbDwXhCmqbN249xE3fNHC6zQ/IMJ30v/UWN4RldGfFzmoVBRUUX
- 	eQ7g5kXER7H/Lh/2V9FyaRhPpZM=
- 	-----END CERTIFICATE-----
+ 	* The control-plane node functional-916000 host is not running: state=Stopped
+ 	  To start a cluster, run: "minikube start -p functional-916000"
	"""
)
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-916000 -n functional-916000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-916000 -n functional-916000: exit status 7 (29.838333ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-916000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/CertSync (0.29s)
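Note: the .0 paths checked above are OpenSSL subject-hash names, which is why the test expects the same PEM body at both /etc/ssl/certs/16240.pem and /etc/ssl/certs/51391683.0 (and likewise for the second certificate). The expected hash names can be reproduced from the testdata certificates; a sketch, assuming the local file names minikube_test.pem and minikube_test2.pem used by the test:

    # Print the subject hash OpenSSL uses to name CA certs under /etc/ssl/certs.
    openssl x509 -noout -hash -in minikube_test.pem     # expected: 51391683
    openssl x509 -noout -hash -in minikube_test2.pem    # expected: 3ec20f2e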
TestFunctional/parallel/NodeLabels (0.06s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:219: (dbg) Run:  kubectl --context functional-916000 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
functional_test.go:219: (dbg) Non-zero exit: kubectl --context functional-916000 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'": exit status 1 (26.070584ms)

** stderr **
	Error in configuration: context was not found for specified context: functional-916000

** /stderr **
functional_test.go:221: failed to 'kubectl get nodes' with args "kubectl --context functional-916000 get nodes --output=go-template \"--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'\"": exit status 1
functional_test.go:227: expected to have label "minikube.k8s.io/commit" in node labels but got : 
** stderr ** 
	Error in configuration: context was not found for specified context: functional-916000

                                                
                                                
** /stderr **
functional_test.go:227: expected to have label "minikube.k8s.io/version" in node labels but got : 
** stderr ** 
	Error in configuration: context was not found for specified context: functional-916000

                                                
                                                
** /stderr **
functional_test.go:227: expected to have label "minikube.k8s.io/updated_at" in node labels but got : 
** stderr ** 
	Error in configuration: context was not found for specified context: functional-916000

                                                
                                                
** /stderr **
functional_test.go:227: expected to have label "minikube.k8s.io/name" in node labels but got : 
** stderr ** 
	Error in configuration: context was not found for specified context: functional-916000

                                                
                                                
** /stderr **
functional_test.go:227: expected to have label "minikube.k8s.io/primary" in node labels but got : 
** stderr ** 
	Error in configuration: context was not found for specified context: functional-916000

                                                
                                                
** /stderr **
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-916000 -n functional-916000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-916000 -n functional-916000: exit status 7 (30.245167ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-916000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/NodeLabels (0.06s)
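
Note: the go-template in this test prints only label keys: (index .items 0).metadata.labels selects the first node's label map, and range $k, $v := ... iterates over it. Against a running cluster the same query succeeds and includes the minikube.k8s.io/* keys the assertions look for; sample output below is abbreviated and illustrative.

	$ kubectl --context functional-916000 get nodes --output=go-template \
	    --template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'
	'kubernetes.io/arch kubernetes.io/hostname minikube.k8s.io/commit minikube.k8s.io/name minikube.k8s.io/primary minikube.k8s.io/updated_at minikube.k8s.io/version '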

                                                
                                    
TestFunctional/parallel/NonActiveRuntimeDisabled (0.04s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2027: (dbg) Run:  out/minikube-darwin-arm64 -p functional-916000 ssh "sudo systemctl is-active crio"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-916000 ssh "sudo systemctl is-active crio": exit status 83 (40.584458ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-916000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-916000"

                                                
                                                
-- /stdout --
functional_test.go:2030: output of 
-- stdout --
	* The control-plane node functional-916000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-916000"

                                                
                                                
-- /stdout --: exit status 83
functional_test.go:2033: For runtime "docker": expected "crio" to be inactive but got "* The control-plane node functional-916000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-916000\"\n" 
--- FAIL: TestFunctional/parallel/NonActiveRuntimeDisabled (0.04s)
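
Note: this test asserts that only the configured runtime is active. With ContainerRuntime=docker, "systemctl is-active crio" inside the guest should print "inactive" and exit non-zero, which the test accepts; exit status 83 instead means the CLI bailed out before reaching any guest. The expected shape on a running node:

	$ out/minikube-darwin-arm64 -p functional-916000 ssh "sudo systemctl is-active crio"
	inactive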

                                                
                                    
TestFunctional/parallel/Version/components (0.04s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2270: (dbg) Run:  out/minikube-darwin-arm64 -p functional-916000 version -o=json --components
functional_test.go:2270: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-916000 version -o=json --components: exit status 83 (42.863542ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-916000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-916000"

                                                
                                                
-- /stdout --
functional_test.go:2272: error version: exit status 83
functional_test.go:2277: expected to see "buildctl" in the minikube version --components but got:
* The control-plane node functional-916000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-916000"
functional_test.go:2277: expected to see "commit" in the minikube version --components but got:
* The control-plane node functional-916000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-916000"
functional_test.go:2277: expected to see "containerd" in the minikube version --components but got:
* The control-plane node functional-916000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-916000"
functional_test.go:2277: expected to see "crictl" in the minikube version --components but got:
* The control-plane node functional-916000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-916000"
functional_test.go:2277: expected to see "crio" in the minikube version --components but got:
* The control-plane node functional-916000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-916000"
functional_test.go:2277: expected to see "ctr" in the minikube version --components but got:
* The control-plane node functional-916000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-916000"
functional_test.go:2277: expected to see "docker" in the minikube version --components but got:
* The control-plane node functional-916000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-916000"
functional_test.go:2277: expected to see "minikubeVersion" in the minikube version --components but got:
* The control-plane node functional-916000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-916000"
functional_test.go:2277: expected to see "podman" in the minikube version --components but got:
* The control-plane node functional-916000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-916000"
functional_test.go:2277: expected to see "crun" in the minikube version --components but got:
* The control-plane node functional-916000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-916000"
--- FAIL: TestFunctional/parallel/Version/components (0.04s)
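
Note: version -o=json --components asks the guest for the version of every bundled binary, so on a working cluster the JSON carries one entry per component alongside minikubeVersion and commit; with the host stopped, the CLI exits 83 before any lookup runs, so every substring assertion fails at once. A sketch only, with field layout and version strings illustrative:

	$ out/minikube-darwin-arm64 -p functional-916000 version -o=json --components
	{"minikubeVersion":"v1.33.1","commit":"...","buildctl":"...","containerd":"...","crictl":"...","ctr":"...","docker":"...","podman":"...","crun":"..."}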

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListShort (0.04s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:261: (dbg) Run:  out/minikube-darwin-arm64 -p functional-916000 image ls --format short --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-916000 image ls --format short --alsologtostderr:

                                                
                                                
functional_test.go:269: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-916000 image ls --format short --alsologtostderr:
I0819 04:17:50.033758   16922 out.go:345] Setting OutFile to fd 1 ...
I0819 04:17:50.033918   16922 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0819 04:17:50.033922   16922 out.go:358] Setting ErrFile to fd 2...
I0819 04:17:50.033924   16922 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0819 04:17:50.034047   16922 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19479-15750/.minikube/bin
I0819 04:17:50.034462   16922 config.go:182] Loaded profile config "functional-916000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0819 04:17:50.034523   16922 config.go:182] Loaded profile config "functional-916000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
functional_test.go:275: expected registry.k8s.io/pause to be listed with minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageListShort (0.04s)
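
Note: a running node always holds at least the pause image, so image ls should never come back empty; this test and its three siblings below (table, json, yaml) are the same check rendered in different output formats, and all four fail on the same empty list. On a working cluster the short listing would contain at minimum something like (tag illustrative):

	$ out/minikube-darwin-arm64 -p functional-916000 image ls --format short
	registry.k8s.io/pause:3.10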

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListTable (0.04s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:261: (dbg) Run:  out/minikube-darwin-arm64 -p functional-916000 image ls --format table --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-916000 image ls --format table --alsologtostderr:
|-------|-----|----------|------|
| Image | Tag | Image ID | Size |
|-------|-----|----------|------|
|-------|-----|----------|------|
functional_test.go:269: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-916000 image ls --format table --alsologtostderr:
I0819 04:17:50.142872   16928 out.go:345] Setting OutFile to fd 1 ...
I0819 04:17:50.143015   16928 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0819 04:17:50.143018   16928 out.go:358] Setting ErrFile to fd 2...
I0819 04:17:50.143020   16928 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0819 04:17:50.143146   16928 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19479-15750/.minikube/bin
I0819 04:17:50.143540   16928 config.go:182] Loaded profile config "functional-916000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0819 04:17:50.143598   16928 config.go:182] Loaded profile config "functional-916000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
functional_test.go:275: expected | registry.k8s.io/pause to be listed with minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageListTable (0.04s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListJson (0.04s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:261: (dbg) Run:  out/minikube-darwin-arm64 -p functional-916000 image ls --format json --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-916000 image ls --format json --alsologtostderr:
[]
functional_test.go:269: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-916000 image ls --format json --alsologtostderr:
I0819 04:17:50.106791   16926 out.go:345] Setting OutFile to fd 1 ...
I0819 04:17:50.106938   16926 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0819 04:17:50.106942   16926 out.go:358] Setting ErrFile to fd 2...
I0819 04:17:50.106944   16926 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0819 04:17:50.107099   16926 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19479-15750/.minikube/bin
I0819 04:17:50.107525   16926 config.go:182] Loaded profile config "functional-916000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0819 04:17:50.107585   16926 config.go:182] Loaded profile config "functional-916000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
functional_test.go:275: expected ["registry.k8s.io/pause to be listed with minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageListJson (0.04s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (0.04s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:261: (dbg) Run:  out/minikube-darwin-arm64 -p functional-916000 image ls --format yaml --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-916000 image ls --format yaml --alsologtostderr:
[]

                                                
                                                
functional_test.go:269: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-916000 image ls --format yaml --alsologtostderr:
I0819 04:17:50.070049   16924 out.go:345] Setting OutFile to fd 1 ...
I0819 04:17:50.070189   16924 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0819 04:17:50.070192   16924 out.go:358] Setting ErrFile to fd 2...
I0819 04:17:50.070194   16924 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0819 04:17:50.070337   16924 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19479-15750/.minikube/bin
I0819 04:17:50.070789   16924 config.go:182] Loaded profile config "functional-916000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0819 04:17:50.070849   16924 config.go:182] Loaded profile config "functional-916000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
functional_test.go:275: expected - registry.k8s.io/pause to be listed with minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageListYaml (0.04s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageBuild (0.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:308: (dbg) Run:  out/minikube-darwin-arm64 -p functional-916000 ssh pgrep buildkitd
functional_test.go:308: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-916000 ssh pgrep buildkitd: exit status 83 (41.7615ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-916000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-916000"

                                                
                                                
-- /stdout --
functional_test.go:315: (dbg) Run:  out/minikube-darwin-arm64 -p functional-916000 image build -t localhost/my-image:functional-916000 testdata/build --alsologtostderr
functional_test.go:323: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-916000 image build -t localhost/my-image:functional-916000 testdata/build --alsologtostderr:
I0819 04:17:50.220359   16932 out.go:345] Setting OutFile to fd 1 ...
I0819 04:17:50.220928   16932 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0819 04:17:50.220939   16932 out.go:358] Setting ErrFile to fd 2...
I0819 04:17:50.220941   16932 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0819 04:17:50.221132   16932 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19479-15750/.minikube/bin
I0819 04:17:50.221577   16932 config.go:182] Loaded profile config "functional-916000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0819 04:17:50.222039   16932 config.go:182] Loaded profile config "functional-916000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0819 04:17:50.222305   16932 build_images.go:133] succeeded building to: 
I0819 04:17:50.222311   16932 build_images.go:134] failed building to: 
functional_test.go:451: (dbg) Run:  out/minikube-darwin-arm64 -p functional-916000 image ls
functional_test.go:446: expected "localhost/my-image:functional-916000" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageBuild (0.11s)
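
Note: image build first probes for buildkitd over SSH (the pgrep call above) and otherwise builds through the container runtime; with no guest, either path "succeeds" at building nothing, which is why the follow-up image ls cannot find the tag. The intended round trip, using the same arguments as the test:

	$ out/minikube-darwin-arm64 -p functional-916000 image build -t localhost/my-image:functional-916000 testdata/build
	$ out/minikube-darwin-arm64 -p functional-916000 image ls | grep localhost/my-image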

                                                
                                    
TestFunctional/parallel/DockerEnv/bash (0.04s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:499: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-arm64 -p functional-916000 docker-env) && out/minikube-darwin-arm64 status -p functional-916000"
functional_test.go:499: (dbg) Non-zero exit: /bin/bash -c "eval $(out/minikube-darwin-arm64 -p functional-916000 docker-env) && out/minikube-darwin-arm64 status -p functional-916000": exit status 1 (43.9225ms)
functional_test.go:505: failed to do status after eval-ing docker-env. error: exit status 1
--- FAIL: TestFunctional/parallel/DockerEnv/bash (0.04s)
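
Note: docker-env emits shell exports that point the host's docker client at the daemon inside the VM; the test evals them and then re-runs status through the same shell. When the VM is up, the exports look roughly like this (address illustrative):

	$ out/minikube-darwin-arm64 -p functional-916000 docker-env
	export DOCKER_TLS_VERIFY="1"
	export DOCKER_HOST="tcp://192.168.105.4:2376"
	export DOCKER_CERT_PATH="/Users/jenkins/minikube-integration/19479-15750/.minikube/certs"
	export MINIKUBE_ACTIVE_DOCKERD="functional-916000"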

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (0.04s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2119: (dbg) Run:  out/minikube-darwin-arm64 -p functional-916000 update-context --alsologtostderr -v=2
functional_test.go:2119: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-916000 update-context --alsologtostderr -v=2: exit status 83 (42.620625ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-916000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-916000"

                                                
                                                
-- /stdout --
** stderr ** 
	I0819 04:17:49.904214   16916 out.go:345] Setting OutFile to fd 1 ...
	I0819 04:17:49.905143   16916 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 04:17:49.905147   16916 out.go:358] Setting ErrFile to fd 2...
	I0819 04:17:49.905149   16916 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 04:17:49.905315   16916 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19479-15750/.minikube/bin
	I0819 04:17:49.905538   16916 mustload.go:65] Loading cluster: functional-916000
	I0819 04:17:49.905733   16916 config.go:182] Loaded profile config "functional-916000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0819 04:17:49.909917   16916 out.go:177] * The control-plane node functional-916000 host is not running: state=Stopped
	I0819 04:17:49.913718   16916 out.go:177]   To start a cluster, run: "minikube start -p functional-916000"

                                                
                                                
** /stderr **
functional_test.go:2121: failed to run minikube update-context: args "out/minikube-darwin-arm64 -p functional-916000 update-context --alsologtostderr -v=2": exit status 83
functional_test.go:2126: update-context: got="* The control-plane node functional-916000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-916000\"\n", want=*"No changes"*
--- FAIL: TestFunctional/parallel/UpdateContextCmd/no_changes (0.04s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.04s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2119: (dbg) Run:  out/minikube-darwin-arm64 -p functional-916000 update-context --alsologtostderr -v=2
functional_test.go:2119: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-916000 update-context --alsologtostderr -v=2: exit status 83 (42.445458ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-916000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-916000"

                                                
                                                
-- /stdout --
** stderr ** 
	I0819 04:17:49.991628   16920 out.go:345] Setting OutFile to fd 1 ...
	I0819 04:17:49.991776   16920 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 04:17:49.991780   16920 out.go:358] Setting ErrFile to fd 2...
	I0819 04:17:49.991783   16920 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 04:17:49.991906   16920 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19479-15750/.minikube/bin
	I0819 04:17:49.992111   16920 mustload.go:65] Loading cluster: functional-916000
	I0819 04:17:49.992295   16920 config.go:182] Loaded profile config "functional-916000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0819 04:17:49.996858   16920 out.go:177] * The control-plane node functional-916000 host is not running: state=Stopped
	I0819 04:17:49.999899   16920 out.go:177]   To start a cluster, run: "minikube start -p functional-916000"

                                                
                                                
** /stderr **
functional_test.go:2121: failed to run minikube update-context: args "out/minikube-darwin-arm64 -p functional-916000 update-context --alsologtostderr -v=2": exit status 83
functional_test.go:2126: update-context: got="* The control-plane node functional-916000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-916000\"\n", want=*"context has been updated"*
--- FAIL: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.04s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.04s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2119: (dbg) Run:  out/minikube-darwin-arm64 -p functional-916000 update-context --alsologtostderr -v=2
functional_test.go:2119: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-916000 update-context --alsologtostderr -v=2: exit status 83 (42.784542ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-916000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-916000"

                                                
                                                
-- /stdout --
** stderr ** 
	I0819 04:17:49.947794   16918 out.go:345] Setting OutFile to fd 1 ...
	I0819 04:17:49.947954   16918 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 04:17:49.947957   16918 out.go:358] Setting ErrFile to fd 2...
	I0819 04:17:49.947959   16918 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 04:17:49.948090   16918 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19479-15750/.minikube/bin
	I0819 04:17:49.948308   16918 mustload.go:65] Loading cluster: functional-916000
	I0819 04:17:49.948506   16918 config.go:182] Loaded profile config "functional-916000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0819 04:17:49.952868   16918 out.go:177] * The control-plane node functional-916000 host is not running: state=Stopped
	I0819 04:17:49.956859   16918 out.go:177]   To start a cluster, run: "minikube start -p functional-916000"

                                                
                                                
** /stderr **
functional_test.go:2121: failed to run minikube update-context: args "out/minikube-darwin-arm64 -p functional-916000 update-context --alsologtostderr -v=2": exit status 83
functional_test.go:2126: update-context: got="* The control-plane node functional-916000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-916000\"\n", want=*"context has been updated"*
--- FAIL: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.04s)
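
Note: all three update-context variants expect the command to rewrite (or confirm) the kubeconfig entry for the profile and to say so, per the want= patterns above ("No changes" / "context has been updated"); exit 83 aborts before the kubeconfig is ever touched. On a live profile the success path is simply:

	$ out/minikube-darwin-arm64 -p functional-916000 update-context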

                                                
                                    
TestFunctional/parallel/ServiceCmd/DeployApp (0.03s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1437: (dbg) Run:  kubectl --context functional-916000 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1437: (dbg) Non-zero exit: kubectl --context functional-916000 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8: exit status 1 (26.367583ms)

                                                
                                                
** stderr ** 
	error: context "functional-916000" does not exist

                                                
                                                
** /stderr **
functional_test.go:1443: failed to create hello-node deployment with this command "kubectl --context functional-916000 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8": exit status 1.
--- FAIL: TestFunctional/parallel/ServiceCmd/DeployApp (0.03s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/List (0.05s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1459: (dbg) Run:  out/minikube-darwin-arm64 -p functional-916000 service list
functional_test.go:1459: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-916000 service list: exit status 83 (47.509667ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-916000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-916000"

                                                
                                                
-- /stdout --
functional_test.go:1461: failed to do service list. args "out/minikube-darwin-arm64 -p functional-916000 service list" : exit status 83
functional_test.go:1464: expected 'service list' to contain *hello-node* but got -"* The control-plane node functional-916000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-916000\"\n"-
--- FAIL: TestFunctional/parallel/ServiceCmd/List (0.05s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/JSONOutput (0.04s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1489: (dbg) Run:  out/minikube-darwin-arm64 -p functional-916000 service list -o json
functional_test.go:1489: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-916000 service list -o json: exit status 83 (43.953333ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-916000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-916000"

                                                
                                                
-- /stdout --
functional_test.go:1491: failed to list services with json format. args "out/minikube-darwin-arm64 -p functional-916000 service list -o json": exit status 83
--- FAIL: TestFunctional/parallel/ServiceCmd/JSONOutput (0.04s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/HTTPS (0.04s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1509: (dbg) Run:  out/minikube-darwin-arm64 -p functional-916000 service --namespace=default --https --url hello-node
functional_test.go:1509: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-916000 service --namespace=default --https --url hello-node: exit status 83 (42.77275ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-916000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-916000"

                                                
                                                
-- /stdout --
functional_test.go:1511: failed to get service url. args "out/minikube-darwin-arm64 -p functional-916000 service --namespace=default --https --url hello-node" : exit status 83
--- FAIL: TestFunctional/parallel/ServiceCmd/HTTPS (0.04s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/Format (0.04s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1540: (dbg) Run:  out/minikube-darwin-arm64 -p functional-916000 service hello-node --url --format={{.IP}}
functional_test.go:1540: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-916000 service hello-node --url --format={{.IP}}: exit status 83 (41.881416ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-916000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-916000"

                                                
                                                
-- /stdout --
functional_test.go:1542: failed to get service url with custom format. args "out/minikube-darwin-arm64 -p functional-916000 service hello-node --url --format={{.IP}}": exit status 83
functional_test.go:1548: "* The control-plane node functional-916000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-916000\"" is not a valid IP
--- FAIL: TestFunctional/parallel/ServiceCmd/Format (0.04s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/URL (0.04s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1559: (dbg) Run:  out/minikube-darwin-arm64 -p functional-916000 service hello-node --url
functional_test.go:1559: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-916000 service hello-node --url: exit status 83 (41.769125ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-916000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-916000"

                                                
                                                
-- /stdout --
functional_test.go:1561: failed to get service url. args: "out/minikube-darwin-arm64 -p functional-916000 service hello-node --url": exit status 83
functional_test.go:1565: found endpoint for hello-node: * The control-plane node functional-916000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-916000"
functional_test.go:1569: failed to parse "* The control-plane node functional-916000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-916000\"": parse "* The control-plane node functional-916000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-916000\"": net/url: invalid control character in URL
--- FAIL: TestFunctional/parallel/ServiceCmd/URL (0.04s)
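
Note: the ServiceCmd subtests chain together: DeployApp creates the hello-node deployment, and List, JSONOutput, HTTPS, Format, and URL then interrogate the resulting service. With no cluster, every CLI call exits 83 with the same advisory, which the later subtests then fail to parse as an IP or URL. Successful --url output is a bare endpoint, roughly (address and port illustrative):

	$ out/minikube-darwin-arm64 -p functional-916000 service hello-node --url
	http://192.168.105.4:31234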

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.07s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-916000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-916000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:190: tunnel command failed with unexpected error: exit code 83. stderr: I0819 04:17:11.404018   16701 out.go:345] Setting OutFile to fd 1 ...
I0819 04:17:11.404178   16701 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0819 04:17:11.404182   16701 out.go:358] Setting ErrFile to fd 2...
I0819 04:17:11.404184   16701 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0819 04:17:11.404331   16701 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19479-15750/.minikube/bin
I0819 04:17:11.404550   16701 mustload.go:65] Loading cluster: functional-916000
I0819 04:17:11.404760   16701 config.go:182] Loaded profile config "functional-916000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0819 04:17:11.409478   16701 out.go:177] * The control-plane node functional-916000 host is not running: state=Stopped
I0819 04:17:11.416468   16701 out.go:177]   To start a cluster, run: "minikube start -p functional-916000"

                                                
                                                
stdout: * The control-plane node functional-916000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-916000"
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-arm64 -p functional-916000 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: read stdout failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-darwin-arm64 -p functional-916000 tunnel --alsologtostderr] stdout:
functional_test_tunnel_test.go:194: read stderr failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-darwin-arm64 -p functional-916000 tunnel --alsologtostderr] stderr:
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-arm64 -p functional-916000 tunnel --alsologtostderr] ...
helpers_test.go:502: unable to terminate pid 16700: os: process already finished
functional_test_tunnel_test.go:194: read stdout failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-darwin-arm64 -p functional-916000 tunnel --alsologtostderr] stdout:
functional_test_tunnel_test.go:194: read stderr failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-darwin-arm64 -p functional-916000 tunnel --alsologtostderr] stderr:
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.07s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:208: failed to get Kubernetes client for "functional-916000": client config: context "functional-916000" does not exist
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (110.51s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:288: failed to hit nginx at "http://": Temporary Error: Get "http:": http: no Host in request URL
functional_test_tunnel_test.go:290: (dbg) Run:  kubectl --context functional-916000 get svc nginx-svc
functional_test_tunnel_test.go:290: (dbg) Non-zero exit: kubectl --context functional-916000 get svc nginx-svc: exit status 1 (69.89ms)

                                                
                                                
** stderr ** 
	Error in configuration: context was not found for specified context: functional-916000

                                                
                                                
** /stderr **
functional_test_tunnel_test.go:292: kubectl --context functional-916000 get svc nginx-svc failed: exit status 1
functional_test_tunnel_test.go:294: failed to kubectl get svc nginx-svc:
functional_test_tunnel_test.go:301: expected body to contain "Welcome to nginx!", but got *""*
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (110.51s)
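
Note: AccessDirect relies on the tunnel from the earlier subtests routing the service's LoadBalancer IP to the host; since the tunnel never started, the test is left with an empty IP and builds the malformed URL "http:" seen above. The working sequence, sketched:

	$ out/minikube-darwin-arm64 -p functional-916000 tunnel &
	$ kubectl --context functional-916000 get svc nginx-svc   # EXTERNAL-IP leaves <pending>
	$ curl http://<EXTERNAL-IP>/                               # body contains "Welcome to nginx!"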

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (0.32s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:355: (dbg) Run:  out/minikube-darwin-arm64 -p functional-916000 image load --daemon kicbase/echo-server:functional-916000 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-darwin-arm64 -p functional-916000 image ls
functional_test.go:446: expected "kicbase/echo-server:functional-916000" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (0.32s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:365: (dbg) Run:  out/minikube-darwin-arm64 -p functional-916000 image load --daemon kicbase/echo-server:functional-916000 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-darwin-arm64 -p functional-916000 image ls
functional_test.go:446: expected "kicbase/echo-server:functional-916000" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.28s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.12s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:235: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:240: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-916000
functional_test.go:245: (dbg) Run:  out/minikube-darwin-arm64 -p functional-916000 image load --daemon kicbase/echo-server:functional-916000 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-darwin-arm64 -p functional-916000 image ls
functional_test.go:446: expected "kicbase/echo-server:functional-916000" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.12s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.04s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:380: (dbg) Run:  out/minikube-darwin-arm64 -p functional-916000 image save kicbase/echo-server:functional-916000 /Users/jenkins/workspace/echo-server-save.tar --alsologtostderr
functional_test.go:386: expected "/Users/jenkins/workspace/echo-server-save.tar" to exist after `image save`, but doesn't exist
--- FAIL: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.04s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.07s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:409: (dbg) Run:  out/minikube-darwin-arm64 -p functional-916000 image load /Users/jenkins/workspace/echo-server-save.tar --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-darwin-arm64 -p functional-916000 image ls
functional_test.go:446: expected "kicbase/echo-server:functional-916000" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.07s)
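
Note: ImageSaveToFile and ImageLoadFromFile are two halves of a round trip: save should write a tarball to the host, and load should import it back into the runtime. Because save produced no file, there is nothing for load to import, and both assertions fail. The intended flow, with the same paths the tests use:

	$ out/minikube-darwin-arm64 -p functional-916000 image save kicbase/echo-server:functional-916000 /Users/jenkins/workspace/echo-server-save.tar
	$ out/minikube-darwin-arm64 -p functional-916000 image load /Users/jenkins/workspace/echo-server-save.tar
	$ out/minikube-darwin-arm64 -p functional-916000 image ls | grep echo-server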

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (15.06s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:319: (dbg) Run:  dig +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A
functional_test_tunnel_test.go:319: (dbg) Non-zero exit: dig +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A: exit status 9 (15.030183792s)

                                                
                                                
-- stdout --
	
	; <<>> DiG 9.10.6 <<>> +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A
	; (1 server found)
	;; global options: +cmd
	;; connection timed out; no servers could be reached

                                                
                                                
-- /stdout --
functional_test_tunnel_test.go:322: failed to resolve DNS name: exit status 9
functional_test_tunnel_test.go:329: expected body to contain "ANSWER: 1", but got *"\n; <<>> DiG 9.10.6 <<>> +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A\n; (1 server found)\n;; global options: +cmd\n;; connection timed out; no servers could be reached\n"*
functional_test_tunnel_test.go:332: (dbg) Run:  scutil --dns
functional_test_tunnel_test.go:336: debug for DNS configuration:
DNS configuration

                                                
                                                
resolver #1
nameserver[0] : 207.254.72.253
nameserver[1] : 207.254.72.254
nameserver[2] : 8.8.8.8
flags    : Request A records
reach    : 0x00000002 (Reachable)

                                                
                                                
resolver #2
domain   : local
options  : mdns
timeout  : 5
flags    : Request A records
reach    : 0x00000000 (Not Reachable)
order    : 300000

                                                
                                                
resolver #3
domain   : 254.169.in-addr.arpa
options  : mdns
timeout  : 5
flags    : Request A records
reach    : 0x00000000 (Not Reachable)
order    : 300200

                                                
                                                
resolver #4
domain   : 8.e.f.ip6.arpa
options  : mdns
timeout  : 5
flags    : Request A records
reach    : 0x00000000 (Not Reachable)
order    : 300400

                                                
                                                
resolver #5
domain   : 9.e.f.ip6.arpa
options  : mdns
timeout  : 5
flags    : Request A records
reach    : 0x00000000 (Not Reachable)
order    : 300600

                                                
                                                
resolver #6
domain   : a.e.f.ip6.arpa
options  : mdns
timeout  : 5
flags    : Request A records
reach    : 0x00000000 (Not Reachable)
order    : 300800

                                                
                                                
resolver #7
domain   : b.e.f.ip6.arpa
options  : mdns
timeout  : 5
flags    : Request A records
reach    : 0x00000000 (Not Reachable)
order    : 301000

                                                
                                                
resolver #8
domain   : cluster.local
nameserver[0] : 10.96.0.10
flags    : Request A records
reach    : 0x00000002 (Reachable)
order    : 1

                                                
                                                
DNS configuration (for scoped queries)

                                                
                                                
resolver #1
nameserver[0] : 207.254.72.253
nameserver[1] : 207.254.72.254
nameserver[2] : 8.8.8.8
if_index : 16 (en0)
flags    : Scoped, Request A records
reach    : 0x00000002 (Reachable)
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (15.06s)
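
Note: the dig probe queries the cluster DNS service directly from the host at 10.96.0.10 (visible above as the scoped cluster.local resolver), which only answers while a tunnel is routing the service CIDR. With the tunnel up, the reply carries a single A record for the service's ClusterIP, i.e. an "ANSWER: 1" header plus a section roughly like this (address and TTL illustrative):

	;; ANSWER SECTION:
	nginx-svc.default.svc.cluster.local. 30 IN A 10.101.23.45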

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (38.23s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:419: failed to hit nginx with DNS forwarded "http://nginx-svc.default.svc.cluster.local.": Temporary Error: Get "http://nginx-svc.default.svc.cluster.local.": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
functional_test_tunnel_test.go:426: expected body to contain "Welcome to nginx!", but got *""*
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (38.23s)

                                                
                                    
TestMultiControlPlane/serial/StartCluster (9.93s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-darwin-arm64 start -p ha-534000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=qemu2 
ha_test.go:101: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p ha-534000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=qemu2 : exit status 80 (9.859548917s)

                                                
                                                
-- stdout --
	* [ha-534000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19479
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19479-15750/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19479-15750/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "ha-534000" primary control-plane node in "ha-534000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "ha-534000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
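
Note: the two "Connection refused" errors above are the root cause behind this run's cascade of failures: on macOS the qemu2 driver reaches its network through the socket_vmnet daemon listening at /var/run/socket_vmnet (the SocketVMnetPath in the cluster config below), and with nothing listening there every VM create fails, so no profile ever reaches Running. A quick host-side check, assuming the install paths shown in this config:

	$ ls -l /var/run/socket_vmnet
	$ pgrep -fl socket_vmnet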
** stderr ** 
	I0819 04:20:05.681852   16982 out.go:345] Setting OutFile to fd 1 ...
	I0819 04:20:05.681975   16982 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 04:20:05.681978   16982 out.go:358] Setting ErrFile to fd 2...
	I0819 04:20:05.681980   16982 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 04:20:05.682115   16982 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19479-15750/.minikube/bin
	I0819 04:20:05.683212   16982 out.go:352] Setting JSON to false
	I0819 04:20:05.699424   16982 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":8373,"bootTime":1724058032,"procs":460,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0819 04:20:05.699490   16982 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0819 04:20:05.705582   16982 out.go:177] * [ha-534000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0819 04:20:05.713615   16982 out.go:177]   - MINIKUBE_LOCATION=19479
	I0819 04:20:05.713654   16982 notify.go:220] Checking for updates...
	I0819 04:20:05.721534   16982 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19479-15750/kubeconfig
	I0819 04:20:05.725455   16982 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0819 04:20:05.728486   16982 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0819 04:20:05.731501   16982 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19479-15750/.minikube
	I0819 04:20:05.734484   16982 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0819 04:20:05.737742   16982 driver.go:392] Setting default libvirt URI to qemu:///system
	I0819 04:20:05.741508   16982 out.go:177] * Using the qemu2 driver based on user configuration
	I0819 04:20:05.748517   16982 start.go:297] selected driver: qemu2
	I0819 04:20:05.748524   16982 start.go:901] validating driver "qemu2" against <nil>
	I0819 04:20:05.748529   16982 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0819 04:20:05.750972   16982 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0819 04:20:05.753616   16982 out.go:177] * Automatically selected the socket_vmnet network
	I0819 04:20:05.755004   16982 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0819 04:20:05.755047   16982 cni.go:84] Creating CNI manager for ""
	I0819 04:20:05.755053   16982 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0819 04:20:05.755063   16982 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0819 04:20:05.755097   16982 start.go:340] cluster config:
	{Name:ha-534000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-534000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 04:20:05.758991   16982 iso.go:125] acquiring lock: {Name:mk82d926f4bb778c7e93ba3bc4244459b219e238 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 04:20:05.766522   16982 out.go:177] * Starting "ha-534000" primary control-plane node in "ha-534000" cluster
	I0819 04:20:05.770540   16982 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0819 04:20:05.770558   16982 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19479-15750/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0819 04:20:05.770569   16982 cache.go:56] Caching tarball of preloaded images
	I0819 04:20:05.770632   16982 preload.go:172] Found /Users/jenkins/minikube-integration/19479-15750/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0819 04:20:05.770638   16982 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0819 04:20:05.770866   16982 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19479-15750/.minikube/profiles/ha-534000/config.json ...
	I0819 04:20:05.770878   16982 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19479-15750/.minikube/profiles/ha-534000/config.json: {Name:mkf9f3885c7e7076c9f612fb57a7fb44263a6c77 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 04:20:05.771210   16982 start.go:360] acquireMachinesLock for ha-534000: {Name:mkd71805c0324a991576936a19be749d2702d472 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 04:20:05.771246   16982 start.go:364] duration metric: took 30.542µs to acquireMachinesLock for "ha-534000"
	I0819 04:20:05.771260   16982 start.go:93] Provisioning new machine with config: &{Name:ha-534000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-534000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0819 04:20:05.771291   16982 start.go:125] createHost starting for "" (driver="qemu2")
	I0819 04:20:05.780470   16982 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0819 04:20:05.798621   16982 start.go:159] libmachine.API.Create for "ha-534000" (driver="qemu2")
	I0819 04:20:05.798653   16982 client.go:168] LocalClient.Create starting
	I0819 04:20:05.798722   16982 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19479-15750/.minikube/certs/ca.pem
	I0819 04:20:05.798755   16982 main.go:141] libmachine: Decoding PEM data...
	I0819 04:20:05.798764   16982 main.go:141] libmachine: Parsing certificate...
	I0819 04:20:05.798806   16982 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19479-15750/.minikube/certs/cert.pem
	I0819 04:20:05.798830   16982 main.go:141] libmachine: Decoding PEM data...
	I0819 04:20:05.798843   16982 main.go:141] libmachine: Parsing certificate...
	I0819 04:20:05.799206   16982 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19479-15750/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19479-15750/.minikube/cache/iso/arm64/minikube-v1.33.1-1723740674-19452-arm64.iso...
	I0819 04:20:05.953680   16982 main.go:141] libmachine: Creating SSH key...
	I0819 04:20:05.988403   16982 main.go:141] libmachine: Creating Disk image...
	I0819 04:20:05.988409   16982 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0819 04:20:05.988638   16982 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/ha-534000/disk.qcow2.raw /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/ha-534000/disk.qcow2
	I0819 04:20:05.998226   16982 main.go:141] libmachine: STDOUT: 
	I0819 04:20:05.998243   16982 main.go:141] libmachine: STDERR: 
	I0819 04:20:05.998289   16982 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/ha-534000/disk.qcow2 +20000M
	I0819 04:20:06.006188   16982 main.go:141] libmachine: STDOUT: Image resized.
	
	I0819 04:20:06.006208   16982 main.go:141] libmachine: STDERR: 
	I0819 04:20:06.006226   16982 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/ha-534000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/ha-534000/disk.qcow2
	I0819 04:20:06.006233   16982 main.go:141] libmachine: Starting QEMU VM...
	I0819 04:20:06.006242   16982 qemu.go:418] Using hvf for hardware acceleration
	I0819 04:20:06.006271   16982 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/ha-534000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19479-15750/.minikube/machines/ha-534000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/ha-534000/qemu.pid -device virtio-net-pci,netdev=net0,mac=5a:cb:96:77:8a:3c -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/ha-534000/disk.qcow2
	I0819 04:20:06.007935   16982 main.go:141] libmachine: STDOUT: 
	I0819 04:20:06.007952   16982 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0819 04:20:06.007969   16982 client.go:171] duration metric: took 209.315334ms to LocalClient.Create
	I0819 04:20:08.010116   16982 start.go:128] duration metric: took 2.238843541s to createHost
	I0819 04:20:08.010187   16982 start.go:83] releasing machines lock for "ha-534000", held for 2.23897075s
	W0819 04:20:08.010306   16982 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0819 04:20:08.024336   16982 out.go:177] * Deleting "ha-534000" in qemu2 ...
	W0819 04:20:08.051056   16982 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0819 04:20:08.051083   16982 start.go:729] Will try again in 5 seconds ...
	I0819 04:20:13.053251   16982 start.go:360] acquireMachinesLock for ha-534000: {Name:mkd71805c0324a991576936a19be749d2702d472 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 04:20:13.053681   16982 start.go:364] duration metric: took 342.958µs to acquireMachinesLock for "ha-534000"
	I0819 04:20:13.053798   16982 start.go:93] Provisioning new machine with config: &{Name:ha-534000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-534000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0819 04:20:13.054098   16982 start.go:125] createHost starting for "" (driver="qemu2")
	I0819 04:20:13.070706   16982 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0819 04:20:13.120782   16982 start.go:159] libmachine.API.Create for "ha-534000" (driver="qemu2")
	I0819 04:20:13.120830   16982 client.go:168] LocalClient.Create starting
	I0819 04:20:13.120936   16982 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19479-15750/.minikube/certs/ca.pem
	I0819 04:20:13.120995   16982 main.go:141] libmachine: Decoding PEM data...
	I0819 04:20:13.121009   16982 main.go:141] libmachine: Parsing certificate...
	I0819 04:20:13.121074   16982 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19479-15750/.minikube/certs/cert.pem
	I0819 04:20:13.121117   16982 main.go:141] libmachine: Decoding PEM data...
	I0819 04:20:13.121128   16982 main.go:141] libmachine: Parsing certificate...
	I0819 04:20:13.121645   16982 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19479-15750/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19479-15750/.minikube/cache/iso/arm64/minikube-v1.33.1-1723740674-19452-arm64.iso...
	I0819 04:20:13.284300   16982 main.go:141] libmachine: Creating SSH key...
	I0819 04:20:13.445569   16982 main.go:141] libmachine: Creating Disk image...
	I0819 04:20:13.445575   16982 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0819 04:20:13.445824   16982 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/ha-534000/disk.qcow2.raw /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/ha-534000/disk.qcow2
	I0819 04:20:13.455346   16982 main.go:141] libmachine: STDOUT: 
	I0819 04:20:13.455379   16982 main.go:141] libmachine: STDERR: 
	I0819 04:20:13.455436   16982 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/ha-534000/disk.qcow2 +20000M
	I0819 04:20:13.463401   16982 main.go:141] libmachine: STDOUT: Image resized.
	
	I0819 04:20:13.463437   16982 main.go:141] libmachine: STDERR: 
	I0819 04:20:13.463452   16982 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/ha-534000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/ha-534000/disk.qcow2
	I0819 04:20:13.463455   16982 main.go:141] libmachine: Starting QEMU VM...
	I0819 04:20:13.463460   16982 qemu.go:418] Using hvf for hardware acceleration
	I0819 04:20:13.463492   16982 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/ha-534000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19479-15750/.minikube/machines/ha-534000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/ha-534000/qemu.pid -device virtio-net-pci,netdev=net0,mac=26:7b:fb:61:d5:3c -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/ha-534000/disk.qcow2
	I0819 04:20:13.465130   16982 main.go:141] libmachine: STDOUT: 
	I0819 04:20:13.465146   16982 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0819 04:20:13.465157   16982 client.go:171] duration metric: took 344.326666ms to LocalClient.Create
	I0819 04:20:15.467330   16982 start.go:128] duration metric: took 2.413222417s to createHost
	I0819 04:20:15.467431   16982 start.go:83] releasing machines lock for "ha-534000", held for 2.413769916s
	W0819 04:20:15.467892   16982 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p ha-534000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p ha-534000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0819 04:20:15.479532   16982 out.go:201] 
	W0819 04:20:15.485615   16982 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0819 04:20:15.485650   16982 out.go:270] * 
	* 
	W0819 04:20:15.488071   16982 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0819 04:20:15.497575   16982 out.go:201] 

** /stderr **
ha_test.go:103: failed to fresh-start ha (multi-control plane) cluster. args "out/minikube-darwin-arm64 start -p ha-534000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-534000 -n ha-534000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-534000 -n ha-534000: exit status 7 (67.172959ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-534000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/StartCluster (9.93s)
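Every TestMultiControlPlane failure that follows cascades from this one: the qemu2 driver hands the VM off to socket_vmnet_client, and the dial to the socket_vmnet daemon at /var/run/socket_vmnet is refused, so no host is ever created (both create attempts above fail identically). A minimal triage sketch, assuming a from-source socket_vmnet install under /opt/socket_vmnet managed by launchd; the plist label io.github.lima-vm.socket_vmnet is an assumption to verify on the host:

    # Is anything serving the unix socket the driver dials?
    ls -l /var/run/socket_vmnet
    pgrep -fl socket_vmnet
    # Assumed plist path for a source install; confirm the name before loading.
    sudo launchctl load -w /Library/LaunchDaemons/io.github.lima-vm.socket_vmnet.plist

If pgrep prints nothing here, every later qemu2 start in this run fails the same way.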

TestMultiControlPlane/serial/DeployApp (80.7s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-534000 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:128: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-534000 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml: exit status 1 (60.249ms)

** stderr ** 
	error: cluster "ha-534000" does not exist

** /stderr **
ha_test.go:130: failed to create busybox deployment to ha (multi-control plane) cluster
ha_test.go:133: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-534000 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-534000 -- rollout status deployment/busybox: exit status 1 (57.819875ms)

** stderr ** 
	error: no server found for cluster "ha-534000"

** /stderr **
ha_test.go:135: failed to deploy busybox to ha (multi-control plane) cluster
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-534000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-534000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (57.325208ms)

** stderr ** 
	error: no server found for cluster "ha-534000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-534000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-534000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (105.55ms)

** stderr ** 
	error: no server found for cluster "ha-534000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-534000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-534000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (104.546167ms)

** stderr ** 
	error: no server found for cluster "ha-534000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-534000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-534000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.033375ms)

** stderr ** 
	error: no server found for cluster "ha-534000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-534000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-534000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (104.485917ms)

** stderr ** 
	error: no server found for cluster "ha-534000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-534000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-534000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (104.356959ms)

** stderr ** 
	error: no server found for cluster "ha-534000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-534000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-534000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (105.3455ms)

** stderr ** 
	error: no server found for cluster "ha-534000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-534000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-534000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (106.514083ms)

** stderr ** 
	error: no server found for cluster "ha-534000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-534000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-534000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (102.848291ms)

** stderr ** 
	error: no server found for cluster "ha-534000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-534000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-534000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (108.620709ms)

** stderr ** 
	error: no server found for cluster "ha-534000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:159: failed to resolve pod IPs: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:163: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-534000 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:163: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-534000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (57.221333ms)

** stderr ** 
	error: no server found for cluster "ha-534000"

** /stderr **
ha_test.go:165: failed get Pod names
ha_test.go:171: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-534000 -- exec  -- nslookup kubernetes.io
ha_test.go:171: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-534000 -- exec  -- nslookup kubernetes.io: exit status 1 (57.782917ms)

** stderr ** 
	error: no server found for cluster "ha-534000"

** /stderr **
ha_test.go:173: Pod  could not resolve 'kubernetes.io': exit status 1
ha_test.go:181: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-534000 -- exec  -- nslookup kubernetes.default
ha_test.go:181: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-534000 -- exec  -- nslookup kubernetes.default: exit status 1 (58.075917ms)

** stderr ** 
	error: no server found for cluster "ha-534000"

** /stderr **
ha_test.go:183: Pod  could not resolve 'kubernetes.default': exit status 1
ha_test.go:189: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-534000 -- exec  -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-534000 -- exec  -- nslookup kubernetes.default.svc.cluster.local: exit status 1 (57.678209ms)

** stderr ** 
	error: no server found for cluster "ha-534000"

** /stderr **
ha_test.go:191: Pod  could not resolve local service (kubernetes.default.svc.cluster.local): exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-534000 -n ha-534000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-534000 -n ha-534000: exit status 7 (30.831917ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-534000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DeployApp (80.70s)
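Both stderr shapes in this test ('cluster "ha-534000" does not exist', then 'no server found for cluster "ha-534000"') are downstream of the StartCluster failure, not a kubectl regression: minikube kubectl -p resolves the profile's kubeconfig entry, and no API server ever came up behind it. A quick confirmation sketch:

    # Host state for the profile (Stopped is expected after the failed start)
    out/minikube-darwin-arm64 status -p ha-534000
    # Whatever kubeconfig entry the profile has, if any
    kubectl config get-contexts ha-534000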

TestMultiControlPlane/serial/PingHostFromPods (0.09s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-534000 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:199: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-534000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (57.250625ms)

** stderr ** 
	error: no server found for cluster "ha-534000"

** /stderr **
ha_test.go:201: failed to get Pod names: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-534000 -n ha-534000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-534000 -n ha-534000: exit status 7 (30.924708ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-534000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/PingHostFromPods (0.09s)

TestMultiControlPlane/serial/AddWorkerNode (0.07s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 node add -p ha-534000 -v=7 --alsologtostderr
ha_test.go:228: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p ha-534000 -v=7 --alsologtostderr: exit status 83 (41.783125ms)

-- stdout --
	* The control-plane node ha-534000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p ha-534000"

-- /stdout --
** stderr ** 
	I0819 04:21:36.395727   17059 out.go:345] Setting OutFile to fd 1 ...
	I0819 04:21:36.396293   17059 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 04:21:36.396297   17059 out.go:358] Setting ErrFile to fd 2...
	I0819 04:21:36.396299   17059 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 04:21:36.396447   17059 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19479-15750/.minikube/bin
	I0819 04:21:36.396704   17059 mustload.go:65] Loading cluster: ha-534000
	I0819 04:21:36.396900   17059 config.go:182] Loaded profile config "ha-534000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0819 04:21:36.401686   17059 out.go:177] * The control-plane node ha-534000 host is not running: state=Stopped
	I0819 04:21:36.405456   17059 out.go:177]   To start a cluster, run: "minikube start -p ha-534000"

** /stderr **
ha_test.go:230: failed to add worker node to current ha (multi-control plane) cluster. args "out/minikube-darwin-arm64 node add -p ha-534000 -v=7 --alsologtostderr" : exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-534000 -n ha-534000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-534000 -n ha-534000: exit status 7 (30.036792ms)

-- stdout --
	Stopped

                                                
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-534000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/AddWorkerNode (0.07s)
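Exit status 83 is a deliberate early exit rather than a crash: mustload.go loads the profile, sees the host in state Stopped, and prints the advice shown in stdout. Once socket_vmnet is reachable again, the recovery path is the one the output itself suggests, sketched here:

    out/minikube-darwin-arm64 start -p ha-534000
    out/minikube-darwin-arm64 node add -p ha-534000 -v=7 --alsologtostderr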

TestMultiControlPlane/serial/NodeLabels (0.06s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-534000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
ha_test.go:255: (dbg) Non-zero exit: kubectl --context ha-534000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]": exit status 1 (26.482541ms)

** stderr ** 
	Error in configuration: context was not found for specified context: ha-534000

** /stderr **
ha_test.go:257: failed to 'kubectl get nodes' with args "kubectl --context ha-534000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": exit status 1
ha_test.go:264: failed to decode json from label list: args "kubectl --context ha-534000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": unexpected end of JSON input
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-534000 -n ha-534000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-534000 -n ha-534000: exit status 7 (31.067959ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-534000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/NodeLabels (0.06s)
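Unlike the earlier steps, this one calls plain kubectl with --context ha-534000 rather than going through minikube kubectl -p, so the error changes shape: the context was never written because the cluster never bootstrapped. A sketch for inspecting what the kubeconfig actually contains:

    kubectl config get-contexts
    kubectl config view -o jsonpath='{.contexts[*].name}'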

TestMultiControlPlane/serial/HAppyAfterClusterStart (0.08s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:304: expected profile "ha-534000" in json of 'profile list' to include 4 nodes but have 1 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-534000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-534000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.0\",\"ClusterName\":\"ha-534000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.0\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
ha_test.go:307: expected profile "ha-534000" in json of 'profile list' to have "HAppy" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-534000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-534000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.0\",\"ClusterName\":\"ha-534000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.0\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-534000 -n ha-534000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-534000 -n ha-534000: exit status 7 (30.515417ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-534000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.08s)
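The assertion decodes profile list --output json and expects four nodes (the three --ha control planes plus the worker from AddWorkerNode) with a "HAppy" status; the stored config never grew past its initial single-node entry. A sketch for extracting the two checked fields by hand, assuming jq is available:

    out/minikube-darwin-arm64 profile list --output json \
      | jq '.valid[] | select(.Name == "ha-534000") | {Status, nodes: (.Config.Nodes | length)}'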

TestMultiControlPlane/serial/CopyFile (0.06s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-darwin-arm64 -p ha-534000 status --output json -v=7 --alsologtostderr
ha_test.go:326: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-534000 status --output json -v=7 --alsologtostderr: exit status 7 (30.320292ms)

-- stdout --
	{"Name":"ha-534000","Host":"Stopped","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Stopped","Worker":false}

-- /stdout --
** stderr ** 
	I0819 04:21:36.603922   17071 out.go:345] Setting OutFile to fd 1 ...
	I0819 04:21:36.604074   17071 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 04:21:36.604077   17071 out.go:358] Setting ErrFile to fd 2...
	I0819 04:21:36.604080   17071 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 04:21:36.604218   17071 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19479-15750/.minikube/bin
	I0819 04:21:36.604345   17071 out.go:352] Setting JSON to true
	I0819 04:21:36.604359   17071 mustload.go:65] Loading cluster: ha-534000
	I0819 04:21:36.604417   17071 notify.go:220] Checking for updates...
	I0819 04:21:36.604555   17071 config.go:182] Loaded profile config "ha-534000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0819 04:21:36.604560   17071 status.go:255] checking status of ha-534000 ...
	I0819 04:21:36.604759   17071 status.go:330] ha-534000 host status = "Stopped" (err=<nil>)
	I0819 04:21:36.604763   17071 status.go:343] host is not running, skipping remaining checks
	I0819 04:21:36.604765   17071 status.go:257] ha-534000 status: &{Name:ha-534000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:333: failed to decode json from status: args "out/minikube-darwin-arm64 -p ha-534000 status --output json -v=7 --alsologtostderr": json: cannot unmarshal object into Go value of type []cmd.Status
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-534000 -n ha-534000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-534000 -n ha-534000: exit status 7 (30.991ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-534000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/CopyFile (0.06s)
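The unmarshal error pinpoints the mismatch: the test decodes into a slice ([]cmd.Status), but with a single (stopped) node minikube emitted one JSON object rather than an array, so decoding fails before any copying is attempted. The shape difference is easy to see directly, again assuming jq:

    # Prints "object" for this single-node profile; a multi-node HA cluster
    # would yield "array" here.
    out/minikube-darwin-arm64 -p ha-534000 status --output json | jq type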

TestMultiControlPlane/serial/StopSecondaryNode (0.11s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-darwin-arm64 -p ha-534000 node stop m02 -v=7 --alsologtostderr
ha_test.go:363: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-534000 node stop m02 -v=7 --alsologtostderr: exit status 85 (46.899916ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I0819 04:21:36.666045   17075 out.go:345] Setting OutFile to fd 1 ...
	I0819 04:21:36.666453   17075 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 04:21:36.666457   17075 out.go:358] Setting ErrFile to fd 2...
	I0819 04:21:36.666460   17075 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 04:21:36.666626   17075 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19479-15750/.minikube/bin
	I0819 04:21:36.666871   17075 mustload.go:65] Loading cluster: ha-534000
	I0819 04:21:36.667085   17075 config.go:182] Loaded profile config "ha-534000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0819 04:21:36.671241   17075 out.go:201] 
	W0819 04:21:36.674218   17075 out.go:270] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
	W0819 04:21:36.674223   17075 out.go:270] * 
	* 
	W0819 04:21:36.676401   17075 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_8ce24bb09be8aab84076d51946735f62cbf80299_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_8ce24bb09be8aab84076d51946735f62cbf80299_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0819 04:21:36.679174   17075 out.go:201] 

** /stderr **
ha_test.go:365: secondary control-plane node stop returned an error. args "out/minikube-darwin-arm64 -p ha-534000 node stop m02 -v=7 --alsologtostderr": exit status 85
ha_test.go:369: (dbg) Run:  out/minikube-darwin-arm64 -p ha-534000 status -v=7 --alsologtostderr
ha_test.go:369: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-534000 status -v=7 --alsologtostderr: exit status 7 (30.682416ms)

-- stdout --
	ha-534000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0819 04:21:36.713192   17077 out.go:345] Setting OutFile to fd 1 ...
	I0819 04:21:36.713341   17077 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 04:21:36.713344   17077 out.go:358] Setting ErrFile to fd 2...
	I0819 04:21:36.713346   17077 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 04:21:36.713506   17077 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19479-15750/.minikube/bin
	I0819 04:21:36.713624   17077 out.go:352] Setting JSON to false
	I0819 04:21:36.713638   17077 mustload.go:65] Loading cluster: ha-534000
	I0819 04:21:36.713685   17077 notify.go:220] Checking for updates...
	I0819 04:21:36.713866   17077 config.go:182] Loaded profile config "ha-534000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0819 04:21:36.713871   17077 status.go:255] checking status of ha-534000 ...
	I0819 04:21:36.714104   17077 status.go:330] ha-534000 host status = "Stopped" (err=<nil>)
	I0819 04:21:36.714108   17077 status.go:343] host is not running, skipping remaining checks
	I0819 04:21:36.714110   17077 status.go:257] ha-534000 status: &{Name:ha-534000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:375: status says not all three control-plane nodes are present: args "out/minikube-darwin-arm64 -p ha-534000 status -v=7 --alsologtostderr": ha-534000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha_test.go:378: status says not three hosts are running: args "out/minikube-darwin-arm64 -p ha-534000 status -v=7 --alsologtostderr": ha-534000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha_test.go:381: status says not three kubelets are running: args "out/minikube-darwin-arm64 -p ha-534000 status -v=7 --alsologtostderr": ha-534000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha_test.go:384: status says not two apiservers are running: args "out/minikube-darwin-arm64 -p ha-534000 status -v=7 --alsologtostderr": ha-534000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-534000 -n ha-534000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-534000 -n ha-534000: exit status 7 (29.94375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-534000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/StopSecondaryNode (0.11s)
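
The four assertions above (ha_test.go:375, 378, 381, 384) all inspect the same `minikube status` text: with the host stopped, the command prints a single stopped control-plane entry where the HA test expects three nodes. A minimal sketch of that kind of check, assuming it simply counts substrings in the captured output (the real ha_test.go helpers may differ):

	// status_count_sketch.go -- illustrative only; assumes the HA test
	// counts substrings in the combined `minikube status` output.
	package main

	import (
		"fmt"
		"strings"
	)

	func main() {
		// The output captured above: one stopped control-plane node.
		out := "ha-534000\ntype: Control Plane\nhost: Stopped\n" +
			"kubelet: Stopped\napiserver: Stopped\nkubeconfig: Stopped\n"

		fmt.Println("control planes:", strings.Count(out, "type: Control Plane")) // got 1, want 3
		fmt.Println("hosts running:", strings.Count(out, "host: Running"))        // got 0, want 3
		fmt.Println("kubelets running:", strings.Count(out, "kubelet: Running"))  // got 0, want 3
		fmt.Println("apiservers up:", strings.Count(out, "apiserver: Running"))   // got 0, want 2
	}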

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.08s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:413: expected profile "ha-534000" in json of 'profile list' to have "Degraded" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-534000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-534000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\"
:1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.0\",\"ClusterName\":\"ha-534000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.0
\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSo
ck\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-534000 -n ha-534000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-534000 -n ha-534000: exit status 7 (30.669625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-534000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.08s)
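
The JSON blob above is the complete `profile list --output json` payload; the assertion at ha_test.go:413 only reads the top-level "Status" of the "ha-534000" entry under "valid". A minimal sketch of that extraction, with struct fields taken from the quoted JSON (the decoding code itself is illustrative, not minikube's own):

	// profile_status_sketch.go -- illustrative decoder; models only the
	// fields the assertion reads from the payload quoted above.
	package main

	import (
		"encoding/json"
		"fmt"
		"log"
		"os/exec"
	)

	type profileList struct {
		Valid []struct {
			Name   string `json:"Name"`
			Status string `json:"Status"`
		} `json:"valid"`
	}

	func main() {
		out, err := exec.Command("out/minikube-darwin-arm64",
			"profile", "list", "--output", "json").Output()
		if err != nil {
			log.Fatal(err)
		}
		var pl profileList
		if err := json.Unmarshal(out, &pl); err != nil {
			log.Fatal(err)
		}
		for _, p := range pl.Valid {
			// The test wants "Degraded" after one node stops; the log
			// shows "Stopped" because the whole single-node VM is down.
			fmt.Printf("%s: %s\n", p.Name, p.Status)
		}
	}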

TestMultiControlPlane/serial/RestartSecondaryNode (46.78s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-darwin-arm64 -p ha-534000 node start m02 -v=7 --alsologtostderr
ha_test.go:420: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-534000 node start m02 -v=7 --alsologtostderr: exit status 85 (50.078125ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I0819 04:21:36.850605   17086 out.go:345] Setting OutFile to fd 1 ...
	I0819 04:21:36.851353   17086 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 04:21:36.851357   17086 out.go:358] Setting ErrFile to fd 2...
	I0819 04:21:36.851359   17086 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 04:21:36.851508   17086 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19479-15750/.minikube/bin
	I0819 04:21:36.851739   17086 mustload.go:65] Loading cluster: ha-534000
	I0819 04:21:36.851924   17086 config.go:182] Loaded profile config "ha-534000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0819 04:21:36.855252   17086 out.go:201] 
	W0819 04:21:36.859234   17086 out.go:270] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
	W0819 04:21:36.859240   17086 out.go:270] * 
	* 
	W0819 04:21:36.861419   17086 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0819 04:21:36.866076   17086 out.go:201] 

** /stderr **
ha_test.go:422: I0819 04:21:36.850605   17086 out.go:345] Setting OutFile to fd 1 ...
I0819 04:21:36.851353   17086 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0819 04:21:36.851357   17086 out.go:358] Setting ErrFile to fd 2...
I0819 04:21:36.851359   17086 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0819 04:21:36.851508   17086 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19479-15750/.minikube/bin
I0819 04:21:36.851739   17086 mustload.go:65] Loading cluster: ha-534000
I0819 04:21:36.851924   17086 config.go:182] Loaded profile config "ha-534000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0819 04:21:36.855252   17086 out.go:201] 
W0819 04:21:36.859234   17086 out.go:270] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
W0819 04:21:36.859240   17086 out.go:270] * 
* 
W0819 04:21:36.861419   17086 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
I0819 04:21:36.866076   17086 out.go:201] 

ha_test.go:423: secondary control-plane node start returned an error. args "out/minikube-darwin-arm64 -p ha-534000 node start m02 -v=7 --alsologtostderr": exit status 85
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-534000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-534000 status -v=7 --alsologtostderr: exit status 7 (30.419125ms)

-- stdout --
	ha-534000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0819 04:21:36.900038   17088 out.go:345] Setting OutFile to fd 1 ...
	I0819 04:21:36.900217   17088 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 04:21:36.900220   17088 out.go:358] Setting ErrFile to fd 2...
	I0819 04:21:36.900223   17088 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 04:21:36.900366   17088 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19479-15750/.minikube/bin
	I0819 04:21:36.900487   17088 out.go:352] Setting JSON to false
	I0819 04:21:36.900498   17088 mustload.go:65] Loading cluster: ha-534000
	I0819 04:21:36.900554   17088 notify.go:220] Checking for updates...
	I0819 04:21:36.900693   17088 config.go:182] Loaded profile config "ha-534000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0819 04:21:36.900698   17088 status.go:255] checking status of ha-534000 ...
	I0819 04:21:36.900900   17088 status.go:330] ha-534000 host status = "Stopped" (err=<nil>)
	I0819 04:21:36.900904   17088 status.go:343] host is not running, skipping remaining checks
	I0819 04:21:36.900906   17088 status.go:257] ha-534000 status: &{Name:ha-534000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-534000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-534000 status -v=7 --alsologtostderr: exit status 7 (73.881834ms)

-- stdout --
	ha-534000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0819 04:21:37.640007   17090 out.go:345] Setting OutFile to fd 1 ...
	I0819 04:21:37.640201   17090 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 04:21:37.640205   17090 out.go:358] Setting ErrFile to fd 2...
	I0819 04:21:37.640208   17090 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 04:21:37.640375   17090 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19479-15750/.minikube/bin
	I0819 04:21:37.640543   17090 out.go:352] Setting JSON to false
	I0819 04:21:37.640558   17090 mustload.go:65] Loading cluster: ha-534000
	I0819 04:21:37.640605   17090 notify.go:220] Checking for updates...
	I0819 04:21:37.640811   17090 config.go:182] Loaded profile config "ha-534000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0819 04:21:37.640817   17090 status.go:255] checking status of ha-534000 ...
	I0819 04:21:37.641110   17090 status.go:330] ha-534000 host status = "Stopped" (err=<nil>)
	I0819 04:21:37.641116   17090 status.go:343] host is not running, skipping remaining checks
	I0819 04:21:37.641119   17090 status.go:257] ha-534000 status: &{Name:ha-534000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-534000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-534000 status -v=7 --alsologtostderr: exit status 7 (73.692417ms)

-- stdout --
	ha-534000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0819 04:21:38.817671   17092 out.go:345] Setting OutFile to fd 1 ...
	I0819 04:21:38.817889   17092 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 04:21:38.817893   17092 out.go:358] Setting ErrFile to fd 2...
	I0819 04:21:38.817897   17092 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 04:21:38.818078   17092 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19479-15750/.minikube/bin
	I0819 04:21:38.818242   17092 out.go:352] Setting JSON to false
	I0819 04:21:38.818262   17092 mustload.go:65] Loading cluster: ha-534000
	I0819 04:21:38.818302   17092 notify.go:220] Checking for updates...
	I0819 04:21:38.818536   17092 config.go:182] Loaded profile config "ha-534000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0819 04:21:38.818543   17092 status.go:255] checking status of ha-534000 ...
	I0819 04:21:38.818820   17092 status.go:330] ha-534000 host status = "Stopped" (err=<nil>)
	I0819 04:21:38.818825   17092 status.go:343] host is not running, skipping remaining checks
	I0819 04:21:38.818828   17092 status.go:257] ha-534000 status: &{Name:ha-534000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-534000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-534000 status -v=7 --alsologtostderr: exit status 7 (74.574875ms)

-- stdout --
	ha-534000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0819 04:21:40.477373   17094 out.go:345] Setting OutFile to fd 1 ...
	I0819 04:21:40.477566   17094 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 04:21:40.477570   17094 out.go:358] Setting ErrFile to fd 2...
	I0819 04:21:40.477573   17094 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 04:21:40.477755   17094 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19479-15750/.minikube/bin
	I0819 04:21:40.477903   17094 out.go:352] Setting JSON to false
	I0819 04:21:40.477917   17094 mustload.go:65] Loading cluster: ha-534000
	I0819 04:21:40.477959   17094 notify.go:220] Checking for updates...
	I0819 04:21:40.478202   17094 config.go:182] Loaded profile config "ha-534000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0819 04:21:40.478208   17094 status.go:255] checking status of ha-534000 ...
	I0819 04:21:40.478477   17094 status.go:330] ha-534000 host status = "Stopped" (err=<nil>)
	I0819 04:21:40.478482   17094 status.go:343] host is not running, skipping remaining checks
	I0819 04:21:40.478485   17094 status.go:257] ha-534000 status: &{Name:ha-534000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-534000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-534000 status -v=7 --alsologtostderr: exit status 7 (75.958917ms)

-- stdout --
	ha-534000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0819 04:21:43.593882   17096 out.go:345] Setting OutFile to fd 1 ...
	I0819 04:21:43.594089   17096 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 04:21:43.594094   17096 out.go:358] Setting ErrFile to fd 2...
	I0819 04:21:43.594098   17096 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 04:21:43.594282   17096 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19479-15750/.minikube/bin
	I0819 04:21:43.594463   17096 out.go:352] Setting JSON to false
	I0819 04:21:43.594479   17096 mustload.go:65] Loading cluster: ha-534000
	I0819 04:21:43.594517   17096 notify.go:220] Checking for updates...
	I0819 04:21:43.594765   17096 config.go:182] Loaded profile config "ha-534000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0819 04:21:43.594772   17096 status.go:255] checking status of ha-534000 ...
	I0819 04:21:43.595061   17096 status.go:330] ha-534000 host status = "Stopped" (err=<nil>)
	I0819 04:21:43.595066   17096 status.go:343] host is not running, skipping remaining checks
	I0819 04:21:43.595069   17096 status.go:257] ha-534000 status: &{Name:ha-534000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-534000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-534000 status -v=7 --alsologtostderr: exit status 7 (72.821792ms)

-- stdout --
	ha-534000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0819 04:21:48.428679   17098 out.go:345] Setting OutFile to fd 1 ...
	I0819 04:21:48.428908   17098 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 04:21:48.428912   17098 out.go:358] Setting ErrFile to fd 2...
	I0819 04:21:48.428915   17098 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 04:21:48.429096   17098 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19479-15750/.minikube/bin
	I0819 04:21:48.429243   17098 out.go:352] Setting JSON to false
	I0819 04:21:48.429260   17098 mustload.go:65] Loading cluster: ha-534000
	I0819 04:21:48.429303   17098 notify.go:220] Checking for updates...
	I0819 04:21:48.429538   17098 config.go:182] Loaded profile config "ha-534000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0819 04:21:48.429545   17098 status.go:255] checking status of ha-534000 ...
	I0819 04:21:48.429857   17098 status.go:330] ha-534000 host status = "Stopped" (err=<nil>)
	I0819 04:21:48.429862   17098 status.go:343] host is not running, skipping remaining checks
	I0819 04:21:48.429865   17098 status.go:257] ha-534000 status: &{Name:ha-534000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-534000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-534000 status -v=7 --alsologtostderr: exit status 7 (76.150125ms)

-- stdout --
	ha-534000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0819 04:21:59.004759   17100 out.go:345] Setting OutFile to fd 1 ...
	I0819 04:21:59.004942   17100 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 04:21:59.004947   17100 out.go:358] Setting ErrFile to fd 2...
	I0819 04:21:59.004950   17100 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 04:21:59.005138   17100 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19479-15750/.minikube/bin
	I0819 04:21:59.005314   17100 out.go:352] Setting JSON to false
	I0819 04:21:59.005330   17100 mustload.go:65] Loading cluster: ha-534000
	I0819 04:21:59.005376   17100 notify.go:220] Checking for updates...
	I0819 04:21:59.005594   17100 config.go:182] Loaded profile config "ha-534000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0819 04:21:59.005602   17100 status.go:255] checking status of ha-534000 ...
	I0819 04:21:59.005887   17100 status.go:330] ha-534000 host status = "Stopped" (err=<nil>)
	I0819 04:21:59.005892   17100 status.go:343] host is not running, skipping remaining checks
	I0819 04:21:59.005895   17100 status.go:257] ha-534000 status: &{Name:ha-534000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-534000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-534000 status -v=7 --alsologtostderr: exit status 7 (73.384375ms)

-- stdout --
	ha-534000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0819 04:22:08.030997   17104 out.go:345] Setting OutFile to fd 1 ...
	I0819 04:22:08.031190   17104 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 04:22:08.031194   17104 out.go:358] Setting ErrFile to fd 2...
	I0819 04:22:08.031197   17104 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 04:22:08.031367   17104 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19479-15750/.minikube/bin
	I0819 04:22:08.031529   17104 out.go:352] Setting JSON to false
	I0819 04:22:08.031544   17104 mustload.go:65] Loading cluster: ha-534000
	I0819 04:22:08.031579   17104 notify.go:220] Checking for updates...
	I0819 04:22:08.031801   17104 config.go:182] Loaded profile config "ha-534000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0819 04:22:08.031808   17104 status.go:255] checking status of ha-534000 ...
	I0819 04:22:08.032111   17104 status.go:330] ha-534000 host status = "Stopped" (err=<nil>)
	I0819 04:22:08.032116   17104 status.go:343] host is not running, skipping remaining checks
	I0819 04:22:08.032119   17104 status.go:257] ha-534000 status: &{Name:ha-534000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-534000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-534000 status -v=7 --alsologtostderr: exit status 7 (76.178583ms)

-- stdout --
	ha-534000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0819 04:22:23.563684   17110 out.go:345] Setting OutFile to fd 1 ...
	I0819 04:22:23.563896   17110 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 04:22:23.563900   17110 out.go:358] Setting ErrFile to fd 2...
	I0819 04:22:23.563904   17110 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 04:22:23.564068   17110 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19479-15750/.minikube/bin
	I0819 04:22:23.564231   17110 out.go:352] Setting JSON to false
	I0819 04:22:23.564245   17110 mustload.go:65] Loading cluster: ha-534000
	I0819 04:22:23.564292   17110 notify.go:220] Checking for updates...
	I0819 04:22:23.564543   17110 config.go:182] Loaded profile config "ha-534000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0819 04:22:23.564552   17110 status.go:255] checking status of ha-534000 ...
	I0819 04:22:23.564848   17110 status.go:330] ha-534000 host status = "Stopped" (err=<nil>)
	I0819 04:22:23.564853   17110 status.go:343] host is not running, skipping remaining checks
	I0819 04:22:23.564856   17110 status.go:257] ha-534000 status: &{Name:ha-534000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:432: failed to run minikube status. args "out/minikube-darwin-arm64 -p ha-534000 status -v=7 --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-534000 -n ha-534000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-534000 -n ha-534000: exit status 7 (34.10125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-534000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/RestartSecondaryNode (46.78s)
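
The timestamps above (04:21:36 through 04:22:23) show ha_test.go:428 re-running `minikube status` at widening intervals, and every attempt exits with status 7 because the host never leaves "Stopped". A rough sketch of that polling pattern, with an assumed backoff schedule (the real test uses its own retry helper):

	// status_retry_sketch.go -- illustrative polling loop; the exact
	// backoff schedule is assumed, not copied from ha_test.go.
	package main

	import (
		"log"
		"os/exec"
		"time"
	)

	func main() {
		delay := 500 * time.Millisecond
		deadline := time.Now().Add(time.Minute)
		for time.Now().Before(deadline) {
			cmd := exec.Command("out/minikube-darwin-arm64",
				"-p", "ha-534000", "status", "-v=7", "--alsologtostderr")
			if err := cmd.Run(); err == nil {
				log.Println("status succeeded")
				return
			}
			time.Sleep(delay)
			delay *= 2 // widening gaps, as in the log above
		}
		log.Println("gave up: exit status 7 on every attempt")
	}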

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.08s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:304: expected profile "ha-534000" in json of 'profile list' to include 4 nodes but have 1 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-534000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-534000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPo
rt\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.0\",\"ClusterName\":\"ha-534000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.0\",\"ContainerRu
ntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHA
gentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
ha_test.go:307: expected profile "ha-534000" in json of 'profile list' to have "HAppy" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-534000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-534000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,
\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.0\",\"ClusterName\":\"ha-534000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.0\",
\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\
":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-534000 -n ha-534000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-534000 -n ha-534000: exit status 7 (29.840291ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-534000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.08s)

TestMultiControlPlane/serial/RestartClusterKeepsNodes (8.22s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-darwin-arm64 node list -p ha-534000 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-darwin-arm64 stop -p ha-534000 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Done: out/minikube-darwin-arm64 stop -p ha-534000 -v=7 --alsologtostderr: (2.871674834s)
ha_test.go:467: (dbg) Run:  out/minikube-darwin-arm64 start -p ha-534000 --wait=true -v=7 --alsologtostderr
ha_test.go:467: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p ha-534000 --wait=true -v=7 --alsologtostderr: exit status 80 (5.229426875s)

-- stdout --
	* [ha-534000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19479
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19479-15750/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19479-15750/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "ha-534000" primary control-plane node in "ha-534000" cluster
	* Restarting existing qemu2 VM for "ha-534000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "ha-534000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0819 04:22:26.645970   17139 out.go:345] Setting OutFile to fd 1 ...
	I0819 04:22:26.646201   17139 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 04:22:26.646208   17139 out.go:358] Setting ErrFile to fd 2...
	I0819 04:22:26.646211   17139 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 04:22:26.646507   17139 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19479-15750/.minikube/bin
	I0819 04:22:26.648031   17139 out.go:352] Setting JSON to false
	I0819 04:22:26.667200   17139 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":8514,"bootTime":1724058032,"procs":462,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0819 04:22:26.667267   17139 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0819 04:22:26.671856   17139 out.go:177] * [ha-534000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0819 04:22:26.679017   17139 out.go:177]   - MINIKUBE_LOCATION=19479
	I0819 04:22:26.679044   17139 notify.go:220] Checking for updates...
	I0819 04:22:26.686979   17139 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19479-15750/kubeconfig
	I0819 04:22:26.691056   17139 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0819 04:22:26.693891   17139 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0819 04:22:26.697019   17139 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19479-15750/.minikube
	I0819 04:22:26.700003   17139 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0819 04:22:26.703293   17139 config.go:182] Loaded profile config "ha-534000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0819 04:22:26.703351   17139 driver.go:392] Setting default libvirt URI to qemu:///system
	I0819 04:22:26.707928   17139 out.go:177] * Using the qemu2 driver based on existing profile
	I0819 04:22:26.714951   17139 start.go:297] selected driver: qemu2
	I0819 04:22:26.714958   17139 start.go:901] validating driver "qemu2" against &{Name:ha-534000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesV
ersion:v1.31.0 ClusterName:ha-534000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Mou
ntType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 04:22:26.715013   17139 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0819 04:22:26.717337   17139 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0819 04:22:26.717384   17139 cni.go:84] Creating CNI manager for ""
	I0819 04:22:26.717390   17139 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0819 04:22:26.717448   17139 start.go:340] cluster config:
	{Name:ha-534000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-534000 Namespace:default APIServerHAVIP: APIServe
rName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetri
cs:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 04:22:26.721209   17139 iso.go:125] acquiring lock: {Name:mk82d926f4bb778c7e93ba3bc4244459b219e238 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 04:22:26.729844   17139 out.go:177] * Starting "ha-534000" primary control-plane node in "ha-534000" cluster
	I0819 04:22:26.734004   17139 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0819 04:22:26.734018   17139 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19479-15750/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0819 04:22:26.734027   17139 cache.go:56] Caching tarball of preloaded images
	I0819 04:22:26.734082   17139 preload.go:172] Found /Users/jenkins/minikube-integration/19479-15750/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0819 04:22:26.734087   17139 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0819 04:22:26.734157   17139 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19479-15750/.minikube/profiles/ha-534000/config.json ...
	I0819 04:22:26.734568   17139 start.go:360] acquireMachinesLock for ha-534000: {Name:mkd71805c0324a991576936a19be749d2702d472 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 04:22:26.734606   17139 start.go:364] duration metric: took 31.917µs to acquireMachinesLock for "ha-534000"
	I0819 04:22:26.734616   17139 start.go:96] Skipping create...Using existing machine configuration
	I0819 04:22:26.734623   17139 fix.go:54] fixHost starting: 
	I0819 04:22:26.734750   17139 fix.go:112] recreateIfNeeded on ha-534000: state=Stopped err=<nil>
	W0819 04:22:26.734759   17139 fix.go:138] unexpected machine state, will restart: <nil>
	I0819 04:22:26.742969   17139 out.go:177] * Restarting existing qemu2 VM for "ha-534000" ...
	I0819 04:22:26.746982   17139 qemu.go:418] Using hvf for hardware acceleration
	I0819 04:22:26.747022   17139 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/ha-534000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19479-15750/.minikube/machines/ha-534000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/ha-534000/qemu.pid -device virtio-net-pci,netdev=net0,mac=26:7b:fb:61:d5:3c -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/ha-534000/disk.qcow2
	I0819 04:22:26.748965   17139 main.go:141] libmachine: STDOUT: 
	I0819 04:22:26.748985   17139 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0819 04:22:26.749014   17139 fix.go:56] duration metric: took 14.391917ms for fixHost
	I0819 04:22:26.749020   17139 start.go:83] releasing machines lock for "ha-534000", held for 14.409875ms
	W0819 04:22:26.749025   17139 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0819 04:22:26.749075   17139 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0819 04:22:26.749080   17139 start.go:729] Will try again in 5 seconds ...
	I0819 04:22:31.751298   17139 start.go:360] acquireMachinesLock for ha-534000: {Name:mkd71805c0324a991576936a19be749d2702d472 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 04:22:31.751685   17139 start.go:364] duration metric: took 281.208µs to acquireMachinesLock for "ha-534000"
	I0819 04:22:31.751797   17139 start.go:96] Skipping create...Using existing machine configuration
	I0819 04:22:31.751816   17139 fix.go:54] fixHost starting: 
	I0819 04:22:31.752538   17139 fix.go:112] recreateIfNeeded on ha-534000: state=Stopped err=<nil>
	W0819 04:22:31.752564   17139 fix.go:138] unexpected machine state, will restart: <nil>
	I0819 04:22:31.760574   17139 out.go:177] * Restarting existing qemu2 VM for "ha-534000" ...
	I0819 04:22:31.766627   17139 qemu.go:418] Using hvf for hardware acceleration
	I0819 04:22:31.766985   17139 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/ha-534000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19479-15750/.minikube/machines/ha-534000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/ha-534000/qemu.pid -device virtio-net-pci,netdev=net0,mac=26:7b:fb:61:d5:3c -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/ha-534000/disk.qcow2
	I0819 04:22:31.774650   17139 main.go:141] libmachine: STDOUT: 
	I0819 04:22:31.774754   17139 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0819 04:22:31.774865   17139 fix.go:56] duration metric: took 23.049292ms for fixHost
	I0819 04:22:31.774889   17139 start.go:83] releasing machines lock for "ha-534000", held for 23.181292ms
	W0819 04:22:31.775087   17139 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p ha-534000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p ha-534000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0819 04:22:31.783402   17139 out.go:201] 
	W0819 04:22:31.787401   17139 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0819 04:22:31.787423   17139 out.go:270] * 
	* 
	W0819 04:22:31.790057   17139 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0819 04:22:31.799353   17139 out.go:201] 

** /stderr **
ha_test.go:469: failed to run minikube start. args "out/minikube-darwin-arm64 node list -p ha-534000 -v=7 --alsologtostderr" : exit status 80
ha_test.go:472: (dbg) Run:  out/minikube-darwin-arm64 node list -p ha-534000
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-534000 -n ha-534000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-534000 -n ha-534000: exit status 7 (33.356625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-534000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/RestartClusterKeepsNodes (8.22s)
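Every failure in this run reduces to the same root cause: nothing is listening on /var/run/socket_vmnet, so socket_vmnet_client cannot hand QEMU a connected file descriptor and the VM never boots. A minimal triage sketch for the build agent, assuming socket_vmnet was installed under /opt/socket_vmnet as the command line above shows (the Homebrew service name is an assumption about how the daemon is managed):

    ls -l /var/run/socket_vmnet                   # does the unix socket exist at all?
    sudo lsof -U 2>/dev/null | grep socket_vmnet  # is a daemon actually bound to it?
    sudo brew services restart socket_vmnet       # restart the daemon if it has died

Until the daemon is back, every qemu2-driver test that selects the socket_vmnet network will fail identically, regardless of profile.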

TestMultiControlPlane/serial/DeleteSecondaryNode (0.11s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-darwin-arm64 -p ha-534000 node delete m03 -v=7 --alsologtostderr
ha_test.go:487: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-534000 node delete m03 -v=7 --alsologtostderr: exit status 83 (43.459125ms)

-- stdout --
	* The control-plane node ha-534000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p ha-534000"

-- /stdout --
** stderr ** 
	I0819 04:22:31.922771   17151 out.go:345] Setting OutFile to fd 1 ...
	I0819 04:22:31.923152   17151 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 04:22:31.923155   17151 out.go:358] Setting ErrFile to fd 2...
	I0819 04:22:31.923157   17151 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 04:22:31.923294   17151 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19479-15750/.minikube/bin
	I0819 04:22:31.923534   17151 mustload.go:65] Loading cluster: ha-534000
	I0819 04:22:31.923724   17151 config.go:182] Loaded profile config "ha-534000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0819 04:22:31.928525   17151 out.go:177] * The control-plane node ha-534000 host is not running: state=Stopped
	I0819 04:22:31.932318   17151 out.go:177]   To start a cluster, run: "minikube start -p ha-534000"

** /stderr **
ha_test.go:489: node delete returned an error. args "out/minikube-darwin-arm64 -p ha-534000 node delete m03 -v=7 --alsologtostderr": exit status 83
ha_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 -p ha-534000 status -v=7 --alsologtostderr
ha_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-534000 status -v=7 --alsologtostderr: exit status 7 (30.574458ms)

-- stdout --
	ha-534000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0819 04:22:31.966145   17153 out.go:345] Setting OutFile to fd 1 ...
	I0819 04:22:31.966299   17153 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 04:22:31.966302   17153 out.go:358] Setting ErrFile to fd 2...
	I0819 04:22:31.966304   17153 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 04:22:31.966419   17153 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19479-15750/.minikube/bin
	I0819 04:22:31.966547   17153 out.go:352] Setting JSON to false
	I0819 04:22:31.966558   17153 mustload.go:65] Loading cluster: ha-534000
	I0819 04:22:31.966611   17153 notify.go:220] Checking for updates...
	I0819 04:22:31.966742   17153 config.go:182] Loaded profile config "ha-534000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0819 04:22:31.966748   17153 status.go:255] checking status of ha-534000 ...
	I0819 04:22:31.966957   17153 status.go:330] ha-534000 host status = "Stopped" (err=<nil>)
	I0819 04:22:31.966961   17153 status.go:343] host is not running, skipping remaining checks
	I0819 04:22:31.966963   17153 status.go:257] ha-534000 status: &{Name:ha-534000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:495: failed to run minikube status. args "out/minikube-darwin-arm64 -p ha-534000 status -v=7 --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-534000 -n ha-534000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-534000 -n ha-534000: exit status 7 (31.197ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-534000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DeleteSecondaryNode (0.11s)
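Two different exit codes appear above and they mean different things: exit 83 accompanies minikube's "host is not running" advisory, while the status command encodes component health bitwise. Per "minikube status --help", the VM, cluster, and Kubernetes states are encoded on the exit status from right to left, so the recurring exit status 7 = 1 + 2 + 4 means all three are down, which is why the harness notes "may be ok". A sketch for decoding it by hand (bash arithmetic; paths as in the log):

    out/minikube-darwin-arm64 status -p ha-534000 >/dev/null 2>&1; rc=$?
    (( rc & 1 )) && echo "host not running"        # bit 0: VM/host
    (( rc & 2 )) && echo "cluster not running"     # bit 1: cluster
    (( rc & 4 )) && echo "kubernetes not running"  # bit 2: kubernetes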

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.08s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:413: expected profile "ha-534000" in json of 'profile list' to have "Degraded" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-534000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-534000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\"
:1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.0\",\"ClusterName\":\"ha-534000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.0
\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSo
ck\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-534000 -n ha-534000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-534000 -n ha-534000: exit status 7 (29.753042ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-534000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.08s)
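The assertion above compares a single field buried in the profile-list blob: .valid[].Status, expected "Degraded" but reported "Stopped" because the lone node never provisioned. When reading these logs by hand, a jq one-liner (assuming jq is available on the agent) pulls out just the fields the test cares about:

    out/minikube-darwin-arm64 profile list --output json \
      | jq -r '.valid[] | "\(.Name): \(.Status), \(.Config.Nodes | length) node(s)"'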

TestMultiControlPlane/serial/StopCluster (3.45s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-darwin-arm64 -p ha-534000 stop -v=7 --alsologtostderr
ha_test.go:531: (dbg) Done: out/minikube-darwin-arm64 -p ha-534000 stop -v=7 --alsologtostderr: (3.354491792s)
ha_test.go:537: (dbg) Run:  out/minikube-darwin-arm64 -p ha-534000 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-534000 status -v=7 --alsologtostderr: exit status 7 (65.289917ms)

-- stdout --
	ha-534000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0819 04:22:35.493226   17182 out.go:345] Setting OutFile to fd 1 ...
	I0819 04:22:35.493407   17182 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 04:22:35.493412   17182 out.go:358] Setting ErrFile to fd 2...
	I0819 04:22:35.493414   17182 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 04:22:35.493580   17182 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19479-15750/.minikube/bin
	I0819 04:22:35.493746   17182 out.go:352] Setting JSON to false
	I0819 04:22:35.493760   17182 mustload.go:65] Loading cluster: ha-534000
	I0819 04:22:35.493795   17182 notify.go:220] Checking for updates...
	I0819 04:22:35.494018   17182 config.go:182] Loaded profile config "ha-534000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0819 04:22:35.494025   17182 status.go:255] checking status of ha-534000 ...
	I0819 04:22:35.494308   17182 status.go:330] ha-534000 host status = "Stopped" (err=<nil>)
	I0819 04:22:35.494312   17182 status.go:343] host is not running, skipping remaining checks
	I0819 04:22:35.494315   17182 status.go:257] ha-534000 status: &{Name:ha-534000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:543: status says not two control-plane nodes are present: args "out/minikube-darwin-arm64 -p ha-534000 status -v=7 --alsologtostderr": ha-534000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha_test.go:549: status says not three kubelets are stopped: args "out/minikube-darwin-arm64 -p ha-534000 status -v=7 --alsologtostderr": ha-534000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha_test.go:552: status says not two apiservers are stopped: args "out/minikube-darwin-arm64 -p ha-534000 status -v=7 --alsologtostderr": ha-534000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-534000 -n ha-534000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-534000 -n ha-534000: exit status 7 (33.146333ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-534000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/StopCluster (3.45s)
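The three count assertions fail together because ha_test.go derives its counts from the status text itself, and only the single stopped control-plane node exists; two control planes, three kubelets, and two apiservers can never be found in that output. The counting is easy to reproduce from a shell (a sketch; the patterns mirror the status lines quoted above):

    status=$(out/minikube-darwin-arm64 -p ha-534000 status -v=7 2>/dev/null)
    echo "$status" | grep -c 'type: Control Plane'  # control-plane entries seen
    echo "$status" | grep -c 'kubelet: Stopped'     # stopped kubelets seen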

TestMultiControlPlane/serial/RestartCluster (5.26s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-darwin-arm64 start -p ha-534000 --wait=true -v=7 --alsologtostderr --driver=qemu2 
ha_test.go:560: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p ha-534000 --wait=true -v=7 --alsologtostderr --driver=qemu2 : exit status 80 (5.18250525s)

-- stdout --
	* [ha-534000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19479
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19479-15750/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19479-15750/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "ha-534000" primary control-plane node in "ha-534000" cluster
	* Restarting existing qemu2 VM for "ha-534000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "ha-534000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0819 04:22:35.556920   17186 out.go:345] Setting OutFile to fd 1 ...
	I0819 04:22:35.557052   17186 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 04:22:35.557056   17186 out.go:358] Setting ErrFile to fd 2...
	I0819 04:22:35.557058   17186 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 04:22:35.557192   17186 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19479-15750/.minikube/bin
	I0819 04:22:35.558195   17186 out.go:352] Setting JSON to false
	I0819 04:22:35.574182   17186 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":8523,"bootTime":1724058032,"procs":462,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0819 04:22:35.574250   17186 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0819 04:22:35.579295   17186 out.go:177] * [ha-534000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0819 04:22:35.587252   17186 out.go:177]   - MINIKUBE_LOCATION=19479
	I0819 04:22:35.587295   17186 notify.go:220] Checking for updates...
	I0819 04:22:35.593195   17186 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19479-15750/kubeconfig
	I0819 04:22:35.596226   17186 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0819 04:22:35.599276   17186 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0819 04:22:35.602168   17186 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19479-15750/.minikube
	I0819 04:22:35.605231   17186 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0819 04:22:35.608539   17186 config.go:182] Loaded profile config "ha-534000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0819 04:22:35.608808   17186 driver.go:392] Setting default libvirt URI to qemu:///system
	I0819 04:22:35.613192   17186 out.go:177] * Using the qemu2 driver based on existing profile
	I0819 04:22:35.620221   17186 start.go:297] selected driver: qemu2
	I0819 04:22:35.620228   17186 start.go:901] validating driver "qemu2" against &{Name:ha-534000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesV
ersion:v1.31.0 ClusterName:ha-534000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Mou
ntType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 04:22:35.620286   17186 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0819 04:22:35.622639   17186 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0819 04:22:35.622665   17186 cni.go:84] Creating CNI manager for ""
	I0819 04:22:35.622671   17186 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0819 04:22:35.622722   17186 start.go:340] cluster config:
	{Name:ha-534000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-534000 Namespace:default APIServerHAVIP: APIServe
rName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetri
cs:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 04:22:35.626249   17186 iso.go:125] acquiring lock: {Name:mk82d926f4bb778c7e93ba3bc4244459b219e238 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 04:22:35.633227   17186 out.go:177] * Starting "ha-534000" primary control-plane node in "ha-534000" cluster
	I0819 04:22:35.637045   17186 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0819 04:22:35.637062   17186 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19479-15750/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0819 04:22:35.637075   17186 cache.go:56] Caching tarball of preloaded images
	I0819 04:22:35.637136   17186 preload.go:172] Found /Users/jenkins/minikube-integration/19479-15750/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0819 04:22:35.637142   17186 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0819 04:22:35.637213   17186 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19479-15750/.minikube/profiles/ha-534000/config.json ...
	I0819 04:22:35.637628   17186 start.go:360] acquireMachinesLock for ha-534000: {Name:mkd71805c0324a991576936a19be749d2702d472 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 04:22:35.637663   17186 start.go:364] duration metric: took 28µs to acquireMachinesLock for "ha-534000"
	I0819 04:22:35.637673   17186 start.go:96] Skipping create...Using existing machine configuration
	I0819 04:22:35.637679   17186 fix.go:54] fixHost starting: 
	I0819 04:22:35.637793   17186 fix.go:112] recreateIfNeeded on ha-534000: state=Stopped err=<nil>
	W0819 04:22:35.637802   17186 fix.go:138] unexpected machine state, will restart: <nil>
	I0819 04:22:35.646218   17186 out.go:177] * Restarting existing qemu2 VM for "ha-534000" ...
	I0819 04:22:35.650207   17186 qemu.go:418] Using hvf for hardware acceleration
	I0819 04:22:35.650244   17186 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/ha-534000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19479-15750/.minikube/machines/ha-534000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/ha-534000/qemu.pid -device virtio-net-pci,netdev=net0,mac=26:7b:fb:61:d5:3c -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/ha-534000/disk.qcow2
	I0819 04:22:35.652160   17186 main.go:141] libmachine: STDOUT: 
	I0819 04:22:35.652184   17186 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0819 04:22:35.652211   17186 fix.go:56] duration metric: took 14.532792ms for fixHost
	I0819 04:22:35.652215   17186 start.go:83] releasing machines lock for "ha-534000", held for 14.548125ms
	W0819 04:22:35.652222   17186 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0819 04:22:35.652257   17186 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0819 04:22:35.652261   17186 start.go:729] Will try again in 5 seconds ...
	I0819 04:22:40.654325   17186 start.go:360] acquireMachinesLock for ha-534000: {Name:mkd71805c0324a991576936a19be749d2702d472 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 04:22:40.654771   17186 start.go:364] duration metric: took 309.125µs to acquireMachinesLock for "ha-534000"
	I0819 04:22:40.654920   17186 start.go:96] Skipping create...Using existing machine configuration
	I0819 04:22:40.654942   17186 fix.go:54] fixHost starting: 
	I0819 04:22:40.655662   17186 fix.go:112] recreateIfNeeded on ha-534000: state=Stopped err=<nil>
	W0819 04:22:40.655689   17186 fix.go:138] unexpected machine state, will restart: <nil>
	I0819 04:22:40.664172   17186 out.go:177] * Restarting existing qemu2 VM for "ha-534000" ...
	I0819 04:22:40.667104   17186 qemu.go:418] Using hvf for hardware acceleration
	I0819 04:22:40.667329   17186 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/ha-534000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19479-15750/.minikube/machines/ha-534000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/ha-534000/qemu.pid -device virtio-net-pci,netdev=net0,mac=26:7b:fb:61:d5:3c -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/ha-534000/disk.qcow2
	I0819 04:22:40.677046   17186 main.go:141] libmachine: STDOUT: 
	I0819 04:22:40.677122   17186 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0819 04:22:40.677265   17186 fix.go:56] duration metric: took 22.284459ms for fixHost
	I0819 04:22:40.677290   17186 start.go:83] releasing machines lock for "ha-534000", held for 22.486333ms
	W0819 04:22:40.677524   17186 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p ha-534000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p ha-534000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0819 04:22:40.682674   17186 out.go:201] 
	W0819 04:22:40.687222   17186 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0819 04:22:40.687257   17186 out.go:270] * 
	* 
	W0819 04:22:40.690083   17186 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0819 04:22:40.699233   17186 out.go:201] 

** /stderr **
ha_test.go:562: failed to start cluster. args "out/minikube-darwin-arm64 start -p ha-534000 --wait=true -v=7 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-534000 -n ha-534000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-534000 -n ha-534000: exit status 7 (71.078708ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-534000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/RestartCluster (5.26s)
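Note that the start path already retries once on its own ("Will try again in 5 seconds ...") and both attempts hit the same refused socket, so the advice minikube prints cannot help while the daemon is down: deleting the profile just recreates the VM against the same dead socket. Assuming socket_vmnet has been restored first (see the triage sketch earlier), the recovery the log itself suggests would be:

    out/minikube-darwin-arm64 delete -p ha-534000
    out/minikube-darwin-arm64 start -p ha-534000 --wait=true --driver=qemu2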

TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.08s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:413: expected profile "ha-534000" in json of 'profile list' to have "Degraded" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-534000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-534000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\"
:1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.0\",\"ClusterName\":\"ha-534000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.0
\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSo
ck\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-534000 -n ha-534000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-534000 -n ha-534000: exit status 7 (30.44325ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-534000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.08s)

TestMultiControlPlane/serial/AddSecondaryNode (0.07s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-darwin-arm64 node add -p ha-534000 --control-plane -v=7 --alsologtostderr
ha_test.go:605: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p ha-534000 --control-plane -v=7 --alsologtostderr: exit status 83 (41.043417ms)

-- stdout --
	* The control-plane node ha-534000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p ha-534000"

-- /stdout --
** stderr ** 
	I0819 04:22:40.893581   17204 out.go:345] Setting OutFile to fd 1 ...
	I0819 04:22:40.893727   17204 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 04:22:40.893733   17204 out.go:358] Setting ErrFile to fd 2...
	I0819 04:22:40.893736   17204 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 04:22:40.893860   17204 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19479-15750/.minikube/bin
	I0819 04:22:40.894099   17204 mustload.go:65] Loading cluster: ha-534000
	I0819 04:22:40.894287   17204 config.go:182] Loaded profile config "ha-534000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0819 04:22:40.898631   17204 out.go:177] * The control-plane node ha-534000 host is not running: state=Stopped
	I0819 04:22:40.901565   17204 out.go:177]   To start a cluster, run: "minikube start -p ha-534000"

** /stderr **
ha_test.go:607: failed to add control-plane node to current ha (multi-control plane) cluster. args "out/minikube-darwin-arm64 node add -p ha-534000 --control-plane -v=7 --alsologtostderr" : exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-534000 -n ha-534000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-534000 -n ha-534000: exit status 7 (30.912917ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-534000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/AddSecondaryNode (0.07s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.07s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:304: expected profile "ha-534000" in json of 'profile list' to include 4 nodes but have 1 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-534000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-534000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPo
rt\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.0\",\"ClusterName\":\"ha-534000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.0\",\"ContainerRu
ntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHA
gentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
ha_test.go:307: expected profile "ha-534000" in json of 'profile list' to have "HAppy" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-534000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-534000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,
\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.0\",\"ClusterName\":\"ha-534000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.0\",
\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\
":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-534000 -n ha-534000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-534000 -n ha-534000: exit status 7 (30.272334ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-534000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.07s)

TestImageBuild/serial/Setup (9.93s)

=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-darwin-arm64 start -p image-550000 --driver=qemu2 
image_test.go:69: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p image-550000 --driver=qemu2 : exit status 80 (9.865050333s)

-- stdout --
	* [image-550000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19479
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19479-15750/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19479-15750/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "image-550000" primary control-plane node in "image-550000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "image-550000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p image-550000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
image_test.go:70: failed to start minikube with args: "out/minikube-darwin-arm64 start -p image-550000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p image-550000 -n image-550000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p image-550000 -n image-550000: exit status 7 (68.390792ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "image-550000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestImageBuild/serial/Setup (9.93s)

TestJSONOutput/start/Command (9.86s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 start -p json-output-842000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p json-output-842000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 : exit status 80 (9.8578925s)

-- stdout --
	{"specversion":"1.0","id":"233f45b9-39e2-4b23-a615-e94ca0cee3dd","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-842000] minikube v1.33.1 on Darwin 14.5 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"ff817799-286e-4ad3-b047-8e39c3ee6cb6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19479"}}
	{"specversion":"1.0","id":"2544a0c4-9ef9-411a-94f5-96f228961651","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/19479-15750/kubeconfig"}}
	{"specversion":"1.0","id":"4a0d498a-d377-4a55-9771-83a3c3a80b97","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"ba147b08-fe76-43db-921c-52839d33cef4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"532b0275-6a0a-44c5-a71e-6f0423fea5ed","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/19479-15750/.minikube"}}
	{"specversion":"1.0","id":"40b707ef-4990-407c-b733-8ef25ac1331b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"3486e5dd-1639-47d6-ad6e-68357fee1954","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the qemu2 driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"c04a09e6-7047-4471-b5e7-9933cd0b080c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Automatically selected the socket_vmnet network"}}
	{"specversion":"1.0","id":"18942cf3-09eb-493f-a48f-357d3c2421d7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"json-output-842000\" primary control-plane node in \"json-output-842000\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"cc1555a3-147f-499d-8fd6-4f55268ea1e4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"9","message":"Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...","name":"Creating VM","totalsteps":"19"}}
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	{"specversion":"1.0","id":"9a98b166-9306-4428-a743-d9e5d10bbe36","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"9","message":"Deleting \"json-output-842000\" in qemu2 ...","name":"Creating VM","totalsteps":"19"}}
	{"specversion":"1.0","id":"12017c90-633e-4435-b47e-e09ca9ae7117","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"StartHost failed, but will try again: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"}}
	{"specversion":"1.0","id":"950545b5-18c6-4b8c-9218-b6d1b1cc7ec9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"9","message":"Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...","name":"Creating VM","totalsteps":"19"}}
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	{"specversion":"1.0","id":"3dc55d4d-cfc4-40b4-a1a6-bfa39e64e8d9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"Failed to start qemu2 VM. Running \"minikube delete -p json-output-842000\" may fix it: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"}}
	{"specversion":"1.0","id":"5fbb54a7-25b8-41fe-b9df-5dd9536bbdf1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1","name":"GUEST_PROVISION","url":""}}
	{"specversion":"1.0","id":"2145be35-1c6a-4a72-bcb8-d110f737839e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│                                                                                           │\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-darwin-arm64 start -p json-output-842000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 ": exit status 80
json_output_test.go:213: unable to marshal output: OUTPUT: 
json_output_test.go:70: converting to cloud events: invalid character 'O' looking for beginning of value
--- FAIL: TestJSONOutput/start/Command (9.86s)
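
The marshalling failure above is mechanical: the qemu driver wrote its raw "OUTPUT:" and "ERROR:" lines into what should be a pure stream of CloudEvents, so decoding stops at the first byte of "OUTPUT: " with `invalid character 'O' looking for beginning of value`. A minimal sketch of that line-by-line check, using only the Go standard library (an illustration of the failure mode, not the test suite's actual parser):

	package main

	import (
		"bufio"
		"encoding/json"
		"fmt"
		"os"
	)

	// Scans a `minikube start --output=json` stream on stdin and reports the
	// first line that is not valid JSON, reproducing the error logged by
	// json_output_test.go:70 above.
	func main() {
		sc := bufio.NewScanner(os.Stdin)
		sc.Buffer(make([]byte, 1024*1024), 1024*1024) // event lines can be long
		for n := 1; sc.Scan(); n++ {
			var ev map[string]interface{}
			if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
				fmt.Printf("line %d is not a CloudEvent: %v\n", n, err)
				return
			}
		}
	}
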

TestJSONOutput/pause/Command (0.08s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 pause -p json-output-842000 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p json-output-842000 --output=json --user=testUser: exit status 83 (82.732167ms)

-- stdout --
	{"specversion":"1.0","id":"75b12ce4-9fa1-4914-b3c2-419fe95bb94d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"The control-plane node json-output-842000 host is not running: state=Stopped"}}
	{"specversion":"1.0","id":"9380d3c1-a8c6-46f5-ac51-4cd5d4c5fec4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"To start a cluster, run: \"minikube start -p json-output-842000\""}}

-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-darwin-arm64 pause -p json-output-842000 --output=json --user=testUser": exit status 83
--- FAIL: TestJSONOutput/pause/Command (0.08s)

TestJSONOutput/unpause/Command (0.05s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 unpause -p json-output-842000 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-darwin-arm64 unpause -p json-output-842000 --output=json --user=testUser: exit status 83 (47.295042ms)

-- stdout --
	* The control-plane node json-output-842000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p json-output-842000"

-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-darwin-arm64 unpause -p json-output-842000 --output=json --user=testUser": exit status 83
json_output_test.go:213: unable to marshal output: * The control-plane node json-output-842000 host is not running: state=Stopped
json_output_test.go:70: converting to cloud events: invalid character '*' looking for beginning of value
--- FAIL: TestJSONOutput/unpause/Command (0.05s)
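
Note the contrast with the pause failure above: pause at least emitted CloudEvents for the "Stopped" message, while unpause printed the human-readable "*" lines despite --output=json, so decoding fails on the very first byte. A small sketch of the same check as a reusable function (the attribute list is taken from the events in this log; the function itself is illustrative, not part of the suite):

	package main

	import (
		"encoding/json"
		"fmt"
	)

	// Every well-formed line in this log is a CloudEvent carrying these
	// attributes. A plain-text line such as "* The control-plane node ..."
	// fails at Unmarshal with: invalid character '*' looking for beginning
	// of value.
	func validate(line []byte) error {
		var ev map[string]interface{}
		if err := json.Unmarshal(line, &ev); err != nil {
			return err
		}
		for _, k := range []string{"specversion", "id", "source", "type"} {
			if _, ok := ev[k]; !ok {
				return fmt.Errorf("missing attribute %q", k)
			}
		}
		return nil
	}

	func main() {
		fmt.Println(validate([]byte(`* The control-plane node json-output-842000 host is not running`)))
	}
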

TestMinikubeProfile (10.18s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-darwin-arm64 start -p first-431000 --driver=qemu2 
minikube_profile_test.go:44: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p first-431000 --driver=qemu2 : exit status 80 (9.87394475s)

-- stdout --
	* [first-431000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19479
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19479-15750/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19479-15750/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "first-431000" primary control-plane node in "first-431000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "first-431000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p first-431000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
minikube_profile_test.go:46: test pre-condition failed. args "out/minikube-darwin-arm64 start -p first-431000 --driver=qemu2 ": exit status 80
panic.go:626: *** TestMinikubeProfile FAILED at 2024-08-19 04:23:14.559756 -0700 PDT m=+442.845781751
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p second-433000 -n second-433000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p second-433000 -n second-433000: exit status 85 (83.610917ms)

-- stdout --
	* Profile "second-433000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p second-433000"

-- /stdout --
helpers_test.go:239: status error: exit status 85 (may be ok)
helpers_test.go:241: "second-433000" host is not running, skipping log retrieval (state="* Profile \"second-433000\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p second-433000\"")
helpers_test.go:175: Cleaning up "second-433000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p second-433000
panic.go:626: *** TestMinikubeProfile FAILED at 2024-08-19 04:23:14.755318 -0700 PDT m=+443.041347876
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p first-431000 -n first-431000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p first-431000 -n first-431000: exit status 7 (30.0705ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "first-431000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "first-431000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p first-431000
--- FAIL: TestMinikubeProfile (10.18s)
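
Every start in this run, this test included, dies at the same point: qemu is launched through socket_vmnet_client, and the connect to /var/run/socket_vmnet is refused, which means no socket_vmnet daemon is accepting on that path on the build host. A quick host-side probe, sketched with the Go standard library (the socket path is taken from the log; the probe itself is not part of the suite):

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	// Dials the unix socket the qemu2 driver depends on. "connection
	// refused" here matches the failure mode throughout this report: the
	// socket file may exist, but nothing is listening behind it.
	func main() {
		conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
		if err != nil {
			fmt.Println("socket_vmnet not reachable:", err)
			return
		}
		conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}
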

TestMountStart/serial/StartWithMountFirst (10.07s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-darwin-arm64 start -p mount-start-1-553000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 
mount_start_test.go:98: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p mount-start-1-553000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 : exit status 80 (10.005810625s)

-- stdout --
	* [mount-start-1-553000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19479
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19479-15750/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19479-15750/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting minikube without Kubernetes in cluster mount-start-1-553000
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "mount-start-1-553000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p mount-start-1-553000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
mount_start_test.go:100: failed to start minikube with args: "out/minikube-darwin-arm64 start -p mount-start-1-553000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p mount-start-1-553000 -n mount-start-1-553000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p mount-start-1-553000 -n mount-start-1-553000: exit status 7 (64.767542ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "mount-start-1-553000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMountStart/serial/StartWithMountFirst (10.07s)

TestMultiNode/serial/FreshStart2Nodes (9.88s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-746000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 
multinode_test.go:96: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-746000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 : exit status 80 (9.814101583s)

-- stdout --
	* [multinode-746000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19479
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19479-15750/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19479-15750/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "multinode-746000" primary control-plane node in "multinode-746000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-746000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0819 04:23:25.144307   17349 out.go:345] Setting OutFile to fd 1 ...
	I0819 04:23:25.144433   17349 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 04:23:25.144436   17349 out.go:358] Setting ErrFile to fd 2...
	I0819 04:23:25.144438   17349 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 04:23:25.144567   17349 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19479-15750/.minikube/bin
	I0819 04:23:25.145617   17349 out.go:352] Setting JSON to false
	I0819 04:23:25.161633   17349 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":8573,"bootTime":1724058032,"procs":460,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0819 04:23:25.161712   17349 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0819 04:23:25.166870   17349 out.go:177] * [multinode-746000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0819 04:23:25.174775   17349 out.go:177]   - MINIKUBE_LOCATION=19479
	I0819 04:23:25.174828   17349 notify.go:220] Checking for updates...
	I0819 04:23:25.182608   17349 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19479-15750/kubeconfig
	I0819 04:23:25.185796   17349 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0819 04:23:25.188778   17349 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0819 04:23:25.191787   17349 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19479-15750/.minikube
	I0819 04:23:25.194720   17349 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0819 04:23:25.197992   17349 driver.go:392] Setting default libvirt URI to qemu:///system
	I0819 04:23:25.202754   17349 out.go:177] * Using the qemu2 driver based on user configuration
	I0819 04:23:25.209770   17349 start.go:297] selected driver: qemu2
	I0819 04:23:25.209778   17349 start.go:901] validating driver "qemu2" against <nil>
	I0819 04:23:25.209787   17349 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0819 04:23:25.212002   17349 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0819 04:23:25.214790   17349 out.go:177] * Automatically selected the socket_vmnet network
	I0819 04:23:25.217799   17349 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0819 04:23:25.217840   17349 cni.go:84] Creating CNI manager for ""
	I0819 04:23:25.217846   17349 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0819 04:23:25.217851   17349 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0819 04:23:25.217900   17349 start.go:340] cluster config:
	{Name:multinode-746000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:multinode-746000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 04:23:25.221459   17349 iso.go:125] acquiring lock: {Name:mk82d926f4bb778c7e93ba3bc4244459b219e238 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 04:23:25.228725   17349 out.go:177] * Starting "multinode-746000" primary control-plane node in "multinode-746000" cluster
	I0819 04:23:25.232734   17349 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0819 04:23:25.232751   17349 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19479-15750/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0819 04:23:25.232769   17349 cache.go:56] Caching tarball of preloaded images
	I0819 04:23:25.232852   17349 preload.go:172] Found /Users/jenkins/minikube-integration/19479-15750/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0819 04:23:25.232860   17349 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0819 04:23:25.233107   17349 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19479-15750/.minikube/profiles/multinode-746000/config.json ...
	I0819 04:23:25.233120   17349 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19479-15750/.minikube/profiles/multinode-746000/config.json: {Name:mk219f90adcf3a522c227dc50940377fc6db199c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 04:23:25.233364   17349 start.go:360] acquireMachinesLock for multinode-746000: {Name:mkd71805c0324a991576936a19be749d2702d472 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 04:23:25.233401   17349 start.go:364] duration metric: took 29.958µs to acquireMachinesLock for "multinode-746000"
	I0819 04:23:25.233414   17349 start.go:93] Provisioning new machine with config: &{Name:multinode-746000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:multinode-746000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0819 04:23:25.233456   17349 start.go:125] createHost starting for "" (driver="qemu2")
	I0819 04:23:25.241750   17349 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0819 04:23:25.260001   17349 start.go:159] libmachine.API.Create for "multinode-746000" (driver="qemu2")
	I0819 04:23:25.260035   17349 client.go:168] LocalClient.Create starting
	I0819 04:23:25.260108   17349 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19479-15750/.minikube/certs/ca.pem
	I0819 04:23:25.260139   17349 main.go:141] libmachine: Decoding PEM data...
	I0819 04:23:25.260154   17349 main.go:141] libmachine: Parsing certificate...
	I0819 04:23:25.260193   17349 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19479-15750/.minikube/certs/cert.pem
	I0819 04:23:25.260218   17349 main.go:141] libmachine: Decoding PEM data...
	I0819 04:23:25.260226   17349 main.go:141] libmachine: Parsing certificate...
	I0819 04:23:25.260669   17349 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19479-15750/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19479-15750/.minikube/cache/iso/arm64/minikube-v1.33.1-1723740674-19452-arm64.iso...
	I0819 04:23:25.412411   17349 main.go:141] libmachine: Creating SSH key...
	I0819 04:23:25.475859   17349 main.go:141] libmachine: Creating Disk image...
	I0819 04:23:25.475864   17349 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0819 04:23:25.476085   17349 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/multinode-746000/disk.qcow2.raw /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/multinode-746000/disk.qcow2
	I0819 04:23:25.485362   17349 main.go:141] libmachine: STDOUT: 
	I0819 04:23:25.485382   17349 main.go:141] libmachine: STDERR: 
	I0819 04:23:25.485420   17349 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/multinode-746000/disk.qcow2 +20000M
	I0819 04:23:25.493261   17349 main.go:141] libmachine: STDOUT: Image resized.
	
	I0819 04:23:25.493285   17349 main.go:141] libmachine: STDERR: 
	I0819 04:23:25.493303   17349 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/multinode-746000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/multinode-746000/disk.qcow2
	I0819 04:23:25.493308   17349 main.go:141] libmachine: Starting QEMU VM...
	I0819 04:23:25.493317   17349 qemu.go:418] Using hvf for hardware acceleration
	I0819 04:23:25.493344   17349 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/multinode-746000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19479-15750/.minikube/machines/multinode-746000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/multinode-746000/qemu.pid -device virtio-net-pci,netdev=net0,mac=6e:57:69:6e:d0:32 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/multinode-746000/disk.qcow2
	I0819 04:23:25.494935   17349 main.go:141] libmachine: STDOUT: 
	I0819 04:23:25.494956   17349 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0819 04:23:25.494974   17349 client.go:171] duration metric: took 234.936333ms to LocalClient.Create
	I0819 04:23:27.497110   17349 start.go:128] duration metric: took 2.26367375s to createHost
	I0819 04:23:27.497185   17349 start.go:83] releasing machines lock for "multinode-746000", held for 2.26381525s
	W0819 04:23:27.497262   17349 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0819 04:23:27.508474   17349 out.go:177] * Deleting "multinode-746000" in qemu2 ...
	W0819 04:23:27.539607   17349 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0819 04:23:27.539635   17349 start.go:729] Will try again in 5 seconds ...
	I0819 04:23:32.541736   17349 start.go:360] acquireMachinesLock for multinode-746000: {Name:mkd71805c0324a991576936a19be749d2702d472 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 04:23:32.542282   17349 start.go:364] duration metric: took 427.208µs to acquireMachinesLock for "multinode-746000"
	I0819 04:23:32.542442   17349 start.go:93] Provisioning new machine with config: &{Name:multinode-746000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:multinode-746000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0819 04:23:32.542804   17349 start.go:125] createHost starting for "" (driver="qemu2")
	I0819 04:23:32.559375   17349 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0819 04:23:32.607306   17349 start.go:159] libmachine.API.Create for "multinode-746000" (driver="qemu2")
	I0819 04:23:32.607359   17349 client.go:168] LocalClient.Create starting
	I0819 04:23:32.607473   17349 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19479-15750/.minikube/certs/ca.pem
	I0819 04:23:32.607534   17349 main.go:141] libmachine: Decoding PEM data...
	I0819 04:23:32.607550   17349 main.go:141] libmachine: Parsing certificate...
	I0819 04:23:32.607609   17349 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19479-15750/.minikube/certs/cert.pem
	I0819 04:23:32.607654   17349 main.go:141] libmachine: Decoding PEM data...
	I0819 04:23:32.607669   17349 main.go:141] libmachine: Parsing certificate...
	I0819 04:23:32.608156   17349 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19479-15750/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19479-15750/.minikube/cache/iso/arm64/minikube-v1.33.1-1723740674-19452-arm64.iso...
	I0819 04:23:32.771361   17349 main.go:141] libmachine: Creating SSH key...
	I0819 04:23:32.864090   17349 main.go:141] libmachine: Creating Disk image...
	I0819 04:23:32.864096   17349 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0819 04:23:32.864301   17349 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/multinode-746000/disk.qcow2.raw /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/multinode-746000/disk.qcow2
	I0819 04:23:32.873466   17349 main.go:141] libmachine: STDOUT: 
	I0819 04:23:32.873490   17349 main.go:141] libmachine: STDERR: 
	I0819 04:23:32.873550   17349 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/multinode-746000/disk.qcow2 +20000M
	I0819 04:23:32.881404   17349 main.go:141] libmachine: STDOUT: Image resized.
	
	I0819 04:23:32.881421   17349 main.go:141] libmachine: STDERR: 
	I0819 04:23:32.881429   17349 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/multinode-746000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/multinode-746000/disk.qcow2
	I0819 04:23:32.881432   17349 main.go:141] libmachine: Starting QEMU VM...
	I0819 04:23:32.881452   17349 qemu.go:418] Using hvf for hardware acceleration
	I0819 04:23:32.881476   17349 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/multinode-746000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19479-15750/.minikube/machines/multinode-746000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/multinode-746000/qemu.pid -device virtio-net-pci,netdev=net0,mac=06:b1:6f:6a:b7:c0 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/multinode-746000/disk.qcow2
	I0819 04:23:32.883029   17349 main.go:141] libmachine: STDOUT: 
	I0819 04:23:32.883045   17349 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0819 04:23:32.883057   17349 client.go:171] duration metric: took 275.695625ms to LocalClient.Create
	I0819 04:23:34.885200   17349 start.go:128] duration metric: took 2.342412166s to createHost
	I0819 04:23:34.885274   17349 start.go:83] releasing machines lock for "multinode-746000", held for 2.34298175s
	W0819 04:23:34.885731   17349 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p multinode-746000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-746000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0819 04:23:34.895310   17349 out.go:201] 
	W0819 04:23:34.903428   17349 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0819 04:23:34.903455   17349 out.go:270] * 
	* 
	W0819 04:23:34.906528   17349 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0819 04:23:34.915318   17349 out.go:201] 

** /stderr **
multinode_test.go:98: failed to start cluster. args "out/minikube-darwin-arm64 start -p multinode-746000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-746000 -n multinode-746000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-746000 -n multinode-746000: exit status 7 (66.712333ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-746000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/FreshStart2Nodes (9.88s)
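
The command line logged at 04:23:25.493 shows the mechanism that fails: socket_vmnet_client connects to /var/run/socket_vmnet and execs qemu-system-aarch64 with "-netdev socket,id=net0,fd=3", i.e. qemu inherits the already-connected descriptor as fd 3. In Go, the equivalent plumbing is exec.Cmd.ExtraFiles, whose entry i becomes fd 3+i in the child. A sketch under that assumption (the qemu arguments are trimmed to the relevant flag, not the full invocation above):

	package main

	import (
		"log"
		"net"
		"os"
		"os/exec"
	)

	// Connects to the socket_vmnet daemon and hands the connected
	// descriptor to a child process as fd 3, the shape behind
	// "-netdev socket,id=net0,fd=3" in the qemu command line.
	func main() {
		conn, err := net.Dial("unix", "/var/run/socket_vmnet")
		if err != nil {
			log.Fatal(err) // "connection refused" is exactly this run's failure
		}
		f, err := conn.(*net.UnixConn).File()
		if err != nil {
			log.Fatal(err)
		}
		cmd := exec.Command("qemu-system-aarch64", "-netdev", "socket,id=net0,fd=3")
		cmd.ExtraFiles = []*os.File{f} // ExtraFiles[0] becomes fd 3 in the child
		if err := cmd.Run(); err != nil {
			log.Fatal(err)
		}
	}
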

TestMultiNode/serial/DeployApp2Nodes (110.14s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-746000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-746000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml: exit status 1 (61.412875ms)

** stderr ** 
	error: cluster "multinode-746000" does not exist

** /stderr **
multinode_test.go:495: failed to create busybox deployment to multinode cluster
multinode_test.go:498: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-746000 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-746000 -- rollout status deployment/busybox: exit status 1 (56.730917ms)

** stderr ** 
	error: no server found for cluster "multinode-746000"

** /stderr **
multinode_test.go:500: failed to deploy busybox to multinode cluster
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-746000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-746000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (56.945708ms)

** stderr ** 
	error: no server found for cluster "multinode-746000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-746000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-746000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (105.054667ms)

** stderr ** 
	error: no server found for cluster "multinode-746000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-746000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-746000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (105.100875ms)

** stderr ** 
	error: no server found for cluster "multinode-746000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-746000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-746000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (104.912167ms)

** stderr ** 
	error: no server found for cluster "multinode-746000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-746000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-746000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (102.606708ms)

** stderr ** 
	error: no server found for cluster "multinode-746000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-746000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-746000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (104.469291ms)

** stderr ** 
	error: no server found for cluster "multinode-746000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-746000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-746000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (105.388125ms)

** stderr ** 
	error: no server found for cluster "multinode-746000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-746000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-746000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (106.10225ms)

** stderr ** 
	error: no server found for cluster "multinode-746000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-746000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-746000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (105.213125ms)

** stderr ** 
	error: no server found for cluster "multinode-746000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-746000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-746000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (104.274334ms)

** stderr ** 
	error: no server found for cluster "multinode-746000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-746000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-746000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (102.225542ms)

** stderr ** 
	error: no server found for cluster "multinode-746000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:524: failed to resolve pod IPs: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:528: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-746000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:528: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-746000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (57.759416ms)

** stderr ** 
	error: no server found for cluster "multinode-746000"

** /stderr **
multinode_test.go:530: failed get Pod names
multinode_test.go:536: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-746000 -- exec  -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-746000 -- exec  -- nslookup kubernetes.io: exit status 1 (58.301708ms)

** stderr ** 
	error: no server found for cluster "multinode-746000"

** /stderr **
multinode_test.go:538: Pod  could not resolve 'kubernetes.io': exit status 1
multinode_test.go:546: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-746000 -- exec  -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-746000 -- exec  -- nslookup kubernetes.default: exit status 1 (57.916291ms)

** stderr ** 
	error: no server found for cluster "multinode-746000"

** /stderr **
multinode_test.go:548: Pod  could not resolve 'kubernetes.default': exit status 1
multinode_test.go:554: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-746000 -- exec  -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-746000 -- exec  -- nslookup kubernetes.default.svc.cluster.local: exit status 1 (57.087666ms)

** stderr ** 
	error: no server found for cluster "multinode-746000"

** /stderr **
multinode_test.go:556: Pod  could not resolve local service (kubernetes.default.svc.cluster.local): exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-746000 -n multinode-746000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-746000 -n multinode-746000: exit status 7 (30.767833ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-746000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/DeployApp2Nodes (110.14s)
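
The 110 seconds spent here are pure retry overhead: multinode_test.go:505 keeps re-running the same kubectl query against a cluster that never came up, until its retry budget is exhausted. The polling shape is roughly the following (a simplified sketch; the interval and deadline are illustrative, not the suite's actual values):

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	// Polls `kubectl get pods` until it succeeds or the deadline passes,
	// mirroring the repeated "failed to retrieve Pod IPs (may be
	// temporary)" attempts in the log above.
	func main() {
		deadline := time.Now().Add(100 * time.Second)
		for time.Now().Before(deadline) {
			out, err := exec.Command("kubectl", "--context", "multinode-746000",
				"get", "pods", "-o", "jsonpath={.items[*].status.podIP}").CombinedOutput()
			if err == nil {
				fmt.Printf("pod IPs: %s\n", out)
				return
			}
			fmt.Println("retrying (may be temporary):", err)
			time.Sleep(10 * time.Second)
		}
		fmt.Println("gave up: cluster never became reachable")
	}
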

TestMultiNode/serial/PingHostFrom2Pods (0.09s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-746000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:564: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-746000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (56.835458ms)

** stderr ** 
	error: no server found for cluster "multinode-746000"

** /stderr **
multinode_test.go:566: failed to get Pod names: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-746000 -n multinode-746000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-746000 -n multinode-746000: exit status 7 (30.679ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-746000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/PingHostFrom2Pods (0.09s)

TestMultiNode/serial/AddNode (0.07s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-darwin-arm64 node add -p multinode-746000 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p multinode-746000 -v 3 --alsologtostderr: exit status 83 (42.594625ms)

-- stdout --
	* The control-plane node multinode-746000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p multinode-746000"

-- /stdout --
** stderr ** 
	I0819 04:25:25.236051   17446 out.go:345] Setting OutFile to fd 1 ...
	I0819 04:25:25.236224   17446 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 04:25:25.236229   17446 out.go:358] Setting ErrFile to fd 2...
	I0819 04:25:25.236231   17446 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 04:25:25.236368   17446 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19479-15750/.minikube/bin
	I0819 04:25:25.236607   17446 mustload.go:65] Loading cluster: multinode-746000
	I0819 04:25:25.236813   17446 config.go:182] Loaded profile config "multinode-746000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0819 04:25:25.240907   17446 out.go:177] * The control-plane node multinode-746000 host is not running: state=Stopped
	I0819 04:25:25.244890   17446 out.go:177]   To start a cluster, run: "minikube start -p multinode-746000"

** /stderr **
multinode_test.go:123: failed to add node to current cluster. args "out/minikube-darwin-arm64 node add -p multinode-746000 -v 3 --alsologtostderr" : exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-746000 -n multinode-746000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-746000 -n multinode-746000: exit status 7 (29.769458ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-746000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/AddNode (0.07s)
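The harness treats exit status 83 as a distinct refusal rather than a crash: `node add` bails out early when the control-plane host is stopped instead of attempting a partial join. A sketch of recovering that code from a failed invocation with only the standard library:

// Sketch: recovering the exit code (83 in this log) from a failed run.
package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-darwin-arm64", "node", "add",
		"-p", "multinode-746000", "-v", "3", "--alsologtostderr")
	err := cmd.Run()
	var ee *exec.ExitError
	if errors.As(err, &ee) {
		// 83 in this run: the control-plane host is not running.
		fmt.Printf("minikube exited with status %d\n", ee.ExitCode())
	}
}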

TestMultiNode/serial/MultiNodeLabels (0.06s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-746000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
multinode_test.go:221: (dbg) Non-zero exit: kubectl --context multinode-746000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]": exit status 1 (26.476125ms)

** stderr ** 
	Error in configuration: context was not found for specified context: multinode-746000

** /stderr **
multinode_test.go:223: failed to 'kubectl get nodes' with args "kubectl --context multinode-746000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": exit status 1
multinode_test.go:230: failed to decode json from label list: args "kubectl --context multinode-746000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": unexpected end of JSON input
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-746000 -n multinode-746000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-746000 -n multinode-746000: exit status 7 (30.657292ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-746000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/MultiNodeLabels (0.06s)
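The secondary error above ("unexpected end of JSON input") is what encoding/json reports when handed zero bytes, which is exactly what a failed kubectl leaves on stdout. A sketch of guarding the decode; the map shape here is a simplification of the real label payload:

// Sketch: guard json.Unmarshal against the empty-output case.
package main

import (
	"encoding/json"
	"fmt"
)

func decodeLabels(out []byte) (map[string]string, error) {
	if len(out) == 0 {
		// Without this guard, Unmarshal reports "unexpected end of JSON input".
		return nil, fmt.Errorf("kubectl produced no output; is the context configured?")
	}
	var labels map[string]string
	if err := json.Unmarshal(out, &labels); err != nil {
		return nil, err
	}
	return labels, nil
}

func main() {
	_, err := decodeLabels(nil)
	fmt.Println(err) // the guard fires instead of the opaque decode error
}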

TestMultiNode/serial/ProfileList (0.08s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
multinode_test.go:166: expected profile "multinode-746000" in json of 'profile list' include 3 nodes but have 1 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"multinode-746000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"multinode-746000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNU
MACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.0\",\"ClusterName\":\"multinode-746000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVer
sion\":\"v1.31.0\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":
\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-746000 -n multinode-746000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-746000 -n multinode-746000: exit status 7 (31.082125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-746000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/ProfileList (0.08s)
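The assertion above counts entries in Config.Nodes for the matching profile; the dumped config shows a single node where three were expected. A minimal sketch of that check, declaring only the fields the assertion needs (the sample input is an abbreviated version of the JSON in the log):

// Sketch: count nodes for one profile in `profile list --output json`.
package main

import (
	"encoding/json"
	"fmt"
)

type profileList struct {
	Valid []struct {
		Name   string `json:"Name"`
		Config struct {
			Nodes []struct {
				Name string `json:"Name"`
			} `json:"Nodes"`
		} `json:"Config"`
	} `json:"valid"`
}

func nodeCount(raw []byte, profile string) (int, error) {
	var pl profileList
	if err := json.Unmarshal(raw, &pl); err != nil {
		return 0, err
	}
	for _, p := range pl.Valid {
		if p.Name == profile {
			return len(p.Config.Nodes), nil
		}
	}
	return 0, fmt.Errorf("profile %q not found", profile)
}

func main() {
	raw := []byte(`{"invalid":[],"valid":[{"Name":"multinode-746000","Config":{"Nodes":[{"Name":""}]}}]}`)
	n, err := nodeCount(raw, "multinode-746000")
	fmt.Println(n, err) // 1 <nil> — the test expected 3
}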

TestMultiNode/serial/CopyFile (0.06s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-746000 status --output json --alsologtostderr
multinode_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-746000 status --output json --alsologtostderr: exit status 7 (30.457042ms)

-- stdout --
	{"Name":"multinode-746000","Host":"Stopped","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Stopped","Worker":false}

-- /stdout --
** stderr ** 
	I0819 04:25:25.443084   17458 out.go:345] Setting OutFile to fd 1 ...
	I0819 04:25:25.443241   17458 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 04:25:25.443244   17458 out.go:358] Setting ErrFile to fd 2...
	I0819 04:25:25.443246   17458 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 04:25:25.443374   17458 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19479-15750/.minikube/bin
	I0819 04:25:25.443503   17458 out.go:352] Setting JSON to true
	I0819 04:25:25.443516   17458 mustload.go:65] Loading cluster: multinode-746000
	I0819 04:25:25.443574   17458 notify.go:220] Checking for updates...
	I0819 04:25:25.443722   17458 config.go:182] Loaded profile config "multinode-746000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0819 04:25:25.443727   17458 status.go:255] checking status of multinode-746000 ...
	I0819 04:25:25.443918   17458 status.go:330] multinode-746000 host status = "Stopped" (err=<nil>)
	I0819 04:25:25.443921   17458 status.go:343] host is not running, skipping remaining checks
	I0819 04:25:25.443924   17458 status.go:257] multinode-746000 status: &{Name:multinode-746000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:191: failed to decode json from status: args "out/minikube-darwin-arm64 -p multinode-746000 status --output json --alsologtostderr": json: cannot unmarshal object into Go value of type []cmd.Status
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-746000 -n multinode-746000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-746000 -n multinode-746000: exit status 7 (30.523583ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-746000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/CopyFile (0.06s)
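The decode failure above is a shape mismatch: with a single node, `status --output json` prints one object (as in the stdout block), while the multinode test unmarshals into a slice ([]cmd.Status). A sketch of a tolerant decode that accepts either form:

// Sketch: accept either a single status object or an array of them.
package main

import (
	"encoding/json"
	"fmt"
)

type status struct {
	Name, Host, Kubelet, APIServer, Kubeconfig string
	Worker                                     bool
}

func decodeStatuses(raw []byte) ([]status, error) {
	var many []status
	if err := json.Unmarshal(raw, &many); err == nil {
		return many, nil
	}
	// Fall back to the single-object form emitted for one-node profiles.
	var one status
	if err := json.Unmarshal(raw, &one); err != nil {
		return nil, err
	}
	return []status{one}, nil
}

func main() {
	raw := []byte(`{"Name":"multinode-746000","Host":"Stopped","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Stopped","Worker":false}`)
	ss, err := decodeStatuses(raw)
	fmt.Println(ss, err)
}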

TestMultiNode/serial/StopNode (0.14s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-746000 node stop m03
multinode_test.go:248: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-746000 node stop m03: exit status 85 (49.659125ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_295f67d8757edd996fe5c1e7ccde72c355ccf4dc_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:250: node stop returned an error. args "out/minikube-darwin-arm64 -p multinode-746000 node stop m03": exit status 85
multinode_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-746000 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-746000 status: exit status 7 (30.324291ms)

-- stdout --
	multinode-746000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-746000 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-746000 status --alsologtostderr: exit status 7 (30.554917ms)

-- stdout --
	multinode-746000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0819 04:25:25.585000   17467 out.go:345] Setting OutFile to fd 1 ...
	I0819 04:25:25.585150   17467 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 04:25:25.585153   17467 out.go:358] Setting ErrFile to fd 2...
	I0819 04:25:25.585155   17467 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 04:25:25.585284   17467 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19479-15750/.minikube/bin
	I0819 04:25:25.585410   17467 out.go:352] Setting JSON to false
	I0819 04:25:25.585424   17467 mustload.go:65] Loading cluster: multinode-746000
	I0819 04:25:25.585469   17467 notify.go:220] Checking for updates...
	I0819 04:25:25.585649   17467 config.go:182] Loaded profile config "multinode-746000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0819 04:25:25.585654   17467 status.go:255] checking status of multinode-746000 ...
	I0819 04:25:25.585853   17467 status.go:330] multinode-746000 host status = "Stopped" (err=<nil>)
	I0819 04:25:25.585857   17467 status.go:343] host is not running, skipping remaining checks
	I0819 04:25:25.585859   17467 status.go:257] multinode-746000 status: &{Name:multinode-746000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:267: incorrect number of running kubelets: args "out/minikube-darwin-arm64 -p multinode-746000 status --alsologtostderr": multinode-746000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-746000 -n multinode-746000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-746000 -n multinode-746000: exit status 7 (30.42ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-746000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StopNode (0.14s)

TestMultiNode/serial/StartAfterStop (56.37s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-746000 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-746000 node start m03 -v=7 --alsologtostderr: exit status 85 (47.327333ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I0819 04:25:25.645594   17471 out.go:345] Setting OutFile to fd 1 ...
	I0819 04:25:25.645952   17471 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 04:25:25.645956   17471 out.go:358] Setting ErrFile to fd 2...
	I0819 04:25:25.645958   17471 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 04:25:25.646124   17471 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19479-15750/.minikube/bin
	I0819 04:25:25.646335   17471 mustload.go:65] Loading cluster: multinode-746000
	I0819 04:25:25.646542   17471 config.go:182] Loaded profile config "multinode-746000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0819 04:25:25.650998   17471 out.go:201] 
	W0819 04:25:25.654872   17471 out.go:270] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	W0819 04:25:25.654876   17471 out.go:270] * 
	* 
	W0819 04:25:25.657086   17471 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0819 04:25:25.660857   17471 out.go:201] 

** /stderr **
multinode_test.go:284: I0819 04:25:25.645594   17471 out.go:345] Setting OutFile to fd 1 ...
I0819 04:25:25.645952   17471 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0819 04:25:25.645956   17471 out.go:358] Setting ErrFile to fd 2...
I0819 04:25:25.645958   17471 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0819 04:25:25.646124   17471 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19479-15750/.minikube/bin
I0819 04:25:25.646335   17471 mustload.go:65] Loading cluster: multinode-746000
I0819 04:25:25.646542   17471 config.go:182] Loaded profile config "multinode-746000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0819 04:25:25.650998   17471 out.go:201] 
W0819 04:25:25.654872   17471 out.go:270] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
W0819 04:25:25.654876   17471 out.go:270] * 
* 
W0819 04:25:25.657086   17471 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
I0819 04:25:25.660857   17471 out.go:201] 

multinode_test.go:285: node start returned an error. args "out/minikube-darwin-arm64 -p multinode-746000 node start m03 -v=7 --alsologtostderr": exit status 85
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-746000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-746000 status -v=7 --alsologtostderr: exit status 7 (30.720959ms)

-- stdout --
	multinode-746000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0819 04:25:25.694008   17473 out.go:345] Setting OutFile to fd 1 ...
	I0819 04:25:25.694146   17473 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 04:25:25.694149   17473 out.go:358] Setting ErrFile to fd 2...
	I0819 04:25:25.694152   17473 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 04:25:25.694289   17473 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19479-15750/.minikube/bin
	I0819 04:25:25.694411   17473 out.go:352] Setting JSON to false
	I0819 04:25:25.694422   17473 mustload.go:65] Loading cluster: multinode-746000
	I0819 04:25:25.694477   17473 notify.go:220] Checking for updates...
	I0819 04:25:25.694623   17473 config.go:182] Loaded profile config "multinode-746000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0819 04:25:25.694628   17473 status.go:255] checking status of multinode-746000 ...
	I0819 04:25:25.694830   17473 status.go:330] multinode-746000 host status = "Stopped" (err=<nil>)
	I0819 04:25:25.694834   17473 status.go:343] host is not running, skipping remaining checks
	I0819 04:25:25.694836   17473 status.go:257] multinode-746000 status: &{Name:multinode-746000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-746000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-746000 status -v=7 --alsologtostderr: exit status 7 (75.880167ms)

-- stdout --
	multinode-746000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0819 04:25:26.421102   17476 out.go:345] Setting OutFile to fd 1 ...
	I0819 04:25:26.421316   17476 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 04:25:26.421321   17476 out.go:358] Setting ErrFile to fd 2...
	I0819 04:25:26.421324   17476 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 04:25:26.421515   17476 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19479-15750/.minikube/bin
	I0819 04:25:26.421669   17476 out.go:352] Setting JSON to false
	I0819 04:25:26.421683   17476 mustload.go:65] Loading cluster: multinode-746000
	I0819 04:25:26.421719   17476 notify.go:220] Checking for updates...
	I0819 04:25:26.421954   17476 config.go:182] Loaded profile config "multinode-746000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0819 04:25:26.421961   17476 status.go:255] checking status of multinode-746000 ...
	I0819 04:25:26.422234   17476 status.go:330] multinode-746000 host status = "Stopped" (err=<nil>)
	I0819 04:25:26.422239   17476 status.go:343] host is not running, skipping remaining checks
	I0819 04:25:26.422242   17476 status.go:257] multinode-746000 status: &{Name:multinode-746000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-746000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-746000 status -v=7 --alsologtostderr: exit status 7 (74.987459ms)

-- stdout --
	multinode-746000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0819 04:25:28.020690   17478 out.go:345] Setting OutFile to fd 1 ...
	I0819 04:25:28.020896   17478 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 04:25:28.020900   17478 out.go:358] Setting ErrFile to fd 2...
	I0819 04:25:28.020904   17478 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 04:25:28.021099   17478 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19479-15750/.minikube/bin
	I0819 04:25:28.021271   17478 out.go:352] Setting JSON to false
	I0819 04:25:28.021287   17478 mustload.go:65] Loading cluster: multinode-746000
	I0819 04:25:28.021327   17478 notify.go:220] Checking for updates...
	I0819 04:25:28.021580   17478 config.go:182] Loaded profile config "multinode-746000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0819 04:25:28.021587   17478 status.go:255] checking status of multinode-746000 ...
	I0819 04:25:28.021859   17478 status.go:330] multinode-746000 host status = "Stopped" (err=<nil>)
	I0819 04:25:28.021865   17478 status.go:343] host is not running, skipping remaining checks
	I0819 04:25:28.021867   17478 status.go:257] multinode-746000 status: &{Name:multinode-746000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-746000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-746000 status -v=7 --alsologtostderr: exit status 7 (74.301333ms)

-- stdout --
	multinode-746000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0819 04:25:29.459180   17480 out.go:345] Setting OutFile to fd 1 ...
	I0819 04:25:29.459393   17480 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 04:25:29.459398   17480 out.go:358] Setting ErrFile to fd 2...
	I0819 04:25:29.459401   17480 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 04:25:29.459569   17480 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19479-15750/.minikube/bin
	I0819 04:25:29.459728   17480 out.go:352] Setting JSON to false
	I0819 04:25:29.459741   17480 mustload.go:65] Loading cluster: multinode-746000
	I0819 04:25:29.459784   17480 notify.go:220] Checking for updates...
	I0819 04:25:29.459986   17480 config.go:182] Loaded profile config "multinode-746000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0819 04:25:29.459993   17480 status.go:255] checking status of multinode-746000 ...
	I0819 04:25:29.460287   17480 status.go:330] multinode-746000 host status = "Stopped" (err=<nil>)
	I0819 04:25:29.460292   17480 status.go:343] host is not running, skipping remaining checks
	I0819 04:25:29.460295   17480 status.go:257] multinode-746000 status: &{Name:multinode-746000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-746000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-746000 status -v=7 --alsologtostderr: exit status 7 (74.939625ms)

-- stdout --
	multinode-746000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0819 04:25:32.546644   17482 out.go:345] Setting OutFile to fd 1 ...
	I0819 04:25:32.546823   17482 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 04:25:32.546827   17482 out.go:358] Setting ErrFile to fd 2...
	I0819 04:25:32.546830   17482 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 04:25:32.547024   17482 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19479-15750/.minikube/bin
	I0819 04:25:32.547190   17482 out.go:352] Setting JSON to false
	I0819 04:25:32.547205   17482 mustload.go:65] Loading cluster: multinode-746000
	I0819 04:25:32.547244   17482 notify.go:220] Checking for updates...
	I0819 04:25:32.547497   17482 config.go:182] Loaded profile config "multinode-746000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0819 04:25:32.547508   17482 status.go:255] checking status of multinode-746000 ...
	I0819 04:25:32.547770   17482 status.go:330] multinode-746000 host status = "Stopped" (err=<nil>)
	I0819 04:25:32.547775   17482 status.go:343] host is not running, skipping remaining checks
	I0819 04:25:32.547778   17482 status.go:257] multinode-746000 status: &{Name:multinode-746000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-746000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-746000 status -v=7 --alsologtostderr: exit status 7 (74.459417ms)

-- stdout --
	multinode-746000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0819 04:25:35.387778   17486 out.go:345] Setting OutFile to fd 1 ...
	I0819 04:25:35.388205   17486 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 04:25:35.388218   17486 out.go:358] Setting ErrFile to fd 2...
	I0819 04:25:35.388222   17486 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 04:25:35.388475   17486 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19479-15750/.minikube/bin
	I0819 04:25:35.388691   17486 out.go:352] Setting JSON to false
	I0819 04:25:35.388710   17486 mustload.go:65] Loading cluster: multinode-746000
	I0819 04:25:35.388836   17486 notify.go:220] Checking for updates...
	I0819 04:25:35.389260   17486 config.go:182] Loaded profile config "multinode-746000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0819 04:25:35.389278   17486 status.go:255] checking status of multinode-746000 ...
	I0819 04:25:35.389556   17486 status.go:330] multinode-746000 host status = "Stopped" (err=<nil>)
	I0819 04:25:35.389562   17486 status.go:343] host is not running, skipping remaining checks
	I0819 04:25:35.389565   17486 status.go:257] multinode-746000 status: &{Name:multinode-746000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-746000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-746000 status -v=7 --alsologtostderr: exit status 7 (78.21975ms)

-- stdout --
	multinode-746000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0819 04:25:44.152583   17488 out.go:345] Setting OutFile to fd 1 ...
	I0819 04:25:44.152773   17488 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 04:25:44.152778   17488 out.go:358] Setting ErrFile to fd 2...
	I0819 04:25:44.152782   17488 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 04:25:44.152948   17488 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19479-15750/.minikube/bin
	I0819 04:25:44.153104   17488 out.go:352] Setting JSON to false
	I0819 04:25:44.153119   17488 mustload.go:65] Loading cluster: multinode-746000
	I0819 04:25:44.153153   17488 notify.go:220] Checking for updates...
	I0819 04:25:44.153372   17488 config.go:182] Loaded profile config "multinode-746000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0819 04:25:44.153379   17488 status.go:255] checking status of multinode-746000 ...
	I0819 04:25:44.153656   17488 status.go:330] multinode-746000 host status = "Stopped" (err=<nil>)
	I0819 04:25:44.153661   17488 status.go:343] host is not running, skipping remaining checks
	I0819 04:25:44.153664   17488 status.go:257] multinode-746000 status: &{Name:multinode-746000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-746000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-746000 status -v=7 --alsologtostderr: exit status 7 (75.45475ms)

-- stdout --
	multinode-746000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0819 04:25:59.353125   17490 out.go:345] Setting OutFile to fd 1 ...
	I0819 04:25:59.353319   17490 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 04:25:59.353323   17490 out.go:358] Setting ErrFile to fd 2...
	I0819 04:25:59.353327   17490 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 04:25:59.353508   17490 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19479-15750/.minikube/bin
	I0819 04:25:59.353674   17490 out.go:352] Setting JSON to false
	I0819 04:25:59.353689   17490 mustload.go:65] Loading cluster: multinode-746000
	I0819 04:25:59.353733   17490 notify.go:220] Checking for updates...
	I0819 04:25:59.353962   17490 config.go:182] Loaded profile config "multinode-746000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0819 04:25:59.353969   17490 status.go:255] checking status of multinode-746000 ...
	I0819 04:25:59.354253   17490 status.go:330] multinode-746000 host status = "Stopped" (err=<nil>)
	I0819 04:25:59.354258   17490 status.go:343] host is not running, skipping remaining checks
	I0819 04:25:59.354261   17490 status.go:257] multinode-746000 status: &{Name:multinode-746000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-746000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-746000 status -v=7 --alsologtostderr: exit status 7 (76.068833ms)

-- stdout --
	multinode-746000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0819 04:26:21.947039   17497 out.go:345] Setting OutFile to fd 1 ...
	I0819 04:26:21.947224   17497 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 04:26:21.947228   17497 out.go:358] Setting ErrFile to fd 2...
	I0819 04:26:21.947231   17497 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 04:26:21.947395   17497 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19479-15750/.minikube/bin
	I0819 04:26:21.947550   17497 out.go:352] Setting JSON to false
	I0819 04:26:21.947565   17497 mustload.go:65] Loading cluster: multinode-746000
	I0819 04:26:21.947609   17497 notify.go:220] Checking for updates...
	I0819 04:26:21.947823   17497 config.go:182] Loaded profile config "multinode-746000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0819 04:26:21.947830   17497 status.go:255] checking status of multinode-746000 ...
	I0819 04:26:21.948104   17497 status.go:330] multinode-746000 host status = "Stopped" (err=<nil>)
	I0819 04:26:21.948109   17497 status.go:343] host is not running, skipping remaining checks
	I0819 04:26:21.948112   17497 status.go:257] multinode-746000 status: &{Name:multinode-746000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:294: failed to run minikube status. args "out/minikube-darwin-arm64 -p multinode-746000 status -v=7 --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-746000 -n multinode-746000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-746000 -n multinode-746000: exit status 7 (32.855625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-746000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StartAfterStop (56.37s)
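The timestamps of the nine status attempts (04:25:25.6 through 04:26:21.9) show the interval between retries growing roughly geometrically, i.e. the harness polls with backoff until it gives up. A sketch of that pattern; the initial delay, cap, and timeout here are illustrative, not minikube's actual values:

// Sketch: poll `minikube status` with exponential backoff until Running.
package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

func waitRunning(profile string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	delay := 500 * time.Millisecond
	for time.Now().Before(deadline) {
		out, _ := exec.Command("out/minikube-darwin-arm64", "-p", profile,
			"status", "--format", "{{.Host}}").Output()
		if strings.TrimSpace(string(out)) == "Running" {
			return nil
		}
		time.Sleep(delay)
		if delay < 30*time.Second {
			delay *= 2 // back off between attempts
		}
	}
	return fmt.Errorf("%s still not running after %s", profile, timeout)
}

func main() {
	fmt.Println(waitRunning("multinode-746000", time.Minute))
}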

TestMultiNode/serial/RestartKeepsNodes (9.06s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-746000
multinode_test.go:321: (dbg) Run:  out/minikube-darwin-arm64 stop -p multinode-746000
multinode_test.go:321: (dbg) Done: out/minikube-darwin-arm64 stop -p multinode-746000: (3.705293833s)
multinode_test.go:326: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-746000 --wait=true -v=8 --alsologtostderr
multinode_test.go:326: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-746000 --wait=true -v=8 --alsologtostderr: exit status 80 (5.219818334s)

-- stdout --
	* [multinode-746000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19479
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19479-15750/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19479-15750/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "multinode-746000" primary control-plane node in "multinode-746000" cluster
	* Restarting existing qemu2 VM for "multinode-746000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "multinode-746000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0819 04:26:25.779044   17521 out.go:345] Setting OutFile to fd 1 ...
	I0819 04:26:25.779488   17521 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 04:26:25.779495   17521 out.go:358] Setting ErrFile to fd 2...
	I0819 04:26:25.779503   17521 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 04:26:25.779693   17521 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19479-15750/.minikube/bin
	I0819 04:26:25.781163   17521 out.go:352] Setting JSON to false
	I0819 04:26:25.800168   17521 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":8753,"bootTime":1724058032,"procs":462,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0819 04:26:25.800233   17521 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0819 04:26:25.805146   17521 out.go:177] * [multinode-746000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0819 04:26:25.813067   17521 out.go:177]   - MINIKUBE_LOCATION=19479
	I0819 04:26:25.813104   17521 notify.go:220] Checking for updates...
	I0819 04:26:25.820148   17521 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19479-15750/kubeconfig
	I0819 04:26:25.823041   17521 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0819 04:26:25.826135   17521 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0819 04:26:25.829100   17521 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19479-15750/.minikube
	I0819 04:26:25.832081   17521 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0819 04:26:25.835395   17521 config.go:182] Loaded profile config "multinode-746000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0819 04:26:25.835446   17521 driver.go:392] Setting default libvirt URI to qemu:///system
	I0819 04:26:25.840102   17521 out.go:177] * Using the qemu2 driver based on existing profile
	I0819 04:26:25.846994   17521 start.go:297] selected driver: qemu2
	I0819 04:26:25.847000   17521 start.go:901] validating driver "qemu2" against &{Name:multinode-746000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kube
rnetesVersion:v1.31.0 ClusterName:multinode-746000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] M
ountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 04:26:25.847049   17521 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0819 04:26:25.849343   17521 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0819 04:26:25.849397   17521 cni.go:84] Creating CNI manager for ""
	I0819 04:26:25.849402   17521 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0819 04:26:25.849451   17521 start.go:340] cluster config:
	{Name:multinode-746000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:multinode-746000 Namespace:default APIServerH
AVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:fals
e DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 04:26:25.853021   17521 iso.go:125] acquiring lock: {Name:mk82d926f4bb778c7e93ba3bc4244459b219e238 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 04:26:25.861100   17521 out.go:177] * Starting "multinode-746000" primary control-plane node in "multinode-746000" cluster
	I0819 04:26:25.865139   17521 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0819 04:26:25.865157   17521 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19479-15750/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0819 04:26:25.865170   17521 cache.go:56] Caching tarball of preloaded images
	I0819 04:26:25.865248   17521 preload.go:172] Found /Users/jenkins/minikube-integration/19479-15750/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0819 04:26:25.865256   17521 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0819 04:26:25.865326   17521 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19479-15750/.minikube/profiles/multinode-746000/config.json ...
	I0819 04:26:25.865762   17521 start.go:360] acquireMachinesLock for multinode-746000: {Name:mkd71805c0324a991576936a19be749d2702d472 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 04:26:25.865797   17521 start.go:364] duration metric: took 28.375µs to acquireMachinesLock for "multinode-746000"
	I0819 04:26:25.865811   17521 start.go:96] Skipping create...Using existing machine configuration
	I0819 04:26:25.865817   17521 fix.go:54] fixHost starting: 
	I0819 04:26:25.865941   17521 fix.go:112] recreateIfNeeded on multinode-746000: state=Stopped err=<nil>
	W0819 04:26:25.865949   17521 fix.go:138] unexpected machine state, will restart: <nil>
	I0819 04:26:25.874071   17521 out.go:177] * Restarting existing qemu2 VM for "multinode-746000" ...
	I0819 04:26:25.878087   17521 qemu.go:418] Using hvf for hardware acceleration
	I0819 04:26:25.878122   17521 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/multinode-746000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19479-15750/.minikube/machines/multinode-746000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/multinode-746000/qemu.pid -device virtio-net-pci,netdev=net0,mac=06:b1:6f:6a:b7:c0 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/multinode-746000/disk.qcow2
	I0819 04:26:25.880267   17521 main.go:141] libmachine: STDOUT: 
	I0819 04:26:25.880288   17521 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0819 04:26:25.880330   17521 fix.go:56] duration metric: took 14.513458ms for fixHost
	I0819 04:26:25.880334   17521 start.go:83] releasing machines lock for "multinode-746000", held for 14.533125ms
	W0819 04:26:25.880341   17521 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0819 04:26:25.880379   17521 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0819 04:26:25.880384   17521 start.go:729] Will try again in 5 seconds ...
	I0819 04:26:30.882471   17521 start.go:360] acquireMachinesLock for multinode-746000: {Name:mkd71805c0324a991576936a19be749d2702d472 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 04:26:30.882899   17521 start.go:364] duration metric: took 285.333µs to acquireMachinesLock for "multinode-746000"
	I0819 04:26:30.883026   17521 start.go:96] Skipping create...Using existing machine configuration
	I0819 04:26:30.883048   17521 fix.go:54] fixHost starting: 
	I0819 04:26:30.883749   17521 fix.go:112] recreateIfNeeded on multinode-746000: state=Stopped err=<nil>
	W0819 04:26:30.883776   17521 fix.go:138] unexpected machine state, will restart: <nil>
	I0819 04:26:30.889249   17521 out.go:177] * Restarting existing qemu2 VM for "multinode-746000" ...
	I0819 04:26:30.893224   17521 qemu.go:418] Using hvf for hardware acceleration
	I0819 04:26:30.893393   17521 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/multinode-746000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19479-15750/.minikube/machines/multinode-746000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/multinode-746000/qemu.pid -device virtio-net-pci,netdev=net0,mac=06:b1:6f:6a:b7:c0 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/multinode-746000/disk.qcow2
	I0819 04:26:30.902347   17521 main.go:141] libmachine: STDOUT: 
	I0819 04:26:30.902407   17521 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0819 04:26:30.902486   17521 fix.go:56] duration metric: took 19.441458ms for fixHost
	I0819 04:26:30.902504   17521 start.go:83] releasing machines lock for "multinode-746000", held for 19.579667ms
	W0819 04:26:30.902676   17521 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p multinode-746000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-746000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0819 04:26:30.910231   17521 out.go:201] 
	W0819 04:26:30.914139   17521 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0819 04:26:30.914166   17521 out.go:270] * 
	* 
	W0819 04:26:30.916371   17521 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0819 04:26:30.924170   17521 out.go:201] 

                                                
                                                
** /stderr **
multinode_test.go:328: failed to run minikube start. args "out/minikube-darwin-arm64 node list -p multinode-746000" : exit status 80
multinode_test.go:331: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-746000
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-746000 -n multinode-746000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-746000 -n multinode-746000: exit status 7 (33.424208ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-746000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (9.06s)
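
Every restart attempt in this block fails at the same step: socket_vmnet_client cannot reach the unix socket at /var/run/socket_vmnet, so the qemu2 driver never gets a network file descriptor and the VM start is aborted. As a minimal sketch (not part of the test suite; the socket path is taken from the log above), the same refusal can be reproduced with a plain unix-socket dial:

// probe_socket_vmnet.go - illustration only: dialing the socket fails with
// "connection refused" when no socket_vmnet daemon is accepting on it, which
// is exactly the driver failure captured in the log above.
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
	if err != nil {
		fmt.Println("socket_vmnet not reachable:", err)
		return
	}
	defer conn.Close()
	fmt.Println("socket_vmnet is accepting connections")
}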

                                                
                                    
TestMultiNode/serial/DeleteNode (0.11s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-746000 node delete m03
multinode_test.go:416: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-746000 node delete m03: exit status 83 (43.048333ms)

                                                
                                                
-- stdout --
	* The control-plane node multinode-746000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p multinode-746000"

                                                
                                                
-- /stdout --
multinode_test.go:418: node delete returned an error. args "out/minikube-darwin-arm64 -p multinode-746000 node delete m03": exit status 83
multinode_test.go:422: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-746000 status --alsologtostderr
multinode_test.go:422: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-746000 status --alsologtostderr: exit status 7 (31.156375ms)

                                                
                                                
-- stdout --
	multinode-746000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0819 04:26:31.112413   17537 out.go:345] Setting OutFile to fd 1 ...
	I0819 04:26:31.112574   17537 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 04:26:31.112577   17537 out.go:358] Setting ErrFile to fd 2...
	I0819 04:26:31.112579   17537 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 04:26:31.112711   17537 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19479-15750/.minikube/bin
	I0819 04:26:31.112826   17537 out.go:352] Setting JSON to false
	I0819 04:26:31.112837   17537 mustload.go:65] Loading cluster: multinode-746000
	I0819 04:26:31.112889   17537 notify.go:220] Checking for updates...
	I0819 04:26:31.113017   17537 config.go:182] Loaded profile config "multinode-746000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0819 04:26:31.113021   17537 status.go:255] checking status of multinode-746000 ...
	I0819 04:26:31.113232   17537 status.go:330] multinode-746000 host status = "Stopped" (err=<nil>)
	I0819 04:26:31.113236   17537 status.go:343] host is not running, skipping remaining checks
	I0819 04:26:31.113241   17537 status.go:257] multinode-746000 status: &{Name:multinode-746000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:424: failed to run minikube status. args "out/minikube-darwin-arm64 -p multinode-746000 status --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-746000 -n multinode-746000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-746000 -n multinode-746000: exit status 7 (30.788625ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-746000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/DeleteNode (0.11s)

                                                
                                    
TestMultiNode/serial/StopMultiNode (3.55s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-746000 stop
multinode_test.go:345: (dbg) Done: out/minikube-darwin-arm64 -p multinode-746000 stop: (3.41105275s)
multinode_test.go:351: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-746000 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-746000 status: exit status 7 (70.012708ms)

                                                
                                                
-- stdout --
	multinode-746000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-746000 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-746000 status --alsologtostderr: exit status 7 (32.737416ms)

                                                
                                                
-- stdout --
	multinode-746000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0819 04:26:34.657860   17563 out.go:345] Setting OutFile to fd 1 ...
	I0819 04:26:34.658000   17563 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 04:26:34.658003   17563 out.go:358] Setting ErrFile to fd 2...
	I0819 04:26:34.658006   17563 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 04:26:34.658128   17563 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19479-15750/.minikube/bin
	I0819 04:26:34.658242   17563 out.go:352] Setting JSON to false
	I0819 04:26:34.658253   17563 mustload.go:65] Loading cluster: multinode-746000
	I0819 04:26:34.658318   17563 notify.go:220] Checking for updates...
	I0819 04:26:34.658455   17563 config.go:182] Loaded profile config "multinode-746000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0819 04:26:34.658461   17563 status.go:255] checking status of multinode-746000 ...
	I0819 04:26:34.658662   17563 status.go:330] multinode-746000 host status = "Stopped" (err=<nil>)
	I0819 04:26:34.658665   17563 status.go:343] host is not running, skipping remaining checks
	I0819 04:26:34.658667   17563 status.go:257] multinode-746000 status: &{Name:multinode-746000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:364: incorrect number of stopped hosts: args "out/minikube-darwin-arm64 -p multinode-746000 status --alsologtostderr": multinode-746000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
multinode_test.go:368: incorrect number of stopped kubelets: args "out/minikube-darwin-arm64 -p multinode-746000 status --alsologtostderr": multinode-746000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-746000 -n multinode-746000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-746000 -n multinode-746000: exit status 7 (30.726417ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-746000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StopMultiNode (3.55s)
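
The assertions at multinode_test.go:364 and multinode_test.go:368 are count checks: a multinode cluster should emit one "host: Stopped" and one "kubelet: Stopped" stanza per node, but because the earlier restart failures left only the single control-plane profile, the status output above contains just one of each. A rough sketch of that style of check, assuming a stanza count compared against the expected node count (the actual test logic may differ):

// Assumed shape of the failing assertion, not the real test source.
package main

import (
	"fmt"
	"strings"
)

func main() {
	// Status output as captured above: only the control-plane stanza survived.
	statusOut := "multinode-746000\ntype: Control Plane\nhost: Stopped\nkubelet: Stopped\napiserver: Stopped\nkubeconfig: Stopped\n"
	wantNodes := 2 // the test cluster was created with one worker node
	if got := strings.Count(statusOut, "host: Stopped"); got != wantNodes {
		fmt.Printf("incorrect number of stopped hosts: got %d, want %d\n", got, wantNodes)
	}
	if got := strings.Count(statusOut, "kubelet: Stopped"); got != wantNodes {
		fmt.Printf("incorrect number of stopped kubelets: got %d, want %d\n", got, wantNodes)
	}
}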

                                                
                                    
TestMultiNode/serial/RestartMultiNode (5.25s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-746000 --wait=true -v=8 --alsologtostderr --driver=qemu2 
multinode_test.go:376: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-746000 --wait=true -v=8 --alsologtostderr --driver=qemu2 : exit status 80 (5.181542292s)

                                                
                                                
-- stdout --
	* [multinode-746000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19479
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19479-15750/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19479-15750/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "multinode-746000" primary control-plane node in "multinode-746000" cluster
	* Restarting existing qemu2 VM for "multinode-746000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "multinode-746000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0819 04:26:34.718394   17567 out.go:345] Setting OutFile to fd 1 ...
	I0819 04:26:34.718507   17567 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 04:26:34.718510   17567 out.go:358] Setting ErrFile to fd 2...
	I0819 04:26:34.718513   17567 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 04:26:34.718637   17567 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19479-15750/.minikube/bin
	I0819 04:26:34.719650   17567 out.go:352] Setting JSON to false
	I0819 04:26:34.735585   17567 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":8762,"bootTime":1724058032,"procs":464,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0819 04:26:34.735657   17567 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0819 04:26:34.740675   17567 out.go:177] * [multinode-746000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0819 04:26:34.747552   17567 out.go:177]   - MINIKUBE_LOCATION=19479
	I0819 04:26:34.747617   17567 notify.go:220] Checking for updates...
	I0819 04:26:34.755529   17567 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19479-15750/kubeconfig
	I0819 04:26:34.758533   17567 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0819 04:26:34.761552   17567 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0819 04:26:34.764527   17567 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19479-15750/.minikube
	I0819 04:26:34.767594   17567 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0819 04:26:34.770795   17567 config.go:182] Loaded profile config "multinode-746000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0819 04:26:34.771061   17567 driver.go:392] Setting default libvirt URI to qemu:///system
	I0819 04:26:34.775463   17567 out.go:177] * Using the qemu2 driver based on existing profile
	I0819 04:26:34.782568   17567 start.go:297] selected driver: qemu2
	I0819 04:26:34.782577   17567 start.go:901] validating driver "qemu2" against &{Name:multinode-746000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:multinode-746000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 04:26:34.782660   17567 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0819 04:26:34.784930   17567 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0819 04:26:34.784959   17567 cni.go:84] Creating CNI manager for ""
	I0819 04:26:34.784964   17567 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0819 04:26:34.785015   17567 start.go:340] cluster config:
	{Name:multinode-746000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:multinode-746000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 04:26:34.788516   17567 iso.go:125] acquiring lock: {Name:mk82d926f4bb778c7e93ba3bc4244459b219e238 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 04:26:34.796480   17567 out.go:177] * Starting "multinode-746000" primary control-plane node in "multinode-746000" cluster
	I0819 04:26:34.799426   17567 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0819 04:26:34.799444   17567 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19479-15750/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0819 04:26:34.799458   17567 cache.go:56] Caching tarball of preloaded images
	I0819 04:26:34.799527   17567 preload.go:172] Found /Users/jenkins/minikube-integration/19479-15750/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0819 04:26:34.799533   17567 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0819 04:26:34.799603   17567 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19479-15750/.minikube/profiles/multinode-746000/config.json ...
	I0819 04:26:34.800050   17567 start.go:360] acquireMachinesLock for multinode-746000: {Name:mkd71805c0324a991576936a19be749d2702d472 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 04:26:34.800079   17567 start.go:364] duration metric: took 22.875µs to acquireMachinesLock for "multinode-746000"
	I0819 04:26:34.800090   17567 start.go:96] Skipping create...Using existing machine configuration
	I0819 04:26:34.800096   17567 fix.go:54] fixHost starting: 
	I0819 04:26:34.800220   17567 fix.go:112] recreateIfNeeded on multinode-746000: state=Stopped err=<nil>
	W0819 04:26:34.800228   17567 fix.go:138] unexpected machine state, will restart: <nil>
	I0819 04:26:34.804541   17567 out.go:177] * Restarting existing qemu2 VM for "multinode-746000" ...
	I0819 04:26:34.811507   17567 qemu.go:418] Using hvf for hardware acceleration
	I0819 04:26:34.811546   17567 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/multinode-746000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19479-15750/.minikube/machines/multinode-746000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/multinode-746000/qemu.pid -device virtio-net-pci,netdev=net0,mac=06:b1:6f:6a:b7:c0 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/multinode-746000/disk.qcow2
	I0819 04:26:34.813705   17567 main.go:141] libmachine: STDOUT: 
	I0819 04:26:34.813726   17567 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0819 04:26:34.813758   17567 fix.go:56] duration metric: took 13.662625ms for fixHost
	I0819 04:26:34.813762   17567 start.go:83] releasing machines lock for "multinode-746000", held for 13.678167ms
	W0819 04:26:34.813769   17567 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0819 04:26:34.813822   17567 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0819 04:26:34.813827   17567 start.go:729] Will try again in 5 seconds ...
	I0819 04:26:39.814868   17567 start.go:360] acquireMachinesLock for multinode-746000: {Name:mkd71805c0324a991576936a19be749d2702d472 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 04:26:39.815301   17567 start.go:364] duration metric: took 329.708µs to acquireMachinesLock for "multinode-746000"
	I0819 04:26:39.815432   17567 start.go:96] Skipping create...Using existing machine configuration
	I0819 04:26:39.815452   17567 fix.go:54] fixHost starting: 
	I0819 04:26:39.816296   17567 fix.go:112] recreateIfNeeded on multinode-746000: state=Stopped err=<nil>
	W0819 04:26:39.816326   17567 fix.go:138] unexpected machine state, will restart: <nil>
	I0819 04:26:39.821764   17567 out.go:177] * Restarting existing qemu2 VM for "multinode-746000" ...
	I0819 04:26:39.828696   17567 qemu.go:418] Using hvf for hardware acceleration
	I0819 04:26:39.828930   17567 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/multinode-746000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19479-15750/.minikube/machines/multinode-746000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/multinode-746000/qemu.pid -device virtio-net-pci,netdev=net0,mac=06:b1:6f:6a:b7:c0 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/multinode-746000/disk.qcow2
	I0819 04:26:39.837781   17567 main.go:141] libmachine: STDOUT: 
	I0819 04:26:39.837854   17567 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0819 04:26:39.837938   17567 fix.go:56] duration metric: took 22.486292ms for fixHost
	I0819 04:26:39.837959   17567 start.go:83] releasing machines lock for "multinode-746000", held for 22.629875ms
	W0819 04:26:39.838190   17567 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p multinode-746000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-746000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0819 04:26:39.844764   17567 out.go:201] 
	W0819 04:26:39.848784   17567 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0819 04:26:39.848806   17567 out.go:270] * 
	* 
	W0819 04:26:39.851365   17567 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0819 04:26:39.858679   17567 out.go:201] 

                                                
                                                
** /stderr **
multinode_test.go:378: failed to start cluster. args "out/minikube-darwin-arm64 start -p multinode-746000 --wait=true -v=8 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-746000 -n multinode-746000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-746000 -n multinode-746000: exit status 7 (71.012375ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-746000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/RestartMultiNode (5.25s)
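
The restart path shown here is a fixed two-attempt loop: fixHost fails, minikube logs "StartHost failed, but will try again", sleeps five seconds (start.go:729), retries once, and only then exits with GUEST_PROVISION. A sketch of that control flow as it appears in the log (assumed shape, not the actual minikube source):

// Simulates the two-attempt retry visible above; startHost always returns
// the same error the driver reported, so the second attempt fails too.
package main

import (
	"errors"
	"fmt"
	"time"
)

func startHost() error {
	return errors.New(`driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused`)
}

func main() {
	if err := startHost(); err != nil {
		fmt.Println("! StartHost failed, but will try again:", err)
		time.Sleep(5 * time.Second) // matches "Will try again in 5 seconds ..."
		if err := startHost(); err != nil {
			fmt.Println("* Failed to start qemu2 VM:", err)
		}
	}
}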

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (21.53s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-746000
multinode_test.go:464: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-746000-m01 --driver=qemu2 
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-746000-m01 --driver=qemu2 : exit status 80 (10.471646542s)

                                                
                                                
-- stdout --
	* [multinode-746000-m01] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19479
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19479-15750/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19479-15750/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "multinode-746000-m01" primary control-plane node in "multinode-746000-m01" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-746000-m01" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-746000-m01" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-746000-m02 --driver=qemu2 
multinode_test.go:472: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-746000-m02 --driver=qemu2 : exit status 80 (10.835650625s)

                                                
                                                
-- stdout --
	* [multinode-746000-m02] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19479
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19479-15750/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19479-15750/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "multinode-746000-m02" primary control-plane node in "multinode-746000-m02" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-746000-m02" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-746000-m02" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:474: failed to start profile. args "out/minikube-darwin-arm64 start -p multinode-746000-m02 --driver=qemu2 " : exit status 80
multinode_test.go:479: (dbg) Run:  out/minikube-darwin-arm64 node add -p multinode-746000
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p multinode-746000: exit status 83 (79.31625ms)

                                                
                                                
-- stdout --
	* The control-plane node multinode-746000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p multinode-746000"

                                                
                                                
-- /stdout --
multinode_test.go:484: (dbg) Run:  out/minikube-darwin-arm64 delete -p multinode-746000-m02
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-746000 -n multinode-746000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-746000 -n multinode-746000: exit status 7 (31.509792ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-746000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/ValidateNameConflict (21.53s)
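
Three distinct exit codes recur across these multinode failures: 80 when guest provisioning fails, 83 when a command is refused because the control-plane host is not running, and 7 from status against a stopped host. The meanings below are inferred from the surrounding log lines, not taken from minikube's source:

// Quick reference for the exit codes seen in this report; the descriptions
// paraphrase the adjacent log output rather than minikube's own reason codes.
package main

import "fmt"

var exitCodes = map[int]string{
	7:  "status run against a stopped host (may be ok, per helpers_test.go:239)",
	80: "GUEST_PROVISION: the driver could not start the VM",
	83: "control-plane host not running; minikube suggests starting the cluster first",
}

func main() {
	for code, meaning := range exitCodes {
		fmt.Printf("exit %d: %s\n", code, meaning)
	}
}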

                                                
                                    
TestPreload (10.12s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-darwin-arm64 start -p test-preload-806000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4
preload_test.go:44: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p test-preload-806000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4: exit status 80 (9.969340209s)

                                                
                                                
-- stdout --
	* [test-preload-806000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19479
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19479-15750/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19479-15750/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "test-preload-806000" primary control-plane node in "test-preload-806000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "test-preload-806000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0819 04:27:01.630669   17619 out.go:345] Setting OutFile to fd 1 ...
	I0819 04:27:01.630792   17619 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 04:27:01.630796   17619 out.go:358] Setting ErrFile to fd 2...
	I0819 04:27:01.630798   17619 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 04:27:01.630925   17619 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19479-15750/.minikube/bin
	I0819 04:27:01.631920   17619 out.go:352] Setting JSON to false
	I0819 04:27:01.649112   17619 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":8789,"bootTime":1724058032,"procs":460,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0819 04:27:01.649189   17619 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0819 04:27:01.655797   17619 out.go:177] * [test-preload-806000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0819 04:27:01.663708   17619 out.go:177]   - MINIKUBE_LOCATION=19479
	I0819 04:27:01.663756   17619 notify.go:220] Checking for updates...
	I0819 04:27:01.672575   17619 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19479-15750/kubeconfig
	I0819 04:27:01.675713   17619 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0819 04:27:01.679770   17619 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0819 04:27:01.681167   17619 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19479-15750/.minikube
	I0819 04:27:01.684691   17619 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0819 04:27:01.688087   17619 config.go:182] Loaded profile config "multinode-746000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0819 04:27:01.688138   17619 driver.go:392] Setting default libvirt URI to qemu:///system
	I0819 04:27:01.689801   17619 out.go:177] * Using the qemu2 driver based on user configuration
	I0819 04:27:01.696738   17619 start.go:297] selected driver: qemu2
	I0819 04:27:01.696745   17619 start.go:901] validating driver "qemu2" against <nil>
	I0819 04:27:01.696750   17619 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0819 04:27:01.699123   17619 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0819 04:27:01.703535   17619 out.go:177] * Automatically selected the socket_vmnet network
	I0819 04:27:01.706790   17619 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0819 04:27:01.706809   17619 cni.go:84] Creating CNI manager for ""
	I0819 04:27:01.706817   17619 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0819 04:27:01.706829   17619 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0819 04:27:01.706866   17619 start.go:340] cluster config:
	{Name:test-preload-806000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-806000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 04:27:01.710718   17619 iso.go:125] acquiring lock: {Name:mk82d926f4bb778c7e93ba3bc4244459b219e238 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 04:27:01.718720   17619 out.go:177] * Starting "test-preload-806000" primary control-plane node in "test-preload-806000" cluster
	I0819 04:27:01.722660   17619 preload.go:131] Checking if preload exists for k8s version v1.24.4 and runtime docker
	I0819 04:27:01.722738   17619 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19479-15750/.minikube/profiles/test-preload-806000/config.json ...
	I0819 04:27:01.722753   17619 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19479-15750/.minikube/profiles/test-preload-806000/config.json: {Name:mk2d73fe1c1dedf7ed97ab8b4494ed5fcf46718d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 04:27:01.722750   17619 cache.go:107] acquiring lock: {Name:mkdb4a901b1d383102161da2a6c0c3197f0db761 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 04:27:01.722750   17619 cache.go:107] acquiring lock: {Name:mkd4475594843dd4de7429152fdf77c97e08efb4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 04:27:01.722757   17619 cache.go:107] acquiring lock: {Name:mk40fb6659e522f4d05158cf14ae2801faeba703 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 04:27:01.722782   17619 cache.go:107] acquiring lock: {Name:mka353155a5617893c139c57195b52a3199b99e4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 04:27:01.722928   17619 cache.go:107] acquiring lock: {Name:mke52694fcb5c0afa77ca8e91b7f84fb518ed6ba Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 04:27:01.723006   17619 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0819 04:27:01.723030   17619 start.go:360] acquireMachinesLock for test-preload-806000: {Name:mkd71805c0324a991576936a19be749d2702d472 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 04:27:01.723038   17619 image.go:135] retrieving image: registry.k8s.io/pause:3.7
	I0819 04:27:01.723070   17619 start.go:364] duration metric: took 29.667µs to acquireMachinesLock for "test-preload-806000"
	I0819 04:27:01.723084   17619 start.go:93] Provisioning new machine with config: &{Name:test-preload-806000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-806000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0819 04:27:01.723115   17619 start.go:125] createHost starting for "" (driver="qemu2")
	I0819 04:27:01.723101   17619 cache.go:107] acquiring lock: {Name:mk6b898dcd99bb96005cf5d76b0c2512dba653ac Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 04:27:01.723009   17619 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.24.4
	I0819 04:27:01.723133   17619 cache.go:107] acquiring lock: {Name:mk25604b2e2df38142234187801934140a6c4efe Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 04:27:01.723181   17619 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0819 04:27:01.723242   17619 cache.go:107] acquiring lock: {Name:mk742958d3f3523b395cef7a190237a6c3513b6e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 04:27:01.723582   17619 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0819 04:27:01.723598   17619 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.24.4
	I0819 04:27:01.723583   17619 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0819 04:27:01.727697   17619 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0819 04:27:01.728299   17619 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.24.4
	I0819 04:27:01.735661   17619 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0819 04:27:01.735747   17619 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.4
	I0819 04:27:01.735789   17619 image.go:178] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0819 04:27:01.735852   17619 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0819 04:27:01.735931   17619 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.4
	I0819 04:27:01.735995   17619 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0819 04:27:01.736134   17619 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0819 04:27:01.736920   17619 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.4
	I0819 04:27:01.746310   17619 start.go:159] libmachine.API.Create for "test-preload-806000" (driver="qemu2")
	I0819 04:27:01.746332   17619 client.go:168] LocalClient.Create starting
	I0819 04:27:01.746399   17619 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19479-15750/.minikube/certs/ca.pem
	I0819 04:27:01.746429   17619 main.go:141] libmachine: Decoding PEM data...
	I0819 04:27:01.746438   17619 main.go:141] libmachine: Parsing certificate...
	I0819 04:27:01.746480   17619 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19479-15750/.minikube/certs/cert.pem
	I0819 04:27:01.746504   17619 main.go:141] libmachine: Decoding PEM data...
	I0819 04:27:01.746514   17619 main.go:141] libmachine: Parsing certificate...
	I0819 04:27:01.746865   17619 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19479-15750/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19479-15750/.minikube/cache/iso/arm64/minikube-v1.33.1-1723740674-19452-arm64.iso...
	I0819 04:27:02.005234   17619 main.go:141] libmachine: Creating SSH key...
	I0819 04:27:02.069926   17619 main.go:141] libmachine: Creating Disk image...
	I0819 04:27:02.069946   17619 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0819 04:27:02.070193   17619 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/test-preload-806000/disk.qcow2.raw /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/test-preload-806000/disk.qcow2
	I0819 04:27:02.079672   17619 main.go:141] libmachine: STDOUT: 
	I0819 04:27:02.079686   17619 main.go:141] libmachine: STDERR: 
	I0819 04:27:02.079730   17619 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/test-preload-806000/disk.qcow2 +20000M
	I0819 04:27:02.088399   17619 main.go:141] libmachine: STDOUT: Image resized.
	
	I0819 04:27:02.088413   17619 main.go:141] libmachine: STDERR: 
	I0819 04:27:02.088424   17619 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/test-preload-806000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/test-preload-806000/disk.qcow2
	I0819 04:27:02.088428   17619 main.go:141] libmachine: Starting QEMU VM...
	I0819 04:27:02.088436   17619 qemu.go:418] Using hvf for hardware acceleration
	I0819 04:27:02.088458   17619 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/test-preload-806000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19479-15750/.minikube/machines/test-preload-806000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/test-preload-806000/qemu.pid -device virtio-net-pci,netdev=net0,mac=c2:b4:b5:8a:ab:7b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/test-preload-806000/disk.qcow2
	I0819 04:27:02.090300   17619 main.go:141] libmachine: STDOUT: 
	I0819 04:27:02.090323   17619 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0819 04:27:02.090340   17619 client.go:171] duration metric: took 344.013583ms to LocalClient.Create
	I0819 04:27:02.125615   17619 cache.go:162] opening:  /Users/jenkins/minikube-integration/19479-15750/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0819 04:27:02.133902   17619 cache.go:162] opening:  /Users/jenkins/minikube-integration/19479-15750/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I0819 04:27:02.134498   17619 cache.go:162] opening:  /Users/jenkins/minikube-integration/19479-15750/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4
	W0819 04:27:02.138109   17619 image.go:283] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0819 04:27:02.138130   17619 cache.go:162] opening:  /Users/jenkins/minikube-integration/19479-15750/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0819 04:27:02.163682   17619 cache.go:162] opening:  /Users/jenkins/minikube-integration/19479-15750/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4
	I0819 04:27:02.217424   17619 cache.go:162] opening:  /Users/jenkins/minikube-integration/19479-15750/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4
	I0819 04:27:02.270880   17619 cache.go:162] opening:  /Users/jenkins/minikube-integration/19479-15750/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4
	I0819 04:27:02.280245   17619 cache.go:157] /Users/jenkins/minikube-integration/19479-15750/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 exists
	I0819 04:27:02.280268   17619 cache.go:96] cache image "registry.k8s.io/pause:3.7" -> "/Users/jenkins/minikube-integration/19479-15750/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7" took 557.495834ms
	I0819 04:27:02.280289   17619 cache.go:80] save to tar file registry.k8s.io/pause:3.7 -> /Users/jenkins/minikube-integration/19479-15750/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 succeeded
	W0819 04:27:02.511204   17619 image.go:283] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0819 04:27:02.511295   17619 cache.go:162] opening:  /Users/jenkins/minikube-integration/19479-15750/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0819 04:27:02.763239   17619 cache.go:157] /Users/jenkins/minikube-integration/19479-15750/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0819 04:27:02.763280   17619 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/19479-15750/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 1.040554166s
	I0819 04:27:02.763306   17619 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/19479-15750/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0819 04:27:03.185638   17619 cache.go:157] /Users/jenkins/minikube-integration/19479-15750/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 exists
	I0819 04:27:03.185684   17619 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.8.6" -> "/Users/jenkins/minikube-integration/19479-15750/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6" took 1.462807041s
	I0819 04:27:03.185731   17619 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.8.6 -> /Users/jenkins/minikube-integration/19479-15750/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 succeeded
	I0819 04:27:04.090500   17619 start.go:128] duration metric: took 2.367416875s to createHost
	I0819 04:27:04.090551   17619 start.go:83] releasing machines lock for "test-preload-806000", held for 2.367526125s
	W0819 04:27:04.090620   17619 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0819 04:27:04.101515   17619 out.go:177] * Deleting "test-preload-806000" in qemu2 ...
	W0819 04:27:04.132388   17619 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0819 04:27:04.132415   17619 start.go:729] Will try again in 5 seconds ...
	I0819 04:27:04.194341   17619 cache.go:157] /Users/jenkins/minikube-integration/19479-15750/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4 exists
	I0819 04:27:04.194433   17619 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.24.4" -> "/Users/jenkins/minikube-integration/19479-15750/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4" took 2.471419292s
	I0819 04:27:04.194463   17619 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.24.4 -> /Users/jenkins/minikube-integration/19479-15750/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4 succeeded
	I0819 04:27:06.433602   17619 cache.go:157] /Users/jenkins/minikube-integration/19479-15750/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4 exists
	I0819 04:27:06.433651   17619 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.24.4" -> "/Users/jenkins/minikube-integration/19479-15750/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4" took 4.711001583s
	I0819 04:27:06.433680   17619 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.24.4 -> /Users/jenkins/minikube-integration/19479-15750/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4 succeeded
	I0819 04:27:08.073292   17619 cache.go:157] /Users/jenkins/minikube-integration/19479-15750/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4 exists
	I0819 04:27:08.073344   17619 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.24.4" -> "/Users/jenkins/minikube-integration/19479-15750/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4" took 6.350740209s
	I0819 04:27:08.073367   17619 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.24.4 -> /Users/jenkins/minikube-integration/19479-15750/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4 succeeded
	I0819 04:27:09.132574   17619 start.go:360] acquireMachinesLock for test-preload-806000: {Name:mkd71805c0324a991576936a19be749d2702d472 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 04:27:09.133015   17619 start.go:364] duration metric: took 358.292µs to acquireMachinesLock for "test-preload-806000"
	I0819 04:27:09.133128   17619 start.go:93] Provisioning new machine with config: &{Name:test-preload-806000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-806000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0819 04:27:09.133366   17619 start.go:125] createHost starting for "" (driver="qemu2")
	I0819 04:27:09.141100   17619 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0819 04:27:09.195055   17619 start.go:159] libmachine.API.Create for "test-preload-806000" (driver="qemu2")
	I0819 04:27:09.195105   17619 client.go:168] LocalClient.Create starting
	I0819 04:27:09.195234   17619 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19479-15750/.minikube/certs/ca.pem
	I0819 04:27:09.195300   17619 main.go:141] libmachine: Decoding PEM data...
	I0819 04:27:09.195325   17619 main.go:141] libmachine: Parsing certificate...
	I0819 04:27:09.195412   17619 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19479-15750/.minikube/certs/cert.pem
	I0819 04:27:09.195457   17619 main.go:141] libmachine: Decoding PEM data...
	I0819 04:27:09.195475   17619 main.go:141] libmachine: Parsing certificate...
	I0819 04:27:09.196002   17619 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19479-15750/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19479-15750/.minikube/cache/iso/arm64/minikube-v1.33.1-1723740674-19452-arm64.iso...
	I0819 04:27:09.269761   17619 cache.go:157] /Users/jenkins/minikube-integration/19479-15750/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4 exists
	I0819 04:27:09.269781   17619 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.24.4" -> "/Users/jenkins/minikube-integration/19479-15750/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4" took 7.546863292s
	I0819 04:27:09.269787   17619 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.24.4 -> /Users/jenkins/minikube-integration/19479-15750/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4 succeeded
	I0819 04:27:09.417564   17619 main.go:141] libmachine: Creating SSH key...
	I0819 04:27:09.498605   17619 main.go:141] libmachine: Creating Disk image...
	I0819 04:27:09.498611   17619 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0819 04:27:09.498832   17619 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/test-preload-806000/disk.qcow2.raw /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/test-preload-806000/disk.qcow2
	I0819 04:27:09.508021   17619 main.go:141] libmachine: STDOUT: 
	I0819 04:27:09.508039   17619 main.go:141] libmachine: STDERR: 
	I0819 04:27:09.508094   17619 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/test-preload-806000/disk.qcow2 +20000M
	I0819 04:27:09.516272   17619 main.go:141] libmachine: STDOUT: Image resized.
	
	I0819 04:27:09.516302   17619 main.go:141] libmachine: STDERR: 
	I0819 04:27:09.516317   17619 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/test-preload-806000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/test-preload-806000/disk.qcow2
	I0819 04:27:09.516331   17619 main.go:141] libmachine: Starting QEMU VM...
	I0819 04:27:09.516340   17619 qemu.go:418] Using hvf for hardware acceleration
	I0819 04:27:09.516374   17619 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/test-preload-806000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19479-15750/.minikube/machines/test-preload-806000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/test-preload-806000/qemu.pid -device virtio-net-pci,netdev=net0,mac=da:bd:1c:48:84:7f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/test-preload-806000/disk.qcow2
	I0819 04:27:09.518115   17619 main.go:141] libmachine: STDOUT: 
	I0819 04:27:09.518132   17619 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0819 04:27:09.518146   17619 client.go:171] duration metric: took 323.030791ms to LocalClient.Create
	I0819 04:27:10.897515   17619 cache.go:157] /Users/jenkins/minikube-integration/19479-15750/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 exists
	I0819 04:27:10.897582   17619 cache.go:96] cache image "registry.k8s.io/etcd:3.5.3-0" -> "/Users/jenkins/minikube-integration/19479-15750/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0" took 9.174692042s
	I0819 04:27:10.897609   17619 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.3-0 -> /Users/jenkins/minikube-integration/19479-15750/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 succeeded
	I0819 04:27:10.897676   17619 cache.go:87] Successfully saved all images to host disk.
	I0819 04:27:11.520329   17619 start.go:128] duration metric: took 2.386979167s to createHost
	I0819 04:27:11.520387   17619 start.go:83] releasing machines lock for "test-preload-806000", held for 2.387388958s
	W0819 04:27:11.520662   17619 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p test-preload-806000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p test-preload-806000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0819 04:27:11.536141   17619 out.go:201] 
	W0819 04:27:11.540201   17619 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0819 04:27:11.540235   17619 out.go:270] * 
	* 
	W0819 04:27:11.542816   17619 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0819 04:27:11.555092   17619 out.go:201] 

** /stderr **
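Every qemu2 VM creation in this run dies at the same call: socket_vmnet_client gets "Connection refused" dialing /var/run/socket_vmnet, which means the socket_vmnet daemon was not listening on the build host. A minimal triage sketch, using the paths from this run's config; the launchd label and Homebrew service name are assumptions, not taken from the log:

	# Is anything serving the socket the client dials?
	ls -l /var/run/socket_vmnet
	pgrep -fl socket_vmnet
	# make-install layout, launchd-managed (label assumed):
	sudo launchctl list | grep -i vmnet
	# Homebrew-managed install (service name assumed):
	sudo brew services restart socket_vmnet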
preload_test.go:46: out/minikube-darwin-arm64 start -p test-preload-806000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4 failed: exit status 80
panic.go:626: *** TestPreload FAILED at 2024-08-19 04:27:11.573616 -0700 PDT m=+679.887658251
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p test-preload-806000 -n test-preload-806000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p test-preload-806000 -n test-preload-806000: exit status 7 (66.387917ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "test-preload-806000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "test-preload-806000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p test-preload-806000
--- FAIL: TestPreload (10.12s)
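Separate from the VM failure, the cache step above logged "arch mismatch: want arm64 got amd64. fixing" for coredns and storage-provisioner; the subsequent "save to tar file ... succeeded" lines show this is self-correcting, as minikube re-fetches the right platform. If it recurs, the platforms a tag actually publishes can be checked from the host, a sketch assuming a Docker client with manifest support:

	# List the platforms published for the coredns tag used in this run
	docker manifest inspect registry.k8s.io/coredns/coredns:v1.8.6 | grep -A 2 '"platform"'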

TestScheduledStopUnix (10.15s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-darwin-arm64 start -p scheduled-stop-086000 --memory=2048 --driver=qemu2 
scheduled_stop_test.go:128: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p scheduled-stop-086000 --memory=2048 --driver=qemu2 : exit status 80 (9.996966042s)

-- stdout --
	* [scheduled-stop-086000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19479
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19479-15750/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19479-15750/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "scheduled-stop-086000" primary control-plane node in "scheduled-stop-086000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "scheduled-stop-086000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p scheduled-stop-086000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
scheduled_stop_test.go:130: starting minikube: exit status 80

-- stdout --
	* [scheduled-stop-086000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19479
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19479-15750/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19479-15750/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "scheduled-stop-086000" primary control-plane node in "scheduled-stop-086000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "scheduled-stop-086000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p scheduled-stop-086000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
panic.go:626: *** TestScheduledStopUnix FAILED at 2024-08-19 04:27:21.716563 -0700 PDT m=+690.030836251
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p scheduled-stop-086000 -n scheduled-stop-086000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p scheduled-stop-086000 -n scheduled-stop-086000: exit status 7 (68.897958ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "scheduled-stop-086000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "scheduled-stop-086000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p scheduled-stop-086000
--- FAIL: TestScheduledStopUnix (10.15s)
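Each of these failures aborts in roughly 10 seconds at host creation, before Kubernetes is involved, so any single profile reproduces the problem without the test harness. A sketch using the same binary and driver flags the suite uses (the profile name "repro" is arbitrary):

	out/minikube-darwin-arm64 start -p repro --driver=qemu2 --alsologtostderr
	out/minikube-darwin-arm64 delete -p repro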

TestSkaffold (12.67s)

=== RUN   TestSkaffold
skaffold_test.go:59: (dbg) Run:  /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/skaffold.exe659838308 version
skaffold_test.go:59: (dbg) Done: /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/skaffold.exe659838308 version: (1.0720755s)
skaffold_test.go:63: skaffold version: v2.13.1
skaffold_test.go:66: (dbg) Run:  out/minikube-darwin-arm64 start -p skaffold-486000 --memory=2600 --driver=qemu2 
skaffold_test.go:66: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p skaffold-486000 --memory=2600 --driver=qemu2 : exit status 80 (9.937511959s)

-- stdout --
	* [skaffold-486000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19479
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19479-15750/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19479-15750/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "skaffold-486000" primary control-plane node in "skaffold-486000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "skaffold-486000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p skaffold-486000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
skaffold_test.go:68: starting minikube: exit status 80

-- stdout --
	* [skaffold-486000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19479
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19479-15750/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19479-15750/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "skaffold-486000" primary control-plane node in "skaffold-486000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "skaffold-486000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p skaffold-486000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
panic.go:626: *** TestSkaffold FAILED at 2024-08-19 04:27:34.392406 -0700 PDT m=+702.706967209
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p skaffold-486000 -n skaffold-486000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p skaffold-486000 -n skaffold-486000: exit status 7 (62.169625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "skaffold-486000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "skaffold-486000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p skaffold-486000
--- FAIL: TestSkaffold (12.67s)

TestRunningBinaryUpgrade (590.16s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.26.0.3854505483 start -p running-upgrade-038000 --memory=2200 --vm-driver=qemu2 
version_upgrade_test.go:120: (dbg) Done: /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.26.0.3854505483 start -p running-upgrade-038000 --memory=2200 --vm-driver=qemu2 : (52.10551425s)
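Note the contrast: the old v1.26.0 binary creates its VM successfully while every fresh create in this run fails, which is consistent with the profile's saved config (dumped later in this capture) showing Network: and SocketVMnetPath: empty and the user-mode 10.0.2.15 guest address, i.e. the profile predates the socket_vmnet network. The config the second start reuses can be inspected directly; the path is taken from the log below and the grep pattern is illustrative:

	grep -iE '"(network|socketvmnetpath)"' \
	  /Users/jenkins/minikube-integration/19479-15750/.minikube/profiles/running-upgrade-038000/config.json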
version_upgrade_test.go:130: (dbg) Run:  out/minikube-darwin-arm64 start -p running-upgrade-038000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:130: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p running-upgrade-038000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (8m23.770086167s)

-- stdout --
	* [running-upgrade-038000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19479
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19479-15750/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19479-15750/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.31.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.0
	* Using the qemu2 driver based on existing profile
	* Starting "running-upgrade-038000" primary control-plane node in "running-upgrade-038000" cluster
	* Updating the running qemu2 "running-upgrade-038000" VM ...
	* Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: storage-provisioner
	
	

-- /stdout --
** stderr ** 
	I0819 04:29:10.510905   17996 out.go:345] Setting OutFile to fd 1 ...
	I0819 04:29:10.511043   17996 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 04:29:10.511046   17996 out.go:358] Setting ErrFile to fd 2...
	I0819 04:29:10.511048   17996 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 04:29:10.511187   17996 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19479-15750/.minikube/bin
	I0819 04:29:10.512226   17996 out.go:352] Setting JSON to false
	I0819 04:29:10.528711   17996 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":8918,"bootTime":1724058032,"procs":462,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0819 04:29:10.528782   17996 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0819 04:29:10.534117   17996 out.go:177] * [running-upgrade-038000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0819 04:29:10.542124   17996 out.go:177]   - MINIKUBE_LOCATION=19479
	I0819 04:29:10.542158   17996 notify.go:220] Checking for updates...
	I0819 04:29:10.549112   17996 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19479-15750/kubeconfig
	I0819 04:29:10.553121   17996 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0819 04:29:10.556172   17996 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0819 04:29:10.559104   17996 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19479-15750/.minikube
	I0819 04:29:10.562162   17996 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0819 04:29:10.565432   17996 config.go:182] Loaded profile config "running-upgrade-038000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0819 04:29:10.569088   17996 out.go:177] * Kubernetes 1.31.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.0
	I0819 04:29:10.570310   17996 driver.go:392] Setting default libvirt URI to qemu:///system
	I0819 04:29:10.574099   17996 out.go:177] * Using the qemu2 driver based on existing profile
	I0819 04:29:10.580963   17996 start.go:297] selected driver: qemu2
	I0819 04:29:10.580969   17996 start.go:901] validating driver "qemu2" against &{Name:running-upgrade-038000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:53188 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-038000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0819 04:29:10.581031   17996 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0819 04:29:10.583471   17996 cni.go:84] Creating CNI manager for ""
	I0819 04:29:10.583499   17996 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0819 04:29:10.583530   17996 start.go:340] cluster config:
	{Name:running-upgrade-038000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:53188 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-038000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0819 04:29:10.583575   17996 iso.go:125] acquiring lock: {Name:mk82d926f4bb778c7e93ba3bc4244459b219e238 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 04:29:10.591100   17996 out.go:177] * Starting "running-upgrade-038000" primary control-plane node in "running-upgrade-038000" cluster
	I0819 04:29:10.595062   17996 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0819 04:29:10.595079   17996 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19479-15750/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4
	I0819 04:29:10.595084   17996 cache.go:56] Caching tarball of preloaded images
	I0819 04:29:10.595134   17996 preload.go:172] Found /Users/jenkins/minikube-integration/19479-15750/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0819 04:29:10.595140   17996 cache.go:59] Finished verifying existence of preloaded tar for v1.24.1 on docker
	I0819 04:29:10.595191   17996 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19479-15750/.minikube/profiles/running-upgrade-038000/config.json ...
	I0819 04:29:10.595602   17996 start.go:360] acquireMachinesLock for running-upgrade-038000: {Name:mkd71805c0324a991576936a19be749d2702d472 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 04:29:10.595631   17996 start.go:364] duration metric: took 23.083µs to acquireMachinesLock for "running-upgrade-038000"
	I0819 04:29:10.595641   17996 start.go:96] Skipping create...Using existing machine configuration
	I0819 04:29:10.595647   17996 fix.go:54] fixHost starting: 
	I0819 04:29:10.596279   17996 fix.go:112] recreateIfNeeded on running-upgrade-038000: state=Running err=<nil>
	W0819 04:29:10.596288   17996 fix.go:138] unexpected machine state, will restart: <nil>
	I0819 04:29:10.605073   17996 out.go:177] * Updating the running qemu2 "running-upgrade-038000" VM ...
	I0819 04:29:10.609161   17996 machine.go:93] provisionDockerMachine start ...
	I0819 04:29:10.609223   17996 main.go:141] libmachine: Using SSH client type: native
	I0819 04:29:10.609370   17996 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102a205a0] 0x102a22e00 <nil>  [] 0s} localhost 53156 <nil> <nil>}
	I0819 04:29:10.609375   17996 main.go:141] libmachine: About to run SSH command:
	hostname
	I0819 04:29:10.661210   17996 main.go:141] libmachine: SSH cmd err, output: <nil>: running-upgrade-038000
	
	I0819 04:29:10.661220   17996 buildroot.go:166] provisioning hostname "running-upgrade-038000"
	I0819 04:29:10.661265   17996 main.go:141] libmachine: Using SSH client type: native
	I0819 04:29:10.661359   17996 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102a205a0] 0x102a22e00 <nil>  [] 0s} localhost 53156 <nil> <nil>}
	I0819 04:29:10.661367   17996 main.go:141] libmachine: About to run SSH command:
	sudo hostname running-upgrade-038000 && echo "running-upgrade-038000" | sudo tee /etc/hostname
	I0819 04:29:10.715127   17996 main.go:141] libmachine: SSH cmd err, output: <nil>: running-upgrade-038000
	
	I0819 04:29:10.715181   17996 main.go:141] libmachine: Using SSH client type: native
	I0819 04:29:10.715303   17996 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102a205a0] 0x102a22e00 <nil>  [] 0s} localhost 53156 <nil> <nil>}
	I0819 04:29:10.715311   17996 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\srunning-upgrade-038000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 running-upgrade-038000/g' /etc/hosts;
				else 
					echo '127.0.1.1 running-upgrade-038000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0819 04:29:10.767038   17996 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0819 04:29:10.767050   17996 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19479-15750/.minikube CaCertPath:/Users/jenkins/minikube-integration/19479-15750/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19479-15750/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19479-15750/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19479-15750/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19479-15750/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19479-15750/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19479-15750/.minikube}
	I0819 04:29:10.767063   17996 buildroot.go:174] setting up certificates
	I0819 04:29:10.767070   17996 provision.go:84] configureAuth start
	I0819 04:29:10.767075   17996 provision.go:143] copyHostCerts
	I0819 04:29:10.767144   17996 exec_runner.go:144] found /Users/jenkins/minikube-integration/19479-15750/.minikube/key.pem, removing ...
	I0819 04:29:10.767148   17996 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19479-15750/.minikube/key.pem
	I0819 04:29:10.767269   17996 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19479-15750/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19479-15750/.minikube/key.pem (1675 bytes)
	I0819 04:29:10.767444   17996 exec_runner.go:144] found /Users/jenkins/minikube-integration/19479-15750/.minikube/ca.pem, removing ...
	I0819 04:29:10.767447   17996 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19479-15750/.minikube/ca.pem
	I0819 04:29:10.767491   17996 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19479-15750/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19479-15750/.minikube/ca.pem (1082 bytes)
	I0819 04:29:10.767604   17996 exec_runner.go:144] found /Users/jenkins/minikube-integration/19479-15750/.minikube/cert.pem, removing ...
	I0819 04:29:10.767607   17996 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19479-15750/.minikube/cert.pem
	I0819 04:29:10.767651   17996 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19479-15750/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19479-15750/.minikube/cert.pem (1123 bytes)
	I0819 04:29:10.767752   17996 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19479-15750/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19479-15750/.minikube/certs/ca-key.pem org=jenkins.running-upgrade-038000 san=[127.0.0.1 localhost minikube running-upgrade-038000]
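The provisioner issues a server certificate signed by the profile CA carrying the SANs listed above. Not minikube's code path, but an equivalent openssl sketch of the same operation (filenames assumed, run inside the profile's certs directory):

	# key + CSR for the server cert
	openssl req -new -newkey rsa:2048 -nodes -keyout server-key.pem \
	  -out server.csr -subj "/O=jenkins.running-upgrade-038000"
	# sign with the profile CA, attaching the SANs from the log line above
	openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial \
	  -out server.pem -days 365 \
	  -extfile <(printf 'subjectAltName=IP:127.0.0.1,DNS:localhost,DNS:minikube,DNS:running-upgrade-038000')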
	I0819 04:29:10.829097   17996 provision.go:177] copyRemoteCerts
	I0819 04:29:10.829126   17996 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0819 04:29:10.829133   17996 sshutil.go:53] new ssh client: &{IP:localhost Port:53156 SSHKeyPath:/Users/jenkins/minikube-integration/19479-15750/.minikube/machines/running-upgrade-038000/id_rsa Username:docker}
	I0819 04:29:10.855471   17996 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19479-15750/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0819 04:29:10.862627   17996 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0819 04:29:10.870753   17996 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0819 04:29:10.877110   17996 provision.go:87] duration metric: took 110.037125ms to configureAuth
	I0819 04:29:10.877120   17996 buildroot.go:189] setting minikube options for container-runtime
	I0819 04:29:10.877231   17996 config.go:182] Loaded profile config "running-upgrade-038000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0819 04:29:10.877269   17996 main.go:141] libmachine: Using SSH client type: native
	I0819 04:29:10.877356   17996 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102a205a0] 0x102a22e00 <nil>  [] 0s} localhost 53156 <nil> <nil>}
	I0819 04:29:10.877361   17996 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0819 04:29:10.926969   17996 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0819 04:29:10.926977   17996 buildroot.go:70] root file system type: tmpfs
	I0819 04:29:10.927023   17996 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0819 04:29:10.927066   17996 main.go:141] libmachine: Using SSH client type: native
	I0819 04:29:10.927169   17996 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102a205a0] 0x102a22e00 <nil>  [] 0s} localhost 53156 <nil> <nil>}
	I0819 04:29:10.927202   17996 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0819 04:29:10.979609   17996 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0819 04:29:10.979683   17996 main.go:141] libmachine: Using SSH client type: native
	I0819 04:29:10.979803   17996 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102a205a0] 0x102a22e00 <nil>  [] 0s} localhost 53156 <nil> <nil>}
	I0819 04:29:10.979814   17996 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0819 04:29:11.033105   17996 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0819 04:29:11.033112   17996 machine.go:96] duration metric: took 423.955208ms to provisionDockerMachine
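The unit update above is deliberately idempotent: the rendered unit is written to docker.service.new, diffed against the live unit, and only swapped in (with a daemon-reload, enable, and restart) when the two differ, so an unchanged configuration never restarts Docker. A minimal Go sketch of that diff-then-move pattern, assuming a local shell; the helper name is illustrative, not minikube's:

    package main

    import (
        "fmt"
        "os/exec"
    )

    // updateUnit swaps newPath into livePath only when they differ.
    // `diff -u` exits 0 on identical files, so the || branch (move,
    // reload, restart) runs only when there is an actual change.
    func updateUnit(livePath, newPath string) error {
        cmd := fmt.Sprintf(
            "sudo diff -u %s %s || { sudo mv %s %s; sudo systemctl daemon-reload && sudo systemctl restart docker; }",
            livePath, newPath, newPath, livePath)
        return exec.Command("/bin/bash", "-c", cmd).Run()
    }

    func main() {
        err := updateUnit("/lib/systemd/system/docker.service",
            "/lib/systemd/system/docker.service.new")
        fmt.Println("unit update:", err)
    }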
	I0819 04:29:11.033117   17996 start.go:293] postStartSetup for "running-upgrade-038000" (driver="qemu2")
	I0819 04:29:11.033123   17996 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0819 04:29:11.033168   17996 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0819 04:29:11.033177   17996 sshutil.go:53] new ssh client: &{IP:localhost Port:53156 SSHKeyPath:/Users/jenkins/minikube-integration/19479-15750/.minikube/machines/running-upgrade-038000/id_rsa Username:docker}
	I0819 04:29:11.059778   17996 ssh_runner.go:195] Run: cat /etc/os-release
	I0819 04:29:11.061038   17996 info.go:137] Remote host: Buildroot 2021.02.12
	I0819 04:29:11.061046   17996 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19479-15750/.minikube/addons for local assets ...
	I0819 04:29:11.061110   17996 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19479-15750/.minikube/files for local assets ...
	I0819 04:29:11.061195   17996 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19479-15750/.minikube/files/etc/ssl/certs/162402.pem -> 162402.pem in /etc/ssl/certs
	I0819 04:29:11.061285   17996 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0819 04:29:11.066634   17996 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19479-15750/.minikube/files/etc/ssl/certs/162402.pem --> /etc/ssl/certs/162402.pem (1708 bytes)
	I0819 04:29:11.073388   17996 start.go:296] duration metric: took 40.266417ms for postStartSetup
	I0819 04:29:11.073402   17996 fix.go:56] duration metric: took 477.767333ms for fixHost
	I0819 04:29:11.073440   17996 main.go:141] libmachine: Using SSH client type: native
	I0819 04:29:11.073550   17996 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102a205a0] 0x102a22e00 <nil>  [] 0s} localhost 53156 <nil> <nil>}
	I0819 04:29:11.073556   17996 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0819 04:29:11.125231   17996 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724066950.785291305
	
	I0819 04:29:11.125239   17996 fix.go:216] guest clock: 1724066950.785291305
	I0819 04:29:11.125243   17996 fix.go:229] Guest: 2024-08-19 04:29:10.785291305 -0700 PDT Remote: 2024-08-19 04:29:11.073403 -0700 PDT m=+0.582576043 (delta=-288.111695ms)
	I0819 04:29:11.125255   17996 fix.go:200] guest clock delta is within tolerance: -288.111695ms
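fix.go compares the guest's `date +%s.%N` output against the host clock and skips resynchronization when the skew is small, as with the -288ms delta above. A sketch of that tolerance check; the 1s threshold is an assumed value for illustration, not minikube's actual constant:

    package main

    import (
        "fmt"
        "time"
    )

    // withinTolerance reports whether the guest clock is close enough to
    // the host clock to leave it alone.
    func withinTolerance(guest, host time.Time, tolerance time.Duration) bool {
        delta := guest.Sub(host)
        if delta < 0 {
            delta = -delta
        }
        return delta <= tolerance
    }

    func main() {
        guest := time.Unix(1724066950, 785291305) // parsed from `date +%s.%N`
        fmt.Println("delta ok:", withinTolerance(guest, time.Now(), time.Second))
    }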
	I0819 04:29:11.125257   17996 start.go:83] releasing machines lock for "running-upgrade-038000", held for 529.634167ms
	I0819 04:29:11.125316   17996 ssh_runner.go:195] Run: cat /version.json
	I0819 04:29:11.125323   17996 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0819 04:29:11.125324   17996 sshutil.go:53] new ssh client: &{IP:localhost Port:53156 SSHKeyPath:/Users/jenkins/minikube-integration/19479-15750/.minikube/machines/running-upgrade-038000/id_rsa Username:docker}
	I0819 04:29:11.125337   17996 sshutil.go:53] new ssh client: &{IP:localhost Port:53156 SSHKeyPath:/Users/jenkins/minikube-integration/19479-15750/.minikube/machines/running-upgrade-038000/id_rsa Username:docker}
	W0819 04:29:11.125862   17996 sshutil.go:64] dial failure (will retry): dial tcp [::1]:53156: connect: connection refused
	I0819 04:29:11.125889   17996 retry.go:31] will retry after 180.293927ms: dial tcp [::1]:53156: connect: connection refused
	W0819 04:29:11.335536   17996 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0819 04:29:11.335606   17996 ssh_runner.go:195] Run: systemctl --version
	I0819 04:29:11.337482   17996 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0819 04:29:11.339154   17996 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0819 04:29:11.339186   17996 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0819 04:29:11.341839   17996 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0819 04:29:11.346226   17996 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
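The find/sed pipeline above pins every "subnet" value in the bridge and podman CNI conflists to the pod CIDR 10.244.0.0/16. The same rewrite sketched in Go, with the regex simplified relative to the sed expressions in the log:

    package main

    import (
        "fmt"
        "regexp"
    )

    // rewriteSubnet replaces any existing "subnet" value in a CNI
    // conflist with the cluster pod CIDR.
    func rewriteSubnet(conflist []byte, podCIDR string) []byte {
        re := regexp.MustCompile(`"subnet":\s*"[^"]*"`)
        return re.ReplaceAll(conflist, []byte(fmt.Sprintf(`"subnet": %q`, podCIDR)))
    }

    func main() {
        in := []byte(`{"ipam": {"subnet": "10.88.0.0/16"}}`)
        fmt.Println(string(rewriteSubnet(in, "10.244.0.0/16")))
    }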
	I0819 04:29:11.346236   17996 start.go:495] detecting cgroup driver to use...
	I0819 04:29:11.346334   17996 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0819 04:29:11.351625   17996 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.7"|' /etc/containerd/config.toml"
	I0819 04:29:11.354487   17996 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0819 04:29:11.357744   17996 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0819 04:29:11.357772   17996 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0819 04:29:11.361264   17996 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0819 04:29:11.364798   17996 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0819 04:29:11.368303   17996 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0819 04:29:11.371307   17996 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0819 04:29:11.374277   17996 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0819 04:29:11.377674   17996 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0819 04:29:11.381232   17996 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0819 04:29:11.384803   17996 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0819 04:29:11.387502   17996 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0819 04:29:11.390198   17996 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 04:29:11.467749   17996 ssh_runner.go:195] Run: sudo systemctl restart containerd
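The sed edits above switch containerd to the cgroupfs driver by forcing SystemdCgroup = false in /etc/containerd/config.toml before the daemon-reload and restart. A sketch of that single toggle as an in-place text edit (illustrative helper, not minikube's code):

    package main

    import (
        "fmt"
        "regexp"
    )

    // setSystemdCgroup rewrites the SystemdCgroup line in a containerd
    // config, preserving its indentation; false selects cgroupfs.
    func setSystemdCgroup(config []byte, enabled bool) []byte {
        re := regexp.MustCompile(`(?m)^(\s*)SystemdCgroup = .*$`)
        return re.ReplaceAll(config, []byte(fmt.Sprintf("${1}SystemdCgroup = %t", enabled)))
    }

    func main() {
        in := []byte("  SystemdCgroup = true\n")
        fmt.Print(string(setSystemdCgroup(in, false)))
    }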
	I0819 04:29:11.474609   17996 start.go:495] detecting cgroup driver to use...
	I0819 04:29:11.474683   17996 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0819 04:29:11.484466   17996 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0819 04:29:11.489142   17996 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0819 04:29:11.497205   17996 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0819 04:29:11.502427   17996 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0819 04:29:11.507426   17996 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0819 04:29:11.512924   17996 ssh_runner.go:195] Run: which cri-dockerd
	I0819 04:29:11.514275   17996 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0819 04:29:11.516985   17996 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0819 04:29:11.522032   17996 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0819 04:29:11.618810   17996 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0819 04:29:11.700366   17996 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0819 04:29:11.700433   17996 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0819 04:29:11.705795   17996 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 04:29:11.797260   17996 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0819 04:29:14.537615   17996 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.740398833s)
	I0819 04:29:14.537672   17996 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0819 04:29:14.542503   17996 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0819 04:29:14.548858   17996 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0819 04:29:14.554569   17996 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0819 04:29:14.640414   17996 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0819 04:29:14.725859   17996 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 04:29:14.790116   17996 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0819 04:29:14.796585   17996 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0819 04:29:14.801361   17996 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 04:29:14.870833   17996 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0819 04:29:14.908965   17996 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0819 04:29:14.909038   17996 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0819 04:29:14.911797   17996 start.go:563] Will wait 60s for crictl version
	I0819 04:29:14.911846   17996 ssh_runner.go:195] Run: which crictl
	I0819 04:29:14.913227   17996 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0819 04:29:14.925472   17996 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.16
	RuntimeApiVersion:  1.41.0
	I0819 04:29:14.925535   17996 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0819 04:29:14.938531   17996 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0819 04:29:14.960147   17996 out.go:235] * Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	I0819 04:29:14.960263   17996 ssh_runner.go:195] Run: grep 10.0.2.2	host.minikube.internal$ /etc/hosts
	I0819 04:29:14.961727   17996 kubeadm.go:883] updating cluster {Name:running-upgrade-038000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:53188 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-038000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s} ...
	I0819 04:29:14.961771   17996 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0819 04:29:14.961805   17996 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0819 04:29:14.975465   17996 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0819 04:29:14.975473   17996 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0819 04:29:14.975513   17996 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0819 04:29:14.978365   17996 ssh_runner.go:195] Run: which lz4
	I0819 04:29:14.979714   17996 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0819 04:29:14.981007   17996 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0819 04:29:14.981016   17996 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19479-15750/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (359514331 bytes)
	I0819 04:29:15.929144   17996 docker.go:649] duration metric: took 949.480834ms to copy over tarball
	I0819 04:29:15.929204   17996 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0819 04:29:17.074108   17996 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.144916083s)
	I0819 04:29:17.074124   17996 ssh_runner.go:146] rm: /preloaded.tar.lz4
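The stat-then-scp sequence above transfers the ~360 MB preload tarball only when the guest-side existence check fails, then extracts it into /var with lz4 and removes it. A sketch of the check-before-copy step, using a local cp in place of scp for self-containedness:

    package main

    import (
        "fmt"
        "os"
        "os/exec"
    )

    // ensurePreload copies the preload tarball only when it is not
    // already present at dst, mirroring the existence check in the log.
    func ensurePreload(src, dst string) error {
        if _, err := os.Stat(dst); err == nil {
            return nil // already there: skip the large transfer
        }
        return exec.Command("cp", src, dst).Run()
    }

    func main() {
        err := ensurePreload("preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4",
            "/preloaded.tar.lz4")
        fmt.Println("ensurePreload:", err)
    }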
	I0819 04:29:17.090048   17996 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0819 04:29:17.093629   17996 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2523 bytes)
	I0819 04:29:17.098664   17996 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 04:29:17.166173   17996 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0819 04:29:18.390469   17996 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.22430825s)
	I0819 04:29:18.390558   17996 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0819 04:29:18.409056   17996 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	<none>:<none>
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0819 04:29:18.409066   17996 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0819 04:29:18.409077   17996 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.1 registry.k8s.io/kube-controller-manager:v1.24.1 registry.k8s.io/kube-scheduler:v1.24.1 registry.k8s.io/kube-proxy:v1.24.1 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0819 04:29:18.412941   17996 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0819 04:29:18.414846   17996 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.24.1
	I0819 04:29:18.416143   17996 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0819 04:29:18.416287   17996 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.24.1
	I0819 04:29:18.419257   17996 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.1
	I0819 04:29:18.419289   17996 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0819 04:29:18.421231   17996 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.24.1
	I0819 04:29:18.421653   17996 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.1
	I0819 04:29:18.422386   17996 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0819 04:29:18.422480   17996 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0819 04:29:18.423974   17996 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.1
	I0819 04:29:18.423993   17996 image.go:135] retrieving image: registry.k8s.io/pause:3.7
	I0819 04:29:18.424953   17996 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0819 04:29:18.424979   17996 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0819 04:29:18.425799   17996 image.go:178] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0819 04:29:18.426415   17996 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0819 04:29:18.854726   17996 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.1
	I0819 04:29:18.859608   17996 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.1
	I0819 04:29:18.863137   17996 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.1
	I0819 04:29:18.879675   17996 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.1
	I0819 04:29:18.883997   17996 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.1" does not exist at hash "7c5896a75862a8bc252122185a929cec1393db2c525f6440137d4fbf46bbf6f9" in container runtime
	I0819 04:29:18.884022   17996 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.24.1
	I0819 04:29:18.884066   17996 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.24.1
	I0819 04:29:18.885496   17996 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I0819 04:29:18.886067   17996 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I0819 04:29:18.906428   17996 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.1" needs transfer: "registry.k8s.io/kube-proxy:v1.24.1" does not exist at hash "fcbd620bbac080b658602c597709b377cb2b3fec134a097a27f94cba9b2ed2fa" in container runtime
	I0819 04:29:18.906454   17996 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.24.1
	I0819 04:29:18.906504   17996 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.24.1
	I0819 04:29:18.910855   17996 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.1" does not exist at hash "f61bbe9259d7caa580deb6c8e4bfd1780c7b5887efe3aa3adc7cc74f68a27c1b" in container runtime
	I0819 04:29:18.910876   17996 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0819 04:29:18.910926   17996 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.24.1
	I0819 04:29:18.916032   17996 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19479-15750/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1
	I0819 04:29:18.916667   17996 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.1" does not exist at hash "000c19baf6bba51ff7ae5449f4c7a16d9190cef0263f58070dbf62cea9c4982f" in container runtime
	I0819 04:29:18.916684   17996 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.24.1
	I0819 04:29:18.916731   17996 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.24.1
	W0819 04:29:18.920354   17996 image.go:283] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0819 04:29:18.920476   17996 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0819 04:29:18.928077   17996 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550" in container runtime
	I0819 04:29:18.928099   17996 docker.go:337] Removing image: registry.k8s.io/pause:3.7
	I0819 04:29:18.928077   17996 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "a9a710bb96df080e6b9c720eb85dc5b832ff84abf77263548d74fedec6466a5a" in container runtime
	I0819 04:29:18.928157   17996 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.7
	I0819 04:29:18.928213   17996 docker.go:337] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0819 04:29:18.928243   17996 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.5.3-0
	I0819 04:29:18.937595   17996 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19479-15750/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1
	I0819 04:29:18.951657   17996 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19479-15750/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1
	I0819 04:29:18.959662   17996 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19479-15750/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1
	I0819 04:29:18.960147   17996 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "6af7f860a8197bfa3fdb7dec2061aa33870253e87a1e91c492d55b8a4fd38d14" in container runtime
	I0819 04:29:18.960163   17996 docker.go:337] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0819 04:29:18.960207   17996 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0819 04:29:18.968433   17996 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19479-15750/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I0819 04:29:18.968453   17996 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19479-15750/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0819 04:29:18.968562   17996 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I0819 04:29:18.968563   17996 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0
	I0819 04:29:18.971045   17996 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19479-15750/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0819 04:29:18.971127   17996 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I0819 04:29:18.972667   17996 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.8.6: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.8.6': No such file or directory
	I0819 04:29:18.972680   17996 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19479-15750/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 --> /var/lib/minikube/images/coredns_v1.8.6 (12318720 bytes)
	I0819 04:29:18.972797   17996 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.5.3-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.5.3-0': No such file or directory
	I0819 04:29:18.972809   17996 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19479-15750/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 --> /var/lib/minikube/images/etcd_3.5.3-0 (81117184 bytes)
	I0819 04:29:18.972867   17996 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.7: stat -c "%s %y" /var/lib/minikube/images/pause_3.7: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.7': No such file or directory
	I0819 04:29:18.972879   17996 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19479-15750/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 --> /var/lib/minikube/images/pause_3.7 (268288 bytes)
	I0819 04:29:18.988149   17996 docker.go:304] Loading image: /var/lib/minikube/images/pause_3.7
	I0819 04:29:18.988168   17996 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.7 | docker load"
	I0819 04:29:19.092951   17996 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19479-15750/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 from cache
	I0819 04:29:19.092972   17996 docker.go:304] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0819 04:29:19.092982   17996 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.8.6 | docker load"
	W0819 04:29:19.123259   17996 image.go:283] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0819 04:29:19.123382   17996 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0819 04:29:19.179317   17996 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19479-15750/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I0819 04:29:19.179371   17996 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I0819 04:29:19.179393   17996 docker.go:337] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0819 04:29:19.179447   17996 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0819 04:29:19.307387   17996 docker.go:304] Loading image: /var/lib/minikube/images/etcd_3.5.3-0
	I0819 04:29:19.307410   17996 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/etcd_3.5.3-0 | docker load"
	I0819 04:29:20.537880   17996 ssh_runner.go:235] Completed: /bin/bash -c "sudo cat /var/lib/minikube/images/etcd_3.5.3-0 | docker load": (1.230449416s)
	I0819 04:29:20.537934   17996 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19479-15750/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 from cache
	I0819 04:29:20.538229   17996 ssh_runner.go:235] Completed: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.358799833s)
	I0819 04:29:20.538243   17996 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19479-15750/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0819 04:29:20.538562   17996 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0819 04:29:20.542450   17996 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I0819 04:29:20.542533   17996 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19479-15750/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I0819 04:29:20.600805   17996 docker.go:304] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0819 04:29:20.600820   17996 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load"
	I0819 04:29:20.836836   17996 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19479-15750/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0819 04:29:20.836875   17996 cache_images.go:92] duration metric: took 2.4278425s to LoadCachedImages
	W0819 04:29:20.836905   17996 out.go:270] X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/19479-15750/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/19479-15750/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1: no such file or directory
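Each "needs transfer" decision above comes from comparing the runtime's image ID (docker image inspect --format {{.Id}}) against the expected hash; a mismatch triggers docker rmi plus a load from the cache file. A sketch of that comparison (docker prints IDs with a sha256: prefix while the log shows bare hashes, hence the suffix check):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // needsTransfer reports whether the image in the runtime differs
    // from the expected content hash.
    func needsTransfer(image, wantID string) bool {
        out, err := exec.Command("docker", "image", "inspect",
            "--format", "{{.Id}}", image).Output()
        if err != nil {
            return true // image absent in the runtime
        }
        id := strings.TrimSpace(string(out)) // "sha256:<hash>"
        return !strings.HasSuffix(id, wantID)
    }

    func main() {
        fmt.Println("needs transfer:", needsTransfer("registry.k8s.io/pause:3.7",
            "e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550"))
    }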
	I0819 04:29:20.836909   17996 kubeadm.go:934] updating node { 10.0.2.15 8443 v1.24.1 docker true true} ...
	I0819 04:29:20.836975   17996 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=running-upgrade-038000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.0.2.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-038000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0819 04:29:20.837038   17996 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0819 04:29:20.850484   17996 cni.go:84] Creating CNI manager for ""
	I0819 04:29:20.850496   17996 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0819 04:29:20.850504   17996 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0819 04:29:20.850515   17996 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.0.2.15 APIServerPort:8443 KubernetesVersion:v1.24.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:running-upgrade-038000 NodeName:running-upgrade-038000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.0.2.15"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.0.2.15 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0819 04:29:20.850583   17996 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.0.2.15
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "running-upgrade-038000"
	  kubeletExtraArgs:
	    node-ip: 10.0.2.15
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.0.2.15"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0819 04:29:20.850648   17996 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.1
	I0819 04:29:20.853576   17996 binaries.go:44] Found k8s binaries, skipping transfer
	I0819 04:29:20.853608   17996 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0819 04:29:20.856763   17996 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I0819 04:29:20.861842   17996 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0819 04:29:20.866946   17996 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2096 bytes)
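The kubeadm config dumped at kubeadm.go:187 above is rendered from the option struct and then copied to /var/tmp/minikube/kubeadm.yaml.new. A toy fragment of such a render step using text/template; the template covers only the InitConfiguration head, with values taken from the log, and is not minikube's actual template:

    package main

    import (
        "os"
        "text/template"
    )

    const tmpl = `apiVersion: kubeadm.k8s.io/v1beta3
    kind: InitConfiguration
    localAPIEndpoint:
      advertiseAddress: {{.NodeIP}}
      bindPort: {{.Port}}
    nodeRegistration:
      criSocket: {{.CRISocket}}
      name: "{{.Name}}"
    `

    func main() {
        t := template.Must(template.New("kubeadm").Parse(tmpl))
        _ = t.Execute(os.Stdout, map[string]interface{}{
            "NodeIP":    "10.0.2.15",
            "Port":      8443,
            "CRISocket": "unix:///var/run/cri-dockerd.sock",
            "Name":      "running-upgrade-038000",
        })
    }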
	I0819 04:29:20.871793   17996 ssh_runner.go:195] Run: grep 10.0.2.15	control-plane.minikube.internal$ /etc/hosts
	I0819 04:29:20.873209   17996 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 04:29:20.955760   17996 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0819 04:29:20.961112   17996 certs.go:68] Setting up /Users/jenkins/minikube-integration/19479-15750/.minikube/profiles/running-upgrade-038000 for IP: 10.0.2.15
	I0819 04:29:20.961119   17996 certs.go:194] generating shared ca certs ...
	I0819 04:29:20.961126   17996 certs.go:226] acquiring lock for ca certs: {Name:mk35a9cd01f436a7a54821e5f775d6ab16b5867a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 04:29:20.961361   17996 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19479-15750/.minikube/ca.key
	I0819 04:29:20.961397   17996 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19479-15750/.minikube/proxy-client-ca.key
	I0819 04:29:20.961402   17996 certs.go:256] generating profile certs ...
	I0819 04:29:20.961467   17996 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19479-15750/.minikube/profiles/running-upgrade-038000/client.key
	I0819 04:29:20.961478   17996 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19479-15750/.minikube/profiles/running-upgrade-038000/apiserver.key.1456ef9a
	I0819 04:29:20.961489   17996 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19479-15750/.minikube/profiles/running-upgrade-038000/apiserver.crt.1456ef9a with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.0.2.15]
	I0819 04:29:21.025271   17996 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19479-15750/.minikube/profiles/running-upgrade-038000/apiserver.crt.1456ef9a ...
	I0819 04:29:21.025277   17996 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19479-15750/.minikube/profiles/running-upgrade-038000/apiserver.crt.1456ef9a: {Name:mkd8edc33e6421edcabe5e7e873acb88e9314973 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 04:29:21.025520   17996 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19479-15750/.minikube/profiles/running-upgrade-038000/apiserver.key.1456ef9a ...
	I0819 04:29:21.025527   17996 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19479-15750/.minikube/profiles/running-upgrade-038000/apiserver.key.1456ef9a: {Name:mkd0022d803726cc3eb82bd06c5cca6c001055fd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 04:29:21.025660   17996 certs.go:381] copying /Users/jenkins/minikube-integration/19479-15750/.minikube/profiles/running-upgrade-038000/apiserver.crt.1456ef9a -> /Users/jenkins/minikube-integration/19479-15750/.minikube/profiles/running-upgrade-038000/apiserver.crt
	I0819 04:29:21.025782   17996 certs.go:385] copying /Users/jenkins/minikube-integration/19479-15750/.minikube/profiles/running-upgrade-038000/apiserver.key.1456ef9a -> /Users/jenkins/minikube-integration/19479-15750/.minikube/profiles/running-upgrade-038000/apiserver.key
	I0819 04:29:21.025903   17996 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19479-15750/.minikube/profiles/running-upgrade-038000/proxy-client.key
	I0819 04:29:21.026018   17996 certs.go:484] found cert: /Users/jenkins/minikube-integration/19479-15750/.minikube/certs/16240.pem (1338 bytes)
	W0819 04:29:21.026042   17996 certs.go:480] ignoring /Users/jenkins/minikube-integration/19479-15750/.minikube/certs/16240_empty.pem, impossibly tiny 0 bytes
	I0819 04:29:21.026047   17996 certs.go:484] found cert: /Users/jenkins/minikube-integration/19479-15750/.minikube/certs/ca-key.pem (1675 bytes)
	I0819 04:29:21.026073   17996 certs.go:484] found cert: /Users/jenkins/minikube-integration/19479-15750/.minikube/certs/ca.pem (1082 bytes)
	I0819 04:29:21.026091   17996 certs.go:484] found cert: /Users/jenkins/minikube-integration/19479-15750/.minikube/certs/cert.pem (1123 bytes)
	I0819 04:29:21.026108   17996 certs.go:484] found cert: /Users/jenkins/minikube-integration/19479-15750/.minikube/certs/key.pem (1675 bytes)
	I0819 04:29:21.026150   17996 certs.go:484] found cert: /Users/jenkins/minikube-integration/19479-15750/.minikube/files/etc/ssl/certs/162402.pem (1708 bytes)
	I0819 04:29:21.026539   17996 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19479-15750/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0819 04:29:21.033654   17996 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19479-15750/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0819 04:29:21.041058   17996 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19479-15750/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0819 04:29:21.048714   17996 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19479-15750/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0819 04:29:21.056167   17996 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19479-15750/.minikube/profiles/running-upgrade-038000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0819 04:29:21.063650   17996 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19479-15750/.minikube/profiles/running-upgrade-038000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0819 04:29:21.070448   17996 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19479-15750/.minikube/profiles/running-upgrade-038000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0819 04:29:21.077550   17996 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19479-15750/.minikube/profiles/running-upgrade-038000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0819 04:29:21.084936   17996 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19479-15750/.minikube/files/etc/ssl/certs/162402.pem --> /usr/share/ca-certificates/162402.pem (1708 bytes)
	I0819 04:29:21.091969   17996 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19479-15750/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0819 04:29:21.098524   17996 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19479-15750/.minikube/certs/16240.pem --> /usr/share/ca-certificates/16240.pem (1338 bytes)
	I0819 04:29:21.105542   17996 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0819 04:29:21.110728   17996 ssh_runner.go:195] Run: openssl version
	I0819 04:29:21.112440   17996 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16240.pem && ln -fs /usr/share/ca-certificates/16240.pem /etc/ssl/certs/16240.pem"
	I0819 04:29:21.115367   17996 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16240.pem
	I0819 04:29:21.116736   17996 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 19 11:16 /usr/share/ca-certificates/16240.pem
	I0819 04:29:21.116754   17996 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16240.pem
	I0819 04:29:21.118793   17996 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16240.pem /etc/ssl/certs/51391683.0"
	I0819 04:29:21.121664   17996 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/162402.pem && ln -fs /usr/share/ca-certificates/162402.pem /etc/ssl/certs/162402.pem"
	I0819 04:29:21.125061   17996 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/162402.pem
	I0819 04:29:21.126468   17996 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 19 11:16 /usr/share/ca-certificates/162402.pem
	I0819 04:29:21.126488   17996 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/162402.pem
	I0819 04:29:21.128374   17996 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/162402.pem /etc/ssl/certs/3ec20f2e.0"
	I0819 04:29:21.131087   17996 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0819 04:29:21.134001   17996 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0819 04:29:21.135647   17996 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 19 11:28 /usr/share/ca-certificates/minikubeCA.pem
	I0819 04:29:21.135670   17996 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0819 04:29:21.137627   17996 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
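Each ln -fs above links /etc/ssl/certs/<subject-hash>.0 to the corresponding PEM, where the hash comes from openssl x509 -hash -noout; that is how OpenSSL-based clients find trusted CAs by subject. A sketch computing the link name (e.g. b5213941 for minikubeCA.pem, matching the log):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // subjectHashLink returns the /etc/ssl/certs/<hash>.0 path that the
    // log creates with ln -fs for the given PEM certificate.
    func subjectHashLink(pemPath string) (string, error) {
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
        if err != nil {
            return "", err
        }
        hash := strings.TrimSpace(string(out))
        return "/etc/ssl/certs/" + hash + ".0", nil
    }

    func main() {
        link, err := subjectHashLink("/usr/share/ca-certificates/minikubeCA.pem")
        fmt.Println(link, err)
    }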
	I0819 04:29:21.141100   17996 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0819 04:29:21.143270   17996 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0819 04:29:21.145179   17996 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0819 04:29:21.147025   17996 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0819 04:29:21.148750   17996 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0819 04:29:21.150952   17996 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0819 04:29:21.152769   17996 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
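The -checkend 86400 runs above ask openssl whether each certificate expires within the next 24 hours. The same check sketched in Go with crypto/x509:

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    // expiresWithin mirrors `openssl x509 -checkend`: it reports whether
    // the certificate at path expires inside the given window.
    func expiresWithin(path string, window time.Duration) (bool, error) {
        data, err := os.ReadFile(path)
        if err != nil {
            return false, err
        }
        block, _ := pem.Decode(data)
        if block == nil {
            return false, fmt.Errorf("no PEM block in %s", path)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return false, err
        }
        return time.Now().Add(window).After(cert.NotAfter), nil
    }

    func main() {
        soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt",
            24*time.Hour) // 86400s, as in the log
        fmt.Println(soon, err)
    }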
	I0819 04:29:21.154506   17996 kubeadm.go:392] StartCluster: {Name:running-upgrade-038000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:53188 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-038000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0819 04:29:21.154583   17996 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0819 04:29:21.165031   17996 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0819 04:29:21.169096   17996 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0819 04:29:21.169102   17996 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0819 04:29:21.169126   17996 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0819 04:29:21.172438   17996 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0819 04:29:21.172477   17996 kubeconfig.go:47] verify endpoint returned: get endpoint: "running-upgrade-038000" does not appear in /Users/jenkins/minikube-integration/19479-15750/kubeconfig
	I0819 04:29:21.172497   17996 kubeconfig.go:62] /Users/jenkins/minikube-integration/19479-15750/kubeconfig needs updating (will repair): [kubeconfig missing "running-upgrade-038000" cluster setting kubeconfig missing "running-upgrade-038000" context setting]
	I0819 04:29:21.172687   17996 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19479-15750/kubeconfig: {Name:mkc1a7b531aa1d2d8dba135f7548c07a5ca371ca Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 04:29:21.173626   17996 kapi.go:59] client config for running-upgrade-038000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19479-15750/.minikube/profiles/running-upgrade-038000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19479-15750/.minikube/profiles/running-upgrade-038000/client.key", CAFile:"/Users/jenkins/minikube-integration/19479-15750/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x103fd9610), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0819 04:29:21.174541   17996 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0819 04:29:21.180028   17996 kubeadm.go:640] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -11,7 +11,7 @@
	       - signing
	       - authentication
	 nodeRegistration:
	-  criSocket: /var/run/cri-dockerd.sock
	+  criSocket: unix:///var/run/cri-dockerd.sock
	   name: "running-upgrade-038000"
	   kubeletExtraArgs:
	     node-ip: 10.0.2.15
	@@ -49,7 +49,9 @@
	 authentication:
	   x509:
	     clientCAFile: /var/lib/minikube/certs/ca.crt
	-cgroupDriver: systemd
	+cgroupDriver: cgroupfs
	+hairpinMode: hairpin-veth
	+runtimeRequestTimeout: 15m
	 clusterDomain: "cluster.local"
	 # disable disk resource management by default
	 imageGCHighThresholdPercent: 100
	
	-- /stdout --
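Drift detection here is just diff -u between the kubeadm.yaml on disk and the freshly rendered kubeadm.yaml.new; a non-zero exit means the cluster gets reconfigured from the new file. A sketch of that wrapper (note it conservatively reports drift on any diff failure, not only exit code 1):

    package main

    import (
        "fmt"
        "os/exec"
    )

    // configDrifted runs the same `diff -u` as the log; diff exits
    // non-zero when the two files differ.
    func configDrifted(current, rendered string) (bool, string) {
        out, err := exec.Command("diff", "-u", current, rendered).CombinedOutput()
        return err != nil, string(out)
    }

    func main() {
        drifted, diff := configDrifted("/var/tmp/minikube/kubeadm.yaml",
            "/var/tmp/minikube/kubeadm.yaml.new")
        fmt.Println("drifted:", drifted)
        fmt.Print(diff)
    }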
	I0819 04:29:21.180037   17996 kubeadm.go:1160] stopping kube-system containers ...
	I0819 04:29:21.180101   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0819 04:29:21.199479   17996 docker.go:483] Stopping containers: [b5672bb55b22 f91def886d2d 0a241db04354 e95e8a21b94f a4f14f99ca97 e4bff7533378 8d34cceee46f e52d8d5fe97d 8f857b9db64e 624b0550691c fcdaf19afe69 7dbe7943b681 e180c075cfb9 787e4384d8d6]
	I0819 04:29:21.199547   17996 ssh_runner.go:195] Run: docker stop b5672bb55b22 f91def886d2d 0a241db04354 e95e8a21b94f a4f14f99ca97 e4bff7533378 8d34cceee46f e52d8d5fe97d 8f857b9db64e 624b0550691c fcdaf19afe69 7dbe7943b681 e180c075cfb9 787e4384d8d6
	I0819 04:29:21.211575   17996 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0819 04:29:21.309320   17996 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0819 04:29:21.313771   17996 kubeadm.go:157] found existing configuration files:
	-rw------- 1 root root 5639 Aug 19 11:28 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5649 Aug 19 11:28 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2027 Aug 19 11:29 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5597 Aug 19 11:28 /etc/kubernetes/scheduler.conf
	
	I0819 04:29:21.313801   17996 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:53188 /etc/kubernetes/admin.conf
	I0819 04:29:21.317339   17996 kubeadm.go:163] "https://control-plane.minikube.internal:53188" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:53188 /etc/kubernetes/admin.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0819 04:29:21.317366   17996 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0819 04:29:21.320620   17996 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:53188 /etc/kubernetes/kubelet.conf
	I0819 04:29:21.323956   17996 kubeadm.go:163] "https://control-plane.minikube.internal:53188" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:53188 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0819 04:29:21.323978   17996 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0819 04:29:21.326851   17996 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:53188 /etc/kubernetes/controller-manager.conf
	I0819 04:29:21.329504   17996 kubeadm.go:163] "https://control-plane.minikube.internal:53188" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:53188 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0819 04:29:21.329526   17996 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0819 04:29:21.332729   17996 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:53188 /etc/kubernetes/scheduler.conf
	I0819 04:29:21.335603   17996 kubeadm.go:163] "https://control-plane.minikube.internal:53188" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:53188 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0819 04:29:21.335624   17996 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
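	Before re-running kubeadm, each existing kubeconfig is probed for the expected control-plane endpoint; grep exits 1 on every file (they still point at the old endpoint), so each stale file is removed and will be regenerated by the kubeconfig phase. The four grep/rm pairs above condense to this sketch (endpoint and paths as in the log):

	    ENDPOINT='https://control-plane.minikube.internal:53188'
	    for f in admin kubelet controller-manager scheduler; do
	        # grep exits non-zero when the expected endpoint is absent -> drop the stale file
	        sudo grep -q "$ENDPOINT" "/etc/kubernetes/$f.conf" || sudo rm -f "/etc/kubernetes/$f.conf"
	    done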
	I0819 04:29:21.338363   17996 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0819 04:29:21.341576   17996 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0819 04:29:21.363289   17996 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0819 04:29:22.083079   17996 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0819 04:29:22.263798   17996 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0819 04:29:22.291821   17996 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
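	The control plane is then rebuilt phase by phase rather than with a single full "kubeadm init", regenerating only what the new config requires. The five commands above amount to (a condensed sketch; PATH prefix and version as logged):

	    KUBEADM_BIN=/var/lib/minikube/binaries/v1.24.1   # kubeadm version used in this run
	    for phase in "certs all" "kubeconfig all" "kubelet-start" "control-plane all" "etcd local"; do
	        # word splitting of $phase is intentional: "certs all" -> phase "certs", arg "all"
	        sudo env PATH="$KUBEADM_BIN:$PATH" \
	            kubeadm init phase $phase --config /var/tmp/minikube/kubeadm.yaml
	    done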
	I0819 04:29:22.313646   17996 api_server.go:52] waiting for apiserver process to appear ...
	I0819 04:29:22.313733   17996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 04:29:22.816085   17996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 04:29:23.315859   17996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 04:29:23.320353   17996 api_server.go:72] duration metric: took 1.006732125s to wait for apiserver process to appear ...
	I0819 04:29:23.320362   17996 api_server.go:88] waiting for apiserver healthz status ...
	I0819 04:29:23.320389   17996 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 04:29:28.322466   17996 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 04:29:28.322511   17996 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 04:29:33.322968   17996 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 04:29:33.323052   17996 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 04:29:38.323873   17996 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 04:29:38.323936   17996 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 04:29:43.324740   17996 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 04:29:43.324823   17996 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 04:29:48.326280   17996 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 04:29:48.326363   17996 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 04:29:53.328598   17996 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 04:29:53.328690   17996 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 04:29:58.331164   17996 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 04:29:58.331249   17996 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 04:30:03.333884   17996 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 04:30:03.333951   17996 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 04:30:08.336408   17996 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 04:30:08.336495   17996 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 04:30:13.339095   17996 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 04:30:13.339175   17996 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 04:30:18.341793   17996 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 04:30:18.341876   17996 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 04:30:23.342798   17996 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
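	From this point the run is stuck in the health-wait loop: every GET of https://10.0.2.15:8443/healthz hits the ~5s per-request client timeout ("context deadline exceeded"), minikube gathers container logs for diagnosis, then retries, and the apiserver never turns healthy. The poll is roughly equivalent to the following (a sketch with curl standing in for minikube's Go HTTP client; endpoint and timeout taken from the log):

	    # keep polling /healthz with a 5-second timeout until it reports ok
	    until curl -sk --max-time 5 https://10.0.2.15:8443/healthz | grep -q ok; do
	        echo "apiserver not healthy yet, retrying ..."
	        sleep 3
	    done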
	I0819 04:30:23.343042   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 04:30:23.362160   17996 logs.go:276] 2 containers: [eca2d24d2934 624b0550691c]
	I0819 04:30:23.362282   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 04:30:23.376526   17996 logs.go:276] 2 containers: [7bbc43a24759 e4bff7533378]
	I0819 04:30:23.376604   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 04:30:23.388263   17996 logs.go:276] 1 containers: [0aefd6691079]
	I0819 04:30:23.388338   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 04:30:23.399075   17996 logs.go:276] 2 containers: [43eabe1b4048 8f857b9db64e]
	I0819 04:30:23.399142   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 04:30:23.409366   17996 logs.go:276] 1 containers: [b8c8b382d646]
	I0819 04:30:23.409435   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 04:30:23.419452   17996 logs.go:276] 2 containers: [76c5e8a3e103 a4f14f99ca97]
	I0819 04:30:23.419523   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 04:30:23.429438   17996 logs.go:276] 0 containers: []
	W0819 04:30:23.429449   17996 logs.go:278] No container was found matching "kindnet"
	I0819 04:30:23.429503   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 04:30:23.439877   17996 logs.go:276] 2 containers: [849a5f11363e 45c03f07359d]
	I0819 04:30:23.439901   17996 logs.go:123] Gathering logs for Docker ...
	I0819 04:30:23.439906   17996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 04:30:23.465073   17996 logs.go:123] Gathering logs for describe nodes ...
	I0819 04:30:23.465082   17996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 04:30:23.534849   17996 logs.go:123] Gathering logs for kube-apiserver [eca2d24d2934] ...
	I0819 04:30:23.534860   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eca2d24d2934"
	I0819 04:30:23.548917   17996 logs.go:123] Gathering logs for kube-scheduler [43eabe1b4048] ...
	I0819 04:30:23.548930   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 43eabe1b4048"
	I0819 04:30:23.561148   17996 logs.go:123] Gathering logs for etcd [7bbc43a24759] ...
	I0819 04:30:23.561161   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7bbc43a24759"
	I0819 04:30:23.575096   17996 logs.go:123] Gathering logs for coredns [0aefd6691079] ...
	I0819 04:30:23.575108   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0aefd6691079"
	I0819 04:30:23.586514   17996 logs.go:123] Gathering logs for kube-scheduler [8f857b9db64e] ...
	I0819 04:30:23.586525   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f857b9db64e"
	I0819 04:30:23.601650   17996 logs.go:123] Gathering logs for kube-controller-manager [76c5e8a3e103] ...
	I0819 04:30:23.601663   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 76c5e8a3e103"
	I0819 04:30:23.619027   17996 logs.go:123] Gathering logs for kube-controller-manager [a4f14f99ca97] ...
	I0819 04:30:23.619041   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a4f14f99ca97"
	I0819 04:30:23.630275   17996 logs.go:123] Gathering logs for kubelet ...
	I0819 04:30:23.630285   17996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 04:30:23.669394   17996 logs.go:123] Gathering logs for dmesg ...
	I0819 04:30:23.669405   17996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 04:30:23.673577   17996 logs.go:123] Gathering logs for kube-apiserver [624b0550691c] ...
	I0819 04:30:23.673582   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 624b0550691c"
	I0819 04:30:23.694752   17996 logs.go:123] Gathering logs for container status ...
	I0819 04:30:23.694765   17996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 04:30:23.706722   17996 logs.go:123] Gathering logs for etcd [e4bff7533378] ...
	I0819 04:30:23.706735   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4bff7533378"
	I0819 04:30:23.725005   17996 logs.go:123] Gathering logs for kube-proxy [b8c8b382d646] ...
	I0819 04:30:23.725016   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8c8b382d646"
	I0819 04:30:23.736617   17996 logs.go:123] Gathering logs for storage-provisioner [849a5f11363e] ...
	I0819 04:30:23.736631   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 849a5f11363e"
	I0819 04:30:23.747638   17996 logs.go:123] Gathering logs for storage-provisioner [45c03f07359d] ...
	I0819 04:30:23.747652   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45c03f07359d"
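	Each diagnostic pass like the one above enumerates the kube-system containers by name filter and tails the last 400 lines of every instance, current and previously exited alike, which is why kube-apiserver, etcd, kube-scheduler, and kube-controller-manager each report two container IDs. A sketch of that gathering loop (component names and commands as logged):

	    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	                kube-controller-manager kindnet storage-provisioner; do
	        # "docker ps -a" lists both the running and the exited instance of each component
	        for id in $(docker ps -a --filter=name="k8s_$name" --format='{{.ID}}'); do
	            docker logs --tail 400 "$id"
	        done
	    done

	The remaining cycles below repeat this same gather-and-retry pattern until the wait deadline expires.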
	I0819 04:30:26.260485   17996 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 04:30:31.262070   17996 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 04:30:31.262425   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 04:30:31.290566   17996 logs.go:276] 2 containers: [eca2d24d2934 624b0550691c]
	I0819 04:30:31.290694   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 04:30:31.308977   17996 logs.go:276] 2 containers: [7bbc43a24759 e4bff7533378]
	I0819 04:30:31.309065   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 04:30:31.322087   17996 logs.go:276] 1 containers: [0aefd6691079]
	I0819 04:30:31.322163   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 04:30:31.338506   17996 logs.go:276] 2 containers: [43eabe1b4048 8f857b9db64e]
	I0819 04:30:31.338586   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 04:30:31.348952   17996 logs.go:276] 1 containers: [b8c8b382d646]
	I0819 04:30:31.349019   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 04:30:31.359601   17996 logs.go:276] 2 containers: [76c5e8a3e103 a4f14f99ca97]
	I0819 04:30:31.359662   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 04:30:31.369678   17996 logs.go:276] 0 containers: []
	W0819 04:30:31.369691   17996 logs.go:278] No container was found matching "kindnet"
	I0819 04:30:31.369749   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 04:30:31.380291   17996 logs.go:276] 2 containers: [849a5f11363e 45c03f07359d]
	I0819 04:30:31.380309   17996 logs.go:123] Gathering logs for kube-scheduler [8f857b9db64e] ...
	I0819 04:30:31.380314   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f857b9db64e"
	I0819 04:30:31.397838   17996 logs.go:123] Gathering logs for kubelet ...
	I0819 04:30:31.397851   17996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 04:30:31.436295   17996 logs.go:123] Gathering logs for kube-apiserver [624b0550691c] ...
	I0819 04:30:31.436302   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 624b0550691c"
	I0819 04:30:31.456320   17996 logs.go:123] Gathering logs for kube-scheduler [43eabe1b4048] ...
	I0819 04:30:31.456332   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 43eabe1b4048"
	I0819 04:30:31.468235   17996 logs.go:123] Gathering logs for kube-proxy [b8c8b382d646] ...
	I0819 04:30:31.468247   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8c8b382d646"
	I0819 04:30:31.480503   17996 logs.go:123] Gathering logs for Docker ...
	I0819 04:30:31.480512   17996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 04:30:31.506923   17996 logs.go:123] Gathering logs for describe nodes ...
	I0819 04:30:31.506933   17996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 04:30:31.543215   17996 logs.go:123] Gathering logs for etcd [e4bff7533378] ...
	I0819 04:30:31.543227   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4bff7533378"
	I0819 04:30:31.560852   17996 logs.go:123] Gathering logs for kube-controller-manager [a4f14f99ca97] ...
	I0819 04:30:31.560862   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a4f14f99ca97"
	I0819 04:30:31.572733   17996 logs.go:123] Gathering logs for storage-provisioner [45c03f07359d] ...
	I0819 04:30:31.572745   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45c03f07359d"
	I0819 04:30:31.584189   17996 logs.go:123] Gathering logs for kube-apiserver [eca2d24d2934] ...
	I0819 04:30:31.584203   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eca2d24d2934"
	I0819 04:30:31.598088   17996 logs.go:123] Gathering logs for etcd [7bbc43a24759] ...
	I0819 04:30:31.598096   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7bbc43a24759"
	I0819 04:30:31.611444   17996 logs.go:123] Gathering logs for coredns [0aefd6691079] ...
	I0819 04:30:31.611453   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0aefd6691079"
	I0819 04:30:31.622705   17996 logs.go:123] Gathering logs for kube-controller-manager [76c5e8a3e103] ...
	I0819 04:30:31.622717   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 76c5e8a3e103"
	I0819 04:30:31.639997   17996 logs.go:123] Gathering logs for storage-provisioner [849a5f11363e] ...
	I0819 04:30:31.640008   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 849a5f11363e"
	I0819 04:30:31.651435   17996 logs.go:123] Gathering logs for container status ...
	I0819 04:30:31.651445   17996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 04:30:31.663218   17996 logs.go:123] Gathering logs for dmesg ...
	I0819 04:30:31.663230   17996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 04:30:34.168745   17996 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 04:30:39.171119   17996 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 04:30:39.171576   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 04:30:39.211583   17996 logs.go:276] 2 containers: [eca2d24d2934 624b0550691c]
	I0819 04:30:39.211746   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 04:30:39.233087   17996 logs.go:276] 2 containers: [7bbc43a24759 e4bff7533378]
	I0819 04:30:39.233202   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 04:30:39.251599   17996 logs.go:276] 1 containers: [0aefd6691079]
	I0819 04:30:39.251669   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 04:30:39.263342   17996 logs.go:276] 2 containers: [43eabe1b4048 8f857b9db64e]
	I0819 04:30:39.263418   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 04:30:39.273983   17996 logs.go:276] 1 containers: [b8c8b382d646]
	I0819 04:30:39.274045   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 04:30:39.288071   17996 logs.go:276] 2 containers: [76c5e8a3e103 a4f14f99ca97]
	I0819 04:30:39.288133   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 04:30:39.298845   17996 logs.go:276] 0 containers: []
	W0819 04:30:39.298857   17996 logs.go:278] No container was found matching "kindnet"
	I0819 04:30:39.298933   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 04:30:39.309733   17996 logs.go:276] 2 containers: [849a5f11363e 45c03f07359d]
	I0819 04:30:39.309756   17996 logs.go:123] Gathering logs for describe nodes ...
	I0819 04:30:39.309763   17996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 04:30:39.347027   17996 logs.go:123] Gathering logs for coredns [0aefd6691079] ...
	I0819 04:30:39.347042   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0aefd6691079"
	I0819 04:30:39.358741   17996 logs.go:123] Gathering logs for kube-scheduler [43eabe1b4048] ...
	I0819 04:30:39.358751   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 43eabe1b4048"
	I0819 04:30:39.370775   17996 logs.go:123] Gathering logs for kube-controller-manager [76c5e8a3e103] ...
	I0819 04:30:39.370787   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 76c5e8a3e103"
	I0819 04:30:39.388832   17996 logs.go:123] Gathering logs for storage-provisioner [849a5f11363e] ...
	I0819 04:30:39.388842   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 849a5f11363e"
	I0819 04:30:39.400457   17996 logs.go:123] Gathering logs for Docker ...
	I0819 04:30:39.400468   17996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 04:30:39.426434   17996 logs.go:123] Gathering logs for kubelet ...
	I0819 04:30:39.426442   17996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 04:30:39.466466   17996 logs.go:123] Gathering logs for etcd [e4bff7533378] ...
	I0819 04:30:39.466477   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4bff7533378"
	I0819 04:30:39.484031   17996 logs.go:123] Gathering logs for kube-scheduler [8f857b9db64e] ...
	I0819 04:30:39.484044   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f857b9db64e"
	I0819 04:30:39.505840   17996 logs.go:123] Gathering logs for container status ...
	I0819 04:30:39.505852   17996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 04:30:39.517583   17996 logs.go:123] Gathering logs for dmesg ...
	I0819 04:30:39.517594   17996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 04:30:39.521842   17996 logs.go:123] Gathering logs for kube-apiserver [eca2d24d2934] ...
	I0819 04:30:39.521848   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eca2d24d2934"
	I0819 04:30:39.536158   17996 logs.go:123] Gathering logs for kube-apiserver [624b0550691c] ...
	I0819 04:30:39.536168   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 624b0550691c"
	I0819 04:30:39.558801   17996 logs.go:123] Gathering logs for etcd [7bbc43a24759] ...
	I0819 04:30:39.558817   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7bbc43a24759"
	I0819 04:30:39.572202   17996 logs.go:123] Gathering logs for kube-proxy [b8c8b382d646] ...
	I0819 04:30:39.572213   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8c8b382d646"
	I0819 04:30:39.584045   17996 logs.go:123] Gathering logs for kube-controller-manager [a4f14f99ca97] ...
	I0819 04:30:39.584057   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a4f14f99ca97"
	I0819 04:30:39.595898   17996 logs.go:123] Gathering logs for storage-provisioner [45c03f07359d] ...
	I0819 04:30:39.595910   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45c03f07359d"
	I0819 04:30:42.109076   17996 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 04:30:47.111635   17996 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 04:30:47.112078   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 04:30:47.154431   17996 logs.go:276] 2 containers: [eca2d24d2934 624b0550691c]
	I0819 04:30:47.154556   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 04:30:47.179736   17996 logs.go:276] 2 containers: [7bbc43a24759 e4bff7533378]
	I0819 04:30:47.179829   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 04:30:47.192818   17996 logs.go:276] 1 containers: [0aefd6691079]
	I0819 04:30:47.192891   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 04:30:47.204943   17996 logs.go:276] 2 containers: [43eabe1b4048 8f857b9db64e]
	I0819 04:30:47.205015   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 04:30:47.215539   17996 logs.go:276] 1 containers: [b8c8b382d646]
	I0819 04:30:47.215602   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 04:30:47.226169   17996 logs.go:276] 2 containers: [76c5e8a3e103 a4f14f99ca97]
	I0819 04:30:47.226230   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 04:30:47.236363   17996 logs.go:276] 0 containers: []
	W0819 04:30:47.236376   17996 logs.go:278] No container was found matching "kindnet"
	I0819 04:30:47.236431   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 04:30:47.248128   17996 logs.go:276] 2 containers: [849a5f11363e 45c03f07359d]
	I0819 04:30:47.248146   17996 logs.go:123] Gathering logs for kube-apiserver [eca2d24d2934] ...
	I0819 04:30:47.248151   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eca2d24d2934"
	I0819 04:30:47.261873   17996 logs.go:123] Gathering logs for kube-apiserver [624b0550691c] ...
	I0819 04:30:47.261887   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 624b0550691c"
	I0819 04:30:47.283626   17996 logs.go:123] Gathering logs for etcd [e4bff7533378] ...
	I0819 04:30:47.283637   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4bff7533378"
	I0819 04:30:47.303039   17996 logs.go:123] Gathering logs for kube-controller-manager [a4f14f99ca97] ...
	I0819 04:30:47.303049   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a4f14f99ca97"
	I0819 04:30:47.314391   17996 logs.go:123] Gathering logs for describe nodes ...
	I0819 04:30:47.314401   17996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 04:30:47.352440   17996 logs.go:123] Gathering logs for coredns [0aefd6691079] ...
	I0819 04:30:47.352452   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0aefd6691079"
	I0819 04:30:47.364859   17996 logs.go:123] Gathering logs for kube-scheduler [43eabe1b4048] ...
	I0819 04:30:47.364876   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 43eabe1b4048"
	I0819 04:30:47.377817   17996 logs.go:123] Gathering logs for kube-scheduler [8f857b9db64e] ...
	I0819 04:30:47.377827   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f857b9db64e"
	I0819 04:30:47.392472   17996 logs.go:123] Gathering logs for storage-provisioner [849a5f11363e] ...
	I0819 04:30:47.392482   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 849a5f11363e"
	I0819 04:30:47.403781   17996 logs.go:123] Gathering logs for Docker ...
	I0819 04:30:47.403792   17996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 04:30:47.428754   17996 logs.go:123] Gathering logs for storage-provisioner [45c03f07359d] ...
	I0819 04:30:47.428771   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45c03f07359d"
	I0819 04:30:47.440494   17996 logs.go:123] Gathering logs for container status ...
	I0819 04:30:47.440504   17996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 04:30:47.452601   17996 logs.go:123] Gathering logs for kubelet ...
	I0819 04:30:47.452614   17996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 04:30:47.491002   17996 logs.go:123] Gathering logs for dmesg ...
	I0819 04:30:47.491012   17996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 04:30:47.495691   17996 logs.go:123] Gathering logs for etcd [7bbc43a24759] ...
	I0819 04:30:47.495696   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7bbc43a24759"
	I0819 04:30:47.509934   17996 logs.go:123] Gathering logs for kube-proxy [b8c8b382d646] ...
	I0819 04:30:47.509947   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8c8b382d646"
	I0819 04:30:47.521668   17996 logs.go:123] Gathering logs for kube-controller-manager [76c5e8a3e103] ...
	I0819 04:30:47.521681   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 76c5e8a3e103"
	I0819 04:30:50.040931   17996 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 04:30:55.043642   17996 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 04:30:55.044091   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 04:30:55.082761   17996 logs.go:276] 2 containers: [eca2d24d2934 624b0550691c]
	I0819 04:30:55.082887   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 04:30:55.104052   17996 logs.go:276] 2 containers: [7bbc43a24759 e4bff7533378]
	I0819 04:30:55.104147   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 04:30:55.118634   17996 logs.go:276] 1 containers: [0aefd6691079]
	I0819 04:30:55.118716   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 04:30:55.131697   17996 logs.go:276] 2 containers: [43eabe1b4048 8f857b9db64e]
	I0819 04:30:55.131774   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 04:30:55.142698   17996 logs.go:276] 1 containers: [b8c8b382d646]
	I0819 04:30:55.142768   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 04:30:55.153437   17996 logs.go:276] 2 containers: [76c5e8a3e103 a4f14f99ca97]
	I0819 04:30:55.153501   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 04:30:55.163591   17996 logs.go:276] 0 containers: []
	W0819 04:30:55.163604   17996 logs.go:278] No container was found matching "kindnet"
	I0819 04:30:55.163660   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 04:30:55.173835   17996 logs.go:276] 2 containers: [849a5f11363e 45c03f07359d]
	I0819 04:30:55.173852   17996 logs.go:123] Gathering logs for kubelet ...
	I0819 04:30:55.173858   17996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 04:30:55.212296   17996 logs.go:123] Gathering logs for describe nodes ...
	I0819 04:30:55.212303   17996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 04:30:55.248065   17996 logs.go:123] Gathering logs for kube-apiserver [624b0550691c] ...
	I0819 04:30:55.248078   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 624b0550691c"
	I0819 04:30:55.267645   17996 logs.go:123] Gathering logs for dmesg ...
	I0819 04:30:55.267658   17996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 04:30:55.272270   17996 logs.go:123] Gathering logs for kube-apiserver [eca2d24d2934] ...
	I0819 04:30:55.272278   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eca2d24d2934"
	I0819 04:30:55.291393   17996 logs.go:123] Gathering logs for kube-controller-manager [a4f14f99ca97] ...
	I0819 04:30:55.291406   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a4f14f99ca97"
	I0819 04:30:55.303243   17996 logs.go:123] Gathering logs for storage-provisioner [849a5f11363e] ...
	I0819 04:30:55.303255   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 849a5f11363e"
	I0819 04:30:55.315429   17996 logs.go:123] Gathering logs for storage-provisioner [45c03f07359d] ...
	I0819 04:30:55.315443   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45c03f07359d"
	I0819 04:30:55.327310   17996 logs.go:123] Gathering logs for container status ...
	I0819 04:30:55.327322   17996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 04:30:55.339461   17996 logs.go:123] Gathering logs for etcd [e4bff7533378] ...
	I0819 04:30:55.339475   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4bff7533378"
	I0819 04:30:55.357384   17996 logs.go:123] Gathering logs for kube-proxy [b8c8b382d646] ...
	I0819 04:30:55.357395   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8c8b382d646"
	I0819 04:30:55.369426   17996 logs.go:123] Gathering logs for etcd [7bbc43a24759] ...
	I0819 04:30:55.369436   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7bbc43a24759"
	I0819 04:30:55.383428   17996 logs.go:123] Gathering logs for coredns [0aefd6691079] ...
	I0819 04:30:55.383441   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0aefd6691079"
	I0819 04:30:55.394871   17996 logs.go:123] Gathering logs for kube-scheduler [43eabe1b4048] ...
	I0819 04:30:55.394885   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 43eabe1b4048"
	I0819 04:30:55.407130   17996 logs.go:123] Gathering logs for kube-scheduler [8f857b9db64e] ...
	I0819 04:30:55.407142   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f857b9db64e"
	I0819 04:30:55.424444   17996 logs.go:123] Gathering logs for kube-controller-manager [76c5e8a3e103] ...
	I0819 04:30:55.424455   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 76c5e8a3e103"
	I0819 04:30:55.442567   17996 logs.go:123] Gathering logs for Docker ...
	I0819 04:30:55.442580   17996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 04:30:57.970162   17996 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 04:31:02.972768   17996 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 04:31:02.972999   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 04:31:02.991568   17996 logs.go:276] 2 containers: [eca2d24d2934 624b0550691c]
	I0819 04:31:02.991653   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 04:31:03.004744   17996 logs.go:276] 2 containers: [7bbc43a24759 e4bff7533378]
	I0819 04:31:03.004818   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 04:31:03.016013   17996 logs.go:276] 1 containers: [0aefd6691079]
	I0819 04:31:03.016086   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 04:31:03.026151   17996 logs.go:276] 2 containers: [43eabe1b4048 8f857b9db64e]
	I0819 04:31:03.026217   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 04:31:03.036750   17996 logs.go:276] 1 containers: [b8c8b382d646]
	I0819 04:31:03.036824   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 04:31:03.047001   17996 logs.go:276] 2 containers: [76c5e8a3e103 a4f14f99ca97]
	I0819 04:31:03.047065   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 04:31:03.057127   17996 logs.go:276] 0 containers: []
	W0819 04:31:03.057139   17996 logs.go:278] No container was found matching "kindnet"
	I0819 04:31:03.057200   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 04:31:03.067449   17996 logs.go:276] 2 containers: [849a5f11363e 45c03f07359d]
	I0819 04:31:03.067468   17996 logs.go:123] Gathering logs for coredns [0aefd6691079] ...
	I0819 04:31:03.067474   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0aefd6691079"
	I0819 04:31:03.078935   17996 logs.go:123] Gathering logs for container status ...
	I0819 04:31:03.078945   17996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 04:31:03.091085   17996 logs.go:123] Gathering logs for dmesg ...
	I0819 04:31:03.091097   17996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 04:31:03.096066   17996 logs.go:123] Gathering logs for etcd [7bbc43a24759] ...
	I0819 04:31:03.096073   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7bbc43a24759"
	I0819 04:31:03.109685   17996 logs.go:123] Gathering logs for kubelet ...
	I0819 04:31:03.109697   17996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 04:31:03.150093   17996 logs.go:123] Gathering logs for storage-provisioner [45c03f07359d] ...
	I0819 04:31:03.150105   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45c03f07359d"
	I0819 04:31:03.161313   17996 logs.go:123] Gathering logs for etcd [e4bff7533378] ...
	I0819 04:31:03.161326   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4bff7533378"
	I0819 04:31:03.178115   17996 logs.go:123] Gathering logs for kube-scheduler [8f857b9db64e] ...
	I0819 04:31:03.178124   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f857b9db64e"
	I0819 04:31:03.192723   17996 logs.go:123] Gathering logs for kube-controller-manager [a4f14f99ca97] ...
	I0819 04:31:03.192734   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a4f14f99ca97"
	I0819 04:31:03.204232   17996 logs.go:123] Gathering logs for storage-provisioner [849a5f11363e] ...
	I0819 04:31:03.204243   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 849a5f11363e"
	I0819 04:31:03.215440   17996 logs.go:123] Gathering logs for kube-apiserver [eca2d24d2934] ...
	I0819 04:31:03.215453   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eca2d24d2934"
	I0819 04:31:03.230197   17996 logs.go:123] Gathering logs for kube-apiserver [624b0550691c] ...
	I0819 04:31:03.230211   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 624b0550691c"
	I0819 04:31:03.252378   17996 logs.go:123] Gathering logs for kube-proxy [b8c8b382d646] ...
	I0819 04:31:03.252389   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8c8b382d646"
	I0819 04:31:03.270234   17996 logs.go:123] Gathering logs for kube-controller-manager [76c5e8a3e103] ...
	I0819 04:31:03.270245   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 76c5e8a3e103"
	I0819 04:31:03.290943   17996 logs.go:123] Gathering logs for Docker ...
	I0819 04:31:03.290952   17996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 04:31:03.315271   17996 logs.go:123] Gathering logs for describe nodes ...
	I0819 04:31:03.315278   17996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 04:31:03.350332   17996 logs.go:123] Gathering logs for kube-scheduler [43eabe1b4048] ...
	I0819 04:31:03.350344   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 43eabe1b4048"
	I0819 04:31:05.864749   17996 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 04:31:10.867303   17996 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 04:31:10.867647   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 04:31:10.898919   17996 logs.go:276] 2 containers: [eca2d24d2934 624b0550691c]
	I0819 04:31:10.899047   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 04:31:10.922575   17996 logs.go:276] 2 containers: [7bbc43a24759 e4bff7533378]
	I0819 04:31:10.922679   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 04:31:10.936064   17996 logs.go:276] 1 containers: [0aefd6691079]
	I0819 04:31:10.936138   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 04:31:10.947760   17996 logs.go:276] 2 containers: [43eabe1b4048 8f857b9db64e]
	I0819 04:31:10.947839   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 04:31:10.958214   17996 logs.go:276] 1 containers: [b8c8b382d646]
	I0819 04:31:10.958275   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 04:31:10.968822   17996 logs.go:276] 2 containers: [76c5e8a3e103 a4f14f99ca97]
	I0819 04:31:10.968891   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 04:31:10.978416   17996 logs.go:276] 0 containers: []
	W0819 04:31:10.978428   17996 logs.go:278] No container was found matching "kindnet"
	I0819 04:31:10.978486   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 04:31:10.988605   17996 logs.go:276] 2 containers: [849a5f11363e 45c03f07359d]
	I0819 04:31:10.988624   17996 logs.go:123] Gathering logs for kube-proxy [b8c8b382d646] ...
	I0819 04:31:10.988631   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8c8b382d646"
	I0819 04:31:11.003124   17996 logs.go:123] Gathering logs for kube-controller-manager [a4f14f99ca97] ...
	I0819 04:31:11.003135   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a4f14f99ca97"
	I0819 04:31:11.015231   17996 logs.go:123] Gathering logs for container status ...
	I0819 04:31:11.015243   17996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 04:31:11.027861   17996 logs.go:123] Gathering logs for kubelet ...
	I0819 04:31:11.027874   17996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 04:31:11.066378   17996 logs.go:123] Gathering logs for dmesg ...
	I0819 04:31:11.066389   17996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 04:31:11.070489   17996 logs.go:123] Gathering logs for kube-apiserver [624b0550691c] ...
	I0819 04:31:11.070498   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 624b0550691c"
	I0819 04:31:11.089872   17996 logs.go:123] Gathering logs for describe nodes ...
	I0819 04:31:11.089886   17996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 04:31:11.127171   17996 logs.go:123] Gathering logs for storage-provisioner [849a5f11363e] ...
	I0819 04:31:11.127183   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 849a5f11363e"
	I0819 04:31:11.139433   17996 logs.go:123] Gathering logs for storage-provisioner [45c03f07359d] ...
	I0819 04:31:11.139446   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45c03f07359d"
	I0819 04:31:11.150785   17996 logs.go:123] Gathering logs for kube-scheduler [43eabe1b4048] ...
	I0819 04:31:11.150795   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 43eabe1b4048"
	I0819 04:31:11.163007   17996 logs.go:123] Gathering logs for kube-apiserver [eca2d24d2934] ...
	I0819 04:31:11.163020   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eca2d24d2934"
	I0819 04:31:11.176710   17996 logs.go:123] Gathering logs for etcd [e4bff7533378] ...
	I0819 04:31:11.176720   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4bff7533378"
	I0819 04:31:11.195684   17996 logs.go:123] Gathering logs for coredns [0aefd6691079] ...
	I0819 04:31:11.195696   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0aefd6691079"
	I0819 04:31:11.208433   17996 logs.go:123] Gathering logs for Docker ...
	I0819 04:31:11.208442   17996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 04:31:11.234949   17996 logs.go:123] Gathering logs for etcd [7bbc43a24759] ...
	I0819 04:31:11.234959   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7bbc43a24759"
	I0819 04:31:11.249753   17996 logs.go:123] Gathering logs for kube-scheduler [8f857b9db64e] ...
	I0819 04:31:11.249763   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f857b9db64e"
	I0819 04:31:11.270973   17996 logs.go:123] Gathering logs for kube-controller-manager [76c5e8a3e103] ...
	I0819 04:31:11.270987   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 76c5e8a3e103"
	I0819 04:31:13.791008   17996 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 04:31:18.793374   17996 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 04:31:18.793635   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 04:31:18.818573   17996 logs.go:276] 2 containers: [eca2d24d2934 624b0550691c]
	I0819 04:31:18.818692   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 04:31:18.838970   17996 logs.go:276] 2 containers: [7bbc43a24759 e4bff7533378]
	I0819 04:31:18.839053   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 04:31:18.851559   17996 logs.go:276] 1 containers: [0aefd6691079]
	I0819 04:31:18.851628   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 04:31:18.862827   17996 logs.go:276] 2 containers: [43eabe1b4048 8f857b9db64e]
	I0819 04:31:18.862901   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 04:31:18.873200   17996 logs.go:276] 1 containers: [b8c8b382d646]
	I0819 04:31:18.873267   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 04:31:18.883579   17996 logs.go:276] 2 containers: [76c5e8a3e103 a4f14f99ca97]
	I0819 04:31:18.883642   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 04:31:18.893541   17996 logs.go:276] 0 containers: []
	W0819 04:31:18.893556   17996 logs.go:278] No container was found matching "kindnet"
	I0819 04:31:18.893605   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 04:31:18.903623   17996 logs.go:276] 2 containers: [849a5f11363e 45c03f07359d]
	I0819 04:31:18.903643   17996 logs.go:123] Gathering logs for kube-apiserver [624b0550691c] ...
	I0819 04:31:18.903649   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 624b0550691c"
	I0819 04:31:18.924063   17996 logs.go:123] Gathering logs for storage-provisioner [849a5f11363e] ...
	I0819 04:31:18.924077   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 849a5f11363e"
	I0819 04:31:18.935615   17996 logs.go:123] Gathering logs for Docker ...
	I0819 04:31:18.935628   17996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 04:31:18.960982   17996 logs.go:123] Gathering logs for dmesg ...
	I0819 04:31:18.960989   17996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 04:31:18.965005   17996 logs.go:123] Gathering logs for kube-scheduler [43eabe1b4048] ...
	I0819 04:31:18.965011   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 43eabe1b4048"
	I0819 04:31:18.976863   17996 logs.go:123] Gathering logs for kube-controller-manager [a4f14f99ca97] ...
	I0819 04:31:18.976878   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a4f14f99ca97"
	I0819 04:31:18.993075   17996 logs.go:123] Gathering logs for container status ...
	I0819 04:31:18.993088   17996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 04:31:19.005904   17996 logs.go:123] Gathering logs for kubelet ...
	I0819 04:31:19.005916   17996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 04:31:19.044351   17996 logs.go:123] Gathering logs for kube-scheduler [8f857b9db64e] ...
	I0819 04:31:19.044360   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f857b9db64e"
	I0819 04:31:19.058586   17996 logs.go:123] Gathering logs for kube-proxy [b8c8b382d646] ...
	I0819 04:31:19.058597   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8c8b382d646"
	I0819 04:31:19.071222   17996 logs.go:123] Gathering logs for storage-provisioner [45c03f07359d] ...
	I0819 04:31:19.071234   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45c03f07359d"
	I0819 04:31:19.082240   17996 logs.go:123] Gathering logs for etcd [e4bff7533378] ...
	I0819 04:31:19.082251   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4bff7533378"
	I0819 04:31:19.099674   17996 logs.go:123] Gathering logs for kube-apiserver [eca2d24d2934] ...
	I0819 04:31:19.099688   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eca2d24d2934"
	I0819 04:31:19.116143   17996 logs.go:123] Gathering logs for etcd [7bbc43a24759] ...
	I0819 04:31:19.116155   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7bbc43a24759"
	I0819 04:31:19.138153   17996 logs.go:123] Gathering logs for coredns [0aefd6691079] ...
	I0819 04:31:19.138167   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0aefd6691079"
	I0819 04:31:19.149558   17996 logs.go:123] Gathering logs for kube-controller-manager [76c5e8a3e103] ...
	I0819 04:31:19.149569   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 76c5e8a3e103"
	I0819 04:31:19.167387   17996 logs.go:123] Gathering logs for describe nodes ...
	I0819 04:31:19.167399   17996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 04:31:21.702050   17996 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 04:31:26.704727   17996 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 04:31:26.704988   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 04:31:26.734186   17996 logs.go:276] 2 containers: [eca2d24d2934 624b0550691c]
	I0819 04:31:26.734296   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 04:31:26.755064   17996 logs.go:276] 2 containers: [7bbc43a24759 e4bff7533378]
	I0819 04:31:26.755144   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 04:31:26.774445   17996 logs.go:276] 1 containers: [0aefd6691079]
	I0819 04:31:26.774505   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 04:31:26.788258   17996 logs.go:276] 2 containers: [43eabe1b4048 8f857b9db64e]
	I0819 04:31:26.788328   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 04:31:26.798895   17996 logs.go:276] 1 containers: [b8c8b382d646]
	I0819 04:31:26.798950   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 04:31:26.809516   17996 logs.go:276] 2 containers: [76c5e8a3e103 a4f14f99ca97]
	I0819 04:31:26.809584   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 04:31:26.819462   17996 logs.go:276] 0 containers: []
	W0819 04:31:26.819474   17996 logs.go:278] No container was found matching "kindnet"
	I0819 04:31:26.819524   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 04:31:26.830783   17996 logs.go:276] 2 containers: [849a5f11363e 45c03f07359d]
	I0819 04:31:26.830798   17996 logs.go:123] Gathering logs for kube-controller-manager [76c5e8a3e103] ...
	I0819 04:31:26.830804   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 76c5e8a3e103"
	I0819 04:31:26.848613   17996 logs.go:123] Gathering logs for storage-provisioner [849a5f11363e] ...
	I0819 04:31:26.848624   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 849a5f11363e"
	I0819 04:31:26.859990   17996 logs.go:123] Gathering logs for container status ...
	I0819 04:31:26.860000   17996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 04:31:26.874492   17996 logs.go:123] Gathering logs for dmesg ...
	I0819 04:31:26.874504   17996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 04:31:26.878855   17996 logs.go:123] Gathering logs for kube-apiserver [eca2d24d2934] ...
	I0819 04:31:26.878865   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eca2d24d2934"
	I0819 04:31:26.895256   17996 logs.go:123] Gathering logs for etcd [7bbc43a24759] ...
	I0819 04:31:26.895269   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7bbc43a24759"
	I0819 04:31:26.909259   17996 logs.go:123] Gathering logs for coredns [0aefd6691079] ...
	I0819 04:31:26.909270   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0aefd6691079"
	I0819 04:31:26.920579   17996 logs.go:123] Gathering logs for describe nodes ...
	I0819 04:31:26.920595   17996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 04:31:26.987408   17996 logs.go:123] Gathering logs for kube-proxy [b8c8b382d646] ...
	I0819 04:31:26.987422   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8c8b382d646"
	I0819 04:31:26.999527   17996 logs.go:123] Gathering logs for storage-provisioner [45c03f07359d] ...
	I0819 04:31:26.999538   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45c03f07359d"
	I0819 04:31:27.010786   17996 logs.go:123] Gathering logs for Docker ...
	I0819 04:31:27.010800   17996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 04:31:27.036428   17996 logs.go:123] Gathering logs for kube-apiserver [624b0550691c] ...
	I0819 04:31:27.036437   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 624b0550691c"
	I0819 04:31:27.080757   17996 logs.go:123] Gathering logs for etcd [e4bff7533378] ...
	I0819 04:31:27.080770   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4bff7533378"
	I0819 04:31:27.098183   17996 logs.go:123] Gathering logs for kube-scheduler [43eabe1b4048] ...
	I0819 04:31:27.098196   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 43eabe1b4048"
	I0819 04:31:27.110166   17996 logs.go:123] Gathering logs for kube-scheduler [8f857b9db64e] ...
	I0819 04:31:27.110177   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f857b9db64e"
	I0819 04:31:27.128547   17996 logs.go:123] Gathering logs for kubelet ...
	I0819 04:31:27.128559   17996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 04:31:27.169278   17996 logs.go:123] Gathering logs for kube-controller-manager [a4f14f99ca97] ...
	I0819 04:31:27.169290   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a4f14f99ca97"
	I0819 04:31:29.692524   17996 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 04:31:34.694919   17996 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 04:31:34.695350   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 04:31:34.734125   17996 logs.go:276] 2 containers: [eca2d24d2934 624b0550691c]
	I0819 04:31:34.734265   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 04:31:34.758063   17996 logs.go:276] 2 containers: [7bbc43a24759 e4bff7533378]
	I0819 04:31:34.758180   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 04:31:34.773674   17996 logs.go:276] 1 containers: [0aefd6691079]
	I0819 04:31:34.773752   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 04:31:34.785608   17996 logs.go:276] 2 containers: [43eabe1b4048 8f857b9db64e]
	I0819 04:31:34.785679   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 04:31:34.796914   17996 logs.go:276] 1 containers: [b8c8b382d646]
	I0819 04:31:34.796975   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 04:31:34.809283   17996 logs.go:276] 2 containers: [76c5e8a3e103 a4f14f99ca97]
	I0819 04:31:34.809337   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 04:31:34.820003   17996 logs.go:276] 0 containers: []
	W0819 04:31:34.820020   17996 logs.go:278] No container was found matching "kindnet"
	I0819 04:31:34.820079   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 04:31:34.830696   17996 logs.go:276] 2 containers: [849a5f11363e 45c03f07359d]
	I0819 04:31:34.830717   17996 logs.go:123] Gathering logs for describe nodes ...
	I0819 04:31:34.830722   17996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 04:31:34.865010   17996 logs.go:123] Gathering logs for coredns [0aefd6691079] ...
	I0819 04:31:34.865025   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0aefd6691079"
	I0819 04:31:34.876392   17996 logs.go:123] Gathering logs for kube-scheduler [8f857b9db64e] ...
	I0819 04:31:34.876405   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f857b9db64e"
	I0819 04:31:34.891431   17996 logs.go:123] Gathering logs for kube-proxy [b8c8b382d646] ...
	I0819 04:31:34.891447   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8c8b382d646"
	I0819 04:31:34.902824   17996 logs.go:123] Gathering logs for kube-controller-manager [76c5e8a3e103] ...
	I0819 04:31:34.902835   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 76c5e8a3e103"
	I0819 04:31:34.926885   17996 logs.go:123] Gathering logs for storage-provisioner [849a5f11363e] ...
	I0819 04:31:34.926895   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 849a5f11363e"
	I0819 04:31:34.938920   17996 logs.go:123] Gathering logs for dmesg ...
	I0819 04:31:34.938934   17996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 04:31:34.943760   17996 logs.go:123] Gathering logs for kube-apiserver [eca2d24d2934] ...
	I0819 04:31:34.943766   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eca2d24d2934"
	I0819 04:31:34.959042   17996 logs.go:123] Gathering logs for etcd [7bbc43a24759] ...
	I0819 04:31:34.959052   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7bbc43a24759"
	I0819 04:31:34.981802   17996 logs.go:123] Gathering logs for etcd [e4bff7533378] ...
	I0819 04:31:34.981811   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4bff7533378"
	I0819 04:31:34.999780   17996 logs.go:123] Gathering logs for kube-scheduler [43eabe1b4048] ...
	I0819 04:31:34.999797   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 43eabe1b4048"
	I0819 04:31:35.012964   17996 logs.go:123] Gathering logs for kube-controller-manager [a4f14f99ca97] ...
	I0819 04:31:35.012977   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a4f14f99ca97"
	I0819 04:31:35.025604   17996 logs.go:123] Gathering logs for kubelet ...
	I0819 04:31:35.025621   17996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 04:31:35.066756   17996 logs.go:123] Gathering logs for kube-apiserver [624b0550691c] ...
	I0819 04:31:35.066770   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 624b0550691c"
	I0819 04:31:35.089040   17996 logs.go:123] Gathering logs for Docker ...
	I0819 04:31:35.089052   17996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 04:31:35.114181   17996 logs.go:123] Gathering logs for container status ...
	I0819 04:31:35.114192   17996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 04:31:35.128055   17996 logs.go:123] Gathering logs for storage-provisioner [45c03f07359d] ...
	I0819 04:31:35.128068   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45c03f07359d"
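	Each sweep opens by enumerating container IDs per control-plane component with a docker name filter; components that have been restarted show two IDs (current and exited). A sketch of that enumeration step, wrapping the exact command from the log (the helper name is invented):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// listContainers runs the command the log shows:
	//   docker ps -a --filter=name=k8s_<component> --format={{.ID}}
	// and returns the short container IDs, newest first.
	func listContainers(component string) ([]string, error) {
		out, err := exec.Command("docker", "ps", "-a",
			"--filter", "name=k8s_"+component,
			"--format", "{{.ID}}").Output()
		if err != nil {
			return nil, err
		}
		return strings.Fields(string(out)), nil
	}

	func main() {
		// The components queried in every sweep above.
		for _, c := range []string{"kube-apiserver", "etcd", "coredns",
			"kube-scheduler", "kube-proxy", "kube-controller-manager",
			"kindnet", "storage-provisioner"} {
			ids, err := listContainers(c)
			if err != nil {
				fmt.Println(c, "error:", err)
				continue
			}
			fmt.Printf("%d containers: %v\n", len(ids), ids) // matches the logs.go:276 lines
		}
	}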
	I0819 04:31:37.641233   17996 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 04:31:42.643453   17996 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 04:31:42.643936   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 04:31:42.685158   17996 logs.go:276] 2 containers: [eca2d24d2934 624b0550691c]
	I0819 04:31:42.685296   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 04:31:42.706238   17996 logs.go:276] 2 containers: [7bbc43a24759 e4bff7533378]
	I0819 04:31:42.706350   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 04:31:42.721765   17996 logs.go:276] 1 containers: [0aefd6691079]
	I0819 04:31:42.721846   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 04:31:42.734240   17996 logs.go:276] 2 containers: [43eabe1b4048 8f857b9db64e]
	I0819 04:31:42.734310   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 04:31:42.747209   17996 logs.go:276] 1 containers: [b8c8b382d646]
	I0819 04:31:42.747278   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 04:31:42.758868   17996 logs.go:276] 2 containers: [76c5e8a3e103 a4f14f99ca97]
	I0819 04:31:42.758934   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 04:31:42.769767   17996 logs.go:276] 0 containers: []
	W0819 04:31:42.769778   17996 logs.go:278] No container was found matching "kindnet"
	I0819 04:31:42.769830   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 04:31:42.780773   17996 logs.go:276] 2 containers: [849a5f11363e 45c03f07359d]
	I0819 04:31:42.780791   17996 logs.go:123] Gathering logs for etcd [7bbc43a24759] ...
	I0819 04:31:42.780796   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7bbc43a24759"
	I0819 04:31:42.794751   17996 logs.go:123] Gathering logs for kube-scheduler [43eabe1b4048] ...
	I0819 04:31:42.794765   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 43eabe1b4048"
	I0819 04:31:42.808758   17996 logs.go:123] Gathering logs for kube-controller-manager [a4f14f99ca97] ...
	I0819 04:31:42.808771   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a4f14f99ca97"
	I0819 04:31:42.820631   17996 logs.go:123] Gathering logs for kube-scheduler [8f857b9db64e] ...
	I0819 04:31:42.820645   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f857b9db64e"
	I0819 04:31:42.839877   17996 logs.go:123] Gathering logs for storage-provisioner [849a5f11363e] ...
	I0819 04:31:42.839888   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 849a5f11363e"
	I0819 04:31:42.851908   17996 logs.go:123] Gathering logs for kubelet ...
	I0819 04:31:42.851921   17996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 04:31:42.892077   17996 logs.go:123] Gathering logs for describe nodes ...
	I0819 04:31:42.892086   17996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 04:31:42.926402   17996 logs.go:123] Gathering logs for kube-apiserver [eca2d24d2934] ...
	I0819 04:31:42.926416   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eca2d24d2934"
	I0819 04:31:42.940742   17996 logs.go:123] Gathering logs for etcd [e4bff7533378] ...
	I0819 04:31:42.940755   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4bff7533378"
	I0819 04:31:42.960564   17996 logs.go:123] Gathering logs for kube-apiserver [624b0550691c] ...
	I0819 04:31:42.960577   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 624b0550691c"
	I0819 04:31:42.985889   17996 logs.go:123] Gathering logs for coredns [0aefd6691079] ...
	I0819 04:31:42.985902   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0aefd6691079"
	I0819 04:31:42.997741   17996 logs.go:123] Gathering logs for kube-proxy [b8c8b382d646] ...
	I0819 04:31:42.997753   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8c8b382d646"
	I0819 04:31:43.010130   17996 logs.go:123] Gathering logs for container status ...
	I0819 04:31:43.010141   17996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 04:31:43.021921   17996 logs.go:123] Gathering logs for dmesg ...
	I0819 04:31:43.021931   17996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 04:31:43.026667   17996 logs.go:123] Gathering logs for kube-controller-manager [76c5e8a3e103] ...
	I0819 04:31:43.026673   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 76c5e8a3e103"
	I0819 04:31:43.044451   17996 logs.go:123] Gathering logs for storage-provisioner [45c03f07359d] ...
	I0819 04:31:43.044465   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45c03f07359d"
	I0819 04:31:43.058996   17996 logs.go:123] Gathering logs for Docker ...
	I0819 04:31:43.059008   17996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
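	For every container ID found, the sweep then tails the last 400 log lines with "docker logs --tail 400 <id>", run through /bin/bash on the guest. A minimal sketch of the per-container step (hypothetical helper; in minikube the command actually goes through ssh_runner rather than local exec):

	package main

	import (
		"fmt"
		"os/exec"
	)

	// tailContainerLogs mirrors the repeated log line:
	//   /bin/bash -c "docker logs --tail 400 <id>"
	// CombinedOutput is used because docker logs writes to both streams.
	func tailContainerLogs(id string) (string, error) {
		out, err := exec.Command("/bin/bash", "-c",
			fmt.Sprintf("docker logs --tail 400 %s", id)).CombinedOutput()
		return string(out), err
	}

	func main() {
		// IDs taken from the sweep above: kube-apiserver, current and previous.
		for _, id := range []string{"eca2d24d2934", "624b0550691c"} {
			logs, err := tailContainerLogs(id)
			if err != nil {
				fmt.Println("gathering", id, "failed:", err)
				continue
			}
			fmt.Print(logs)
		}
	}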
	I0819 04:31:45.587323   17996 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 04:31:50.589584   17996 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 04:31:50.590033   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 04:31:50.629026   17996 logs.go:276] 2 containers: [eca2d24d2934 624b0550691c]
	I0819 04:31:50.629165   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 04:31:50.650957   17996 logs.go:276] 2 containers: [7bbc43a24759 e4bff7533378]
	I0819 04:31:50.651076   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 04:31:50.665791   17996 logs.go:276] 1 containers: [0aefd6691079]
	I0819 04:31:50.665854   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 04:31:50.684592   17996 logs.go:276] 2 containers: [43eabe1b4048 8f857b9db64e]
	I0819 04:31:50.684654   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 04:31:50.697759   17996 logs.go:276] 1 containers: [b8c8b382d646]
	I0819 04:31:50.697832   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 04:31:50.708896   17996 logs.go:276] 2 containers: [76c5e8a3e103 a4f14f99ca97]
	I0819 04:31:50.708964   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 04:31:50.719103   17996 logs.go:276] 0 containers: []
	W0819 04:31:50.719114   17996 logs.go:278] No container was found matching "kindnet"
	I0819 04:31:50.719164   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 04:31:50.729756   17996 logs.go:276] 2 containers: [849a5f11363e 45c03f07359d]
	I0819 04:31:50.729774   17996 logs.go:123] Gathering logs for storage-provisioner [849a5f11363e] ...
	I0819 04:31:50.729780   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 849a5f11363e"
	I0819 04:31:50.741162   17996 logs.go:123] Gathering logs for storage-provisioner [45c03f07359d] ...
	I0819 04:31:50.741178   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45c03f07359d"
	I0819 04:31:50.752394   17996 logs.go:123] Gathering logs for Docker ...
	I0819 04:31:50.752405   17996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 04:31:50.775550   17996 logs.go:123] Gathering logs for describe nodes ...
	I0819 04:31:50.775559   17996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 04:31:50.810242   17996 logs.go:123] Gathering logs for etcd [7bbc43a24759] ...
	I0819 04:31:50.810255   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7bbc43a24759"
	I0819 04:31:50.824339   17996 logs.go:123] Gathering logs for etcd [e4bff7533378] ...
	I0819 04:31:50.824352   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4bff7533378"
	I0819 04:31:50.842113   17996 logs.go:123] Gathering logs for kube-scheduler [43eabe1b4048] ...
	I0819 04:31:50.842126   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 43eabe1b4048"
	I0819 04:31:50.856172   17996 logs.go:123] Gathering logs for kube-scheduler [8f857b9db64e] ...
	I0819 04:31:50.856184   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f857b9db64e"
	I0819 04:31:50.870663   17996 logs.go:123] Gathering logs for kube-controller-manager [a4f14f99ca97] ...
	I0819 04:31:50.870673   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a4f14f99ca97"
	I0819 04:31:50.882370   17996 logs.go:123] Gathering logs for kubelet ...
	I0819 04:31:50.882383   17996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 04:31:50.920861   17996 logs.go:123] Gathering logs for kube-apiserver [eca2d24d2934] ...
	I0819 04:31:50.920870   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eca2d24d2934"
	I0819 04:31:50.934404   17996 logs.go:123] Gathering logs for coredns [0aefd6691079] ...
	I0819 04:31:50.934418   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0aefd6691079"
	I0819 04:31:50.946148   17996 logs.go:123] Gathering logs for kube-proxy [b8c8b382d646] ...
	I0819 04:31:50.946161   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8c8b382d646"
	I0819 04:31:50.958154   17996 logs.go:123] Gathering logs for container status ...
	I0819 04:31:50.958168   17996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 04:31:50.969747   17996 logs.go:123] Gathering logs for dmesg ...
	I0819 04:31:50.969760   17996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 04:31:50.974721   17996 logs.go:123] Gathering logs for kube-apiserver [624b0550691c] ...
	I0819 04:31:50.974731   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 624b0550691c"
	I0819 04:31:50.994538   17996 logs.go:123] Gathering logs for kube-controller-manager [76c5e8a3e103] ...
	I0819 04:31:50.994550   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 76c5e8a3e103"
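	Host-level sources recur in every sweep as well: systemd units via journalctl (kubelet; docker plus cri-docker) and kernel messages via dmesg with -H (human-readable), -P (no pager), -L=never (no color), and --level warn,err,crit,alert,emerg, trimmed to the last 400 lines. A sketch replaying those three commands (assuming local bash access to the guest):

	package main

	import (
		"fmt"
		"os/exec"
	)

	// The exact shell commands repeated in the log for host-level sources.
	var hostLogCmds = map[string]string{
		"kubelet": "sudo journalctl -u kubelet -n 400",
		"Docker":  "sudo journalctl -u docker -u cri-docker -n 400",
		"dmesg":   "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400",
	}

	func main() {
		for name, cmd := range hostLogCmds {
			fmt.Printf("Gathering logs for %s ...\n", name)
			out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
			if err != nil {
				fmt.Println("failed:", err)
				continue
			}
			fmt.Print(string(out))
		}
	}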
	I0819 04:31:53.514295   17996 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 04:31:58.516930   17996 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 04:31:58.517067   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 04:31:58.528192   17996 logs.go:276] 2 containers: [eca2d24d2934 624b0550691c]
	I0819 04:31:58.528262   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 04:31:58.538976   17996 logs.go:276] 2 containers: [7bbc43a24759 e4bff7533378]
	I0819 04:31:58.539053   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 04:31:58.555823   17996 logs.go:276] 1 containers: [0aefd6691079]
	I0819 04:31:58.555910   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 04:31:58.566908   17996 logs.go:276] 2 containers: [43eabe1b4048 8f857b9db64e]
	I0819 04:31:58.566978   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 04:31:58.578839   17996 logs.go:276] 1 containers: [b8c8b382d646]
	I0819 04:31:58.578930   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 04:31:58.590637   17996 logs.go:276] 2 containers: [76c5e8a3e103 a4f14f99ca97]
	I0819 04:31:58.590719   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 04:31:58.601633   17996 logs.go:276] 0 containers: []
	W0819 04:31:58.601646   17996 logs.go:278] No container was found matching "kindnet"
	I0819 04:31:58.601719   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 04:31:58.613885   17996 logs.go:276] 2 containers: [849a5f11363e 45c03f07359d]
	I0819 04:31:58.613904   17996 logs.go:123] Gathering logs for storage-provisioner [849a5f11363e] ...
	I0819 04:31:58.613910   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 849a5f11363e"
	I0819 04:31:58.627026   17996 logs.go:123] Gathering logs for Docker ...
	I0819 04:31:58.627040   17996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 04:31:58.652175   17996 logs.go:123] Gathering logs for dmesg ...
	I0819 04:31:58.652193   17996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 04:31:58.657277   17996 logs.go:123] Gathering logs for kube-apiserver [eca2d24d2934] ...
	I0819 04:31:58.657289   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eca2d24d2934"
	I0819 04:31:58.673014   17996 logs.go:123] Gathering logs for coredns [0aefd6691079] ...
	I0819 04:31:58.673029   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0aefd6691079"
	I0819 04:31:58.685708   17996 logs.go:123] Gathering logs for kube-scheduler [43eabe1b4048] ...
	I0819 04:31:58.685720   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 43eabe1b4048"
	I0819 04:31:58.699568   17996 logs.go:123] Gathering logs for describe nodes ...
	I0819 04:31:58.699583   17996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 04:31:58.746698   17996 logs.go:123] Gathering logs for etcd [e4bff7533378] ...
	I0819 04:31:58.746710   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4bff7533378"
	I0819 04:31:58.766054   17996 logs.go:123] Gathering logs for kube-proxy [b8c8b382d646] ...
	I0819 04:31:58.766066   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8c8b382d646"
	I0819 04:31:58.779370   17996 logs.go:123] Gathering logs for kube-controller-manager [76c5e8a3e103] ...
	I0819 04:31:58.779387   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 76c5e8a3e103"
	I0819 04:31:58.799136   17996 logs.go:123] Gathering logs for kube-controller-manager [a4f14f99ca97] ...
	I0819 04:31:58.799148   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a4f14f99ca97"
	I0819 04:31:58.812607   17996 logs.go:123] Gathering logs for storage-provisioner [45c03f07359d] ...
	I0819 04:31:58.812619   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45c03f07359d"
	I0819 04:31:58.825118   17996 logs.go:123] Gathering logs for container status ...
	I0819 04:31:58.825131   17996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 04:31:58.838299   17996 logs.go:123] Gathering logs for kubelet ...
	I0819 04:31:58.838312   17996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 04:31:58.881663   17996 logs.go:123] Gathering logs for kube-apiserver [624b0550691c] ...
	I0819 04:31:58.881683   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 624b0550691c"
	I0819 04:31:58.903764   17996 logs.go:123] Gathering logs for etcd [7bbc43a24759] ...
	I0819 04:31:58.903780   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7bbc43a24759"
	I0819 04:31:58.919455   17996 logs.go:123] Gathering logs for kube-scheduler [8f857b9db64e] ...
	I0819 04:31:58.919470   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f857b9db64e"
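	The "container status" step uses a shell fallback chain: the backtick substitution `which crictl || echo crictl` resolves crictl's full path, or leaves the bare name if which finds nothing, and if that crictl invocation fails the trailing "|| sudo docker ps -a" runs plain docker instead. A sketch running the same line (hypothetical standalone program):

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// From the log: prefer crictl if present, otherwise fall back to docker.
		cmd := "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
		out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
		if err != nil {
			fmt.Println("container status failed:", err)
		}
		fmt.Print(string(out))
	}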
	I0819 04:32:01.445869   17996 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 04:32:06.448570   17996 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 04:32:06.448687   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 04:32:06.459801   17996 logs.go:276] 2 containers: [eca2d24d2934 624b0550691c]
	I0819 04:32:06.459881   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 04:32:06.471061   17996 logs.go:276] 2 containers: [7bbc43a24759 e4bff7533378]
	I0819 04:32:06.471124   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 04:32:06.482513   17996 logs.go:276] 1 containers: [0aefd6691079]
	I0819 04:32:06.482590   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 04:32:06.493430   17996 logs.go:276] 2 containers: [43eabe1b4048 8f857b9db64e]
	I0819 04:32:06.493509   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 04:32:06.504155   17996 logs.go:276] 1 containers: [b8c8b382d646]
	I0819 04:32:06.504229   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 04:32:06.515153   17996 logs.go:276] 2 containers: [76c5e8a3e103 a4f14f99ca97]
	I0819 04:32:06.515221   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 04:32:06.525995   17996 logs.go:276] 0 containers: []
	W0819 04:32:06.526007   17996 logs.go:278] No container was found matching "kindnet"
	I0819 04:32:06.526067   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 04:32:06.537030   17996 logs.go:276] 2 containers: [849a5f11363e 45c03f07359d]
	I0819 04:32:06.537051   17996 logs.go:123] Gathering logs for kube-apiserver [eca2d24d2934] ...
	I0819 04:32:06.537057   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eca2d24d2934"
	I0819 04:32:06.550986   17996 logs.go:123] Gathering logs for kube-apiserver [624b0550691c] ...
	I0819 04:32:06.550995   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 624b0550691c"
	I0819 04:32:06.571642   17996 logs.go:123] Gathering logs for coredns [0aefd6691079] ...
	I0819 04:32:06.571653   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0aefd6691079"
	I0819 04:32:06.582820   17996 logs.go:123] Gathering logs for kube-proxy [b8c8b382d646] ...
	I0819 04:32:06.582831   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8c8b382d646"
	I0819 04:32:06.594874   17996 logs.go:123] Gathering logs for describe nodes ...
	I0819 04:32:06.594885   17996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 04:32:06.632955   17996 logs.go:123] Gathering logs for etcd [e4bff7533378] ...
	I0819 04:32:06.632969   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4bff7533378"
	I0819 04:32:06.652181   17996 logs.go:123] Gathering logs for Docker ...
	I0819 04:32:06.652194   17996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 04:32:06.677342   17996 logs.go:123] Gathering logs for container status ...
	I0819 04:32:06.677362   17996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 04:32:06.689901   17996 logs.go:123] Gathering logs for kubelet ...
	I0819 04:32:06.689917   17996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 04:32:06.731409   17996 logs.go:123] Gathering logs for storage-provisioner [45c03f07359d] ...
	I0819 04:32:06.731425   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45c03f07359d"
	I0819 04:32:06.742944   17996 logs.go:123] Gathering logs for dmesg ...
	I0819 04:32:06.742958   17996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 04:32:06.747481   17996 logs.go:123] Gathering logs for etcd [7bbc43a24759] ...
	I0819 04:32:06.747491   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7bbc43a24759"
	I0819 04:32:06.761922   17996 logs.go:123] Gathering logs for kube-scheduler [43eabe1b4048] ...
	I0819 04:32:06.761938   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 43eabe1b4048"
	I0819 04:32:06.779212   17996 logs.go:123] Gathering logs for kube-scheduler [8f857b9db64e] ...
	I0819 04:32:06.779225   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f857b9db64e"
	I0819 04:32:06.794663   17996 logs.go:123] Gathering logs for kube-controller-manager [76c5e8a3e103] ...
	I0819 04:32:06.794678   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 76c5e8a3e103"
	I0819 04:32:06.811758   17996 logs.go:123] Gathering logs for kube-controller-manager [a4f14f99ca97] ...
	I0819 04:32:06.811769   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a4f14f99ca97"
	I0819 04:32:06.827070   17996 logs.go:123] Gathering logs for storage-provisioner [849a5f11363e] ...
	I0819 04:32:06.827084   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 849a5f11363e"
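	The "describe nodes" step invokes the version-pinned kubectl that minikube ships into the guest (/var/lib/minikube/binaries/v1.24.1/kubectl), pointed at minikube's own kubeconfig rather than the user's. A sketch of the same invocation (again assuming local bash access to the guest):

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// The "describe nodes" command repeated in every sweep above.
		cmd := "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes" +
			" --kubeconfig=/var/lib/minikube/kubeconfig"
		out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
		if err != nil {
			fmt.Println("describe nodes failed:", err)
		}
		fmt.Print(string(out))
	}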
	I0819 04:32:09.339191   17996 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 04:32:14.340977   17996 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 04:32:14.341148   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 04:32:14.359980   17996 logs.go:276] 2 containers: [eca2d24d2934 624b0550691c]
	I0819 04:32:14.360082   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 04:32:14.376729   17996 logs.go:276] 2 containers: [7bbc43a24759 e4bff7533378]
	I0819 04:32:14.376834   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 04:32:14.388296   17996 logs.go:276] 1 containers: [0aefd6691079]
	I0819 04:32:14.388370   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 04:32:14.398890   17996 logs.go:276] 2 containers: [43eabe1b4048 8f857b9db64e]
	I0819 04:32:14.398954   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 04:32:14.412287   17996 logs.go:276] 1 containers: [b8c8b382d646]
	I0819 04:32:14.412358   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 04:32:14.423092   17996 logs.go:276] 2 containers: [76c5e8a3e103 a4f14f99ca97]
	I0819 04:32:14.423169   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 04:32:14.433795   17996 logs.go:276] 0 containers: []
	W0819 04:32:14.433807   17996 logs.go:278] No container was found matching "kindnet"
	I0819 04:32:14.433868   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 04:32:14.444527   17996 logs.go:276] 2 containers: [849a5f11363e 45c03f07359d]
	I0819 04:32:14.444545   17996 logs.go:123] Gathering logs for kube-scheduler [43eabe1b4048] ...
	I0819 04:32:14.444551   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 43eabe1b4048"
	I0819 04:32:14.456472   17996 logs.go:123] Gathering logs for kube-scheduler [8f857b9db64e] ...
	I0819 04:32:14.456485   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f857b9db64e"
	I0819 04:32:14.471023   17996 logs.go:123] Gathering logs for kube-controller-manager [a4f14f99ca97] ...
	I0819 04:32:14.471036   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a4f14f99ca97"
	I0819 04:32:14.482513   17996 logs.go:123] Gathering logs for Docker ...
	I0819 04:32:14.482524   17996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 04:32:14.505605   17996 logs.go:123] Gathering logs for container status ...
	I0819 04:32:14.505616   17996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 04:32:14.518669   17996 logs.go:123] Gathering logs for dmesg ...
	I0819 04:32:14.518681   17996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 04:32:14.523303   17996 logs.go:123] Gathering logs for etcd [7bbc43a24759] ...
	I0819 04:32:14.523314   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7bbc43a24759"
	I0819 04:32:14.537268   17996 logs.go:123] Gathering logs for etcd [e4bff7533378] ...
	I0819 04:32:14.537281   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4bff7533378"
	I0819 04:32:14.554116   17996 logs.go:123] Gathering logs for coredns [0aefd6691079] ...
	I0819 04:32:14.554128   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0aefd6691079"
	I0819 04:32:14.565485   17996 logs.go:123] Gathering logs for kube-proxy [b8c8b382d646] ...
	I0819 04:32:14.565499   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8c8b382d646"
	I0819 04:32:14.577101   17996 logs.go:123] Gathering logs for storage-provisioner [45c03f07359d] ...
	I0819 04:32:14.577112   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45c03f07359d"
	I0819 04:32:14.588674   17996 logs.go:123] Gathering logs for kube-apiserver [eca2d24d2934] ...
	I0819 04:32:14.588686   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eca2d24d2934"
	I0819 04:32:14.603763   17996 logs.go:123] Gathering logs for kubelet ...
	I0819 04:32:14.603775   17996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 04:32:14.644293   17996 logs.go:123] Gathering logs for describe nodes ...
	I0819 04:32:14.644302   17996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 04:32:14.679405   17996 logs.go:123] Gathering logs for kube-apiserver [624b0550691c] ...
	I0819 04:32:14.679420   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 624b0550691c"
	I0819 04:32:14.699362   17996 logs.go:123] Gathering logs for kube-controller-manager [76c5e8a3e103] ...
	I0819 04:32:14.699373   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 76c5e8a3e103"
	I0819 04:32:14.717849   17996 logs.go:123] Gathering logs for storage-provisioner [849a5f11363e] ...
	I0819 04:32:14.717863   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 849a5f11363e"
	I0819 04:32:17.232205   17996 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 04:32:22.234751   17996 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 04:32:22.235011   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 04:32:22.279776   17996 logs.go:276] 2 containers: [eca2d24d2934 624b0550691c]
	I0819 04:32:22.279878   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 04:32:22.301818   17996 logs.go:276] 2 containers: [7bbc43a24759 e4bff7533378]
	I0819 04:32:22.301909   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 04:32:22.328056   17996 logs.go:276] 1 containers: [0aefd6691079]
	I0819 04:32:22.328125   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 04:32:22.339099   17996 logs.go:276] 2 containers: [43eabe1b4048 8f857b9db64e]
	I0819 04:32:22.339179   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 04:32:22.349291   17996 logs.go:276] 1 containers: [b8c8b382d646]
	I0819 04:32:22.349355   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 04:32:22.359744   17996 logs.go:276] 2 containers: [76c5e8a3e103 a4f14f99ca97]
	I0819 04:32:22.359820   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 04:32:22.370701   17996 logs.go:276] 0 containers: []
	W0819 04:32:22.370712   17996 logs.go:278] No container was found matching "kindnet"
	I0819 04:32:22.370774   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 04:32:22.386563   17996 logs.go:276] 2 containers: [849a5f11363e 45c03f07359d]
	I0819 04:32:22.386582   17996 logs.go:123] Gathering logs for storage-provisioner [849a5f11363e] ...
	I0819 04:32:22.386587   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 849a5f11363e"
	I0819 04:32:22.398098   17996 logs.go:123] Gathering logs for container status ...
	I0819 04:32:22.398111   17996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 04:32:22.412678   17996 logs.go:123] Gathering logs for coredns [0aefd6691079] ...
	I0819 04:32:22.412695   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0aefd6691079"
	I0819 04:32:22.424236   17996 logs.go:123] Gathering logs for kube-scheduler [8f857b9db64e] ...
	I0819 04:32:22.424247   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f857b9db64e"
	I0819 04:32:22.438988   17996 logs.go:123] Gathering logs for kube-controller-manager [a4f14f99ca97] ...
	I0819 04:32:22.439002   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a4f14f99ca97"
	I0819 04:32:22.452536   17996 logs.go:123] Gathering logs for describe nodes ...
	I0819 04:32:22.452549   17996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 04:32:22.488017   17996 logs.go:123] Gathering logs for kube-apiserver [624b0550691c] ...
	I0819 04:32:22.488028   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 624b0550691c"
	I0819 04:32:22.508657   17996 logs.go:123] Gathering logs for kube-scheduler [43eabe1b4048] ...
	I0819 04:32:22.508693   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 43eabe1b4048"
	I0819 04:32:22.520632   17996 logs.go:123] Gathering logs for kube-proxy [b8c8b382d646] ...
	I0819 04:32:22.520642   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8c8b382d646"
	I0819 04:32:22.533917   17996 logs.go:123] Gathering logs for kube-controller-manager [76c5e8a3e103] ...
	I0819 04:32:22.533932   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 76c5e8a3e103"
	I0819 04:32:22.552301   17996 logs.go:123] Gathering logs for storage-provisioner [45c03f07359d] ...
	I0819 04:32:22.552310   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45c03f07359d"
	I0819 04:32:22.563375   17996 logs.go:123] Gathering logs for kubelet ...
	I0819 04:32:22.563389   17996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 04:32:22.604754   17996 logs.go:123] Gathering logs for dmesg ...
	I0819 04:32:22.604764   17996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 04:32:22.609640   17996 logs.go:123] Gathering logs for etcd [e4bff7533378] ...
	I0819 04:32:22.609648   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4bff7533378"
	I0819 04:32:22.633433   17996 logs.go:123] Gathering logs for kube-apiserver [eca2d24d2934] ...
	I0819 04:32:22.633445   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eca2d24d2934"
	I0819 04:32:22.647770   17996 logs.go:123] Gathering logs for etcd [7bbc43a24759] ...
	I0819 04:32:22.647783   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7bbc43a24759"
	I0819 04:32:22.662273   17996 logs.go:123] Gathering logs for Docker ...
	I0819 04:32:22.662283   17996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 04:32:25.187633   17996 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 04:32:30.190031   17996 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 04:32:30.190434   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 04:32:30.227107   17996 logs.go:276] 2 containers: [eca2d24d2934 624b0550691c]
	I0819 04:32:30.227238   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 04:32:30.252555   17996 logs.go:276] 2 containers: [7bbc43a24759 e4bff7533378]
	I0819 04:32:30.252644   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 04:32:30.266809   17996 logs.go:276] 1 containers: [0aefd6691079]
	I0819 04:32:30.266884   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 04:32:30.279412   17996 logs.go:276] 2 containers: [43eabe1b4048 8f857b9db64e]
	I0819 04:32:30.279496   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 04:32:30.290640   17996 logs.go:276] 1 containers: [b8c8b382d646]
	I0819 04:32:30.290707   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 04:32:30.301650   17996 logs.go:276] 2 containers: [76c5e8a3e103 a4f14f99ca97]
	I0819 04:32:30.301719   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 04:32:30.312552   17996 logs.go:276] 0 containers: []
	W0819 04:32:30.312563   17996 logs.go:278] No container was found matching "kindnet"
	I0819 04:32:30.312612   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 04:32:30.323268   17996 logs.go:276] 2 containers: [849a5f11363e 45c03f07359d]
	I0819 04:32:30.323295   17996 logs.go:123] Gathering logs for coredns [0aefd6691079] ...
	I0819 04:32:30.323306   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0aefd6691079"
	I0819 04:32:30.336291   17996 logs.go:123] Gathering logs for kubelet ...
	I0819 04:32:30.336303   17996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 04:32:30.375820   17996 logs.go:123] Gathering logs for kube-apiserver [624b0550691c] ...
	I0819 04:32:30.375833   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 624b0550691c"
	I0819 04:32:30.396198   17996 logs.go:123] Gathering logs for kube-scheduler [43eabe1b4048] ...
	I0819 04:32:30.396211   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 43eabe1b4048"
	I0819 04:32:30.408825   17996 logs.go:123] Gathering logs for Docker ...
	I0819 04:32:30.408839   17996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 04:32:30.434340   17996 logs.go:123] Gathering logs for etcd [7bbc43a24759] ...
	I0819 04:32:30.434349   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7bbc43a24759"
	I0819 04:32:30.451492   17996 logs.go:123] Gathering logs for kube-proxy [b8c8b382d646] ...
	I0819 04:32:30.451505   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8c8b382d646"
	I0819 04:32:30.468150   17996 logs.go:123] Gathering logs for kube-controller-manager [76c5e8a3e103] ...
	I0819 04:32:30.468163   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 76c5e8a3e103"
	I0819 04:32:30.486093   17996 logs.go:123] Gathering logs for storage-provisioner [849a5f11363e] ...
	I0819 04:32:30.486104   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 849a5f11363e"
	I0819 04:32:30.497403   17996 logs.go:123] Gathering logs for container status ...
	I0819 04:32:30.497415   17996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 04:32:30.510675   17996 logs.go:123] Gathering logs for dmesg ...
	I0819 04:32:30.510688   17996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 04:32:30.515076   17996 logs.go:123] Gathering logs for describe nodes ...
	I0819 04:32:30.515084   17996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 04:32:30.550454   17996 logs.go:123] Gathering logs for kube-scheduler [8f857b9db64e] ...
	I0819 04:32:30.550467   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f857b9db64e"
	I0819 04:32:30.565311   17996 logs.go:123] Gathering logs for kube-controller-manager [a4f14f99ca97] ...
	I0819 04:32:30.565320   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a4f14f99ca97"
	I0819 04:32:30.576856   17996 logs.go:123] Gathering logs for storage-provisioner [45c03f07359d] ...
	I0819 04:32:30.576868   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45c03f07359d"
	I0819 04:32:30.588084   17996 logs.go:123] Gathering logs for kube-apiserver [eca2d24d2934] ...
	I0819 04:32:30.588092   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eca2d24d2934"
	I0819 04:32:30.602432   17996 logs.go:123] Gathering logs for etcd [e4bff7533378] ...
	I0819 04:32:30.602442   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4bff7533378"
	I0819 04:32:33.126078   17996 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 04:32:38.128671   17996 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 04:32:38.128874   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 04:32:38.148732   17996 logs.go:276] 2 containers: [eca2d24d2934 624b0550691c]
	I0819 04:32:38.148810   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 04:32:38.159718   17996 logs.go:276] 2 containers: [7bbc43a24759 e4bff7533378]
	I0819 04:32:38.159802   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 04:32:38.170851   17996 logs.go:276] 1 containers: [0aefd6691079]
	I0819 04:32:38.170928   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 04:32:38.185752   17996 logs.go:276] 2 containers: [43eabe1b4048 8f857b9db64e]
	I0819 04:32:38.185816   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 04:32:38.196935   17996 logs.go:276] 1 containers: [b8c8b382d646]
	I0819 04:32:38.197004   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 04:32:38.207830   17996 logs.go:276] 2 containers: [76c5e8a3e103 a4f14f99ca97]
	I0819 04:32:38.207899   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 04:32:38.217769   17996 logs.go:276] 0 containers: []
	W0819 04:32:38.217780   17996 logs.go:278] No container was found matching "kindnet"
	I0819 04:32:38.217835   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 04:32:38.228420   17996 logs.go:276] 2 containers: [849a5f11363e 45c03f07359d]
	I0819 04:32:38.228438   17996 logs.go:123] Gathering logs for etcd [e4bff7533378] ...
	I0819 04:32:38.228445   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4bff7533378"
	I0819 04:32:38.246908   17996 logs.go:123] Gathering logs for coredns [0aefd6691079] ...
	I0819 04:32:38.246919   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0aefd6691079"
	I0819 04:32:38.259071   17996 logs.go:123] Gathering logs for kube-proxy [b8c8b382d646] ...
	I0819 04:32:38.259085   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8c8b382d646"
	I0819 04:32:38.271781   17996 logs.go:123] Gathering logs for Docker ...
	I0819 04:32:38.271791   17996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 04:32:38.296219   17996 logs.go:123] Gathering logs for describe nodes ...
	I0819 04:32:38.296226   17996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 04:32:38.331514   17996 logs.go:123] Gathering logs for etcd [7bbc43a24759] ...
	I0819 04:32:38.331526   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7bbc43a24759"
	I0819 04:32:38.353982   17996 logs.go:123] Gathering logs for kube-controller-manager [76c5e8a3e103] ...
	I0819 04:32:38.353996   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 76c5e8a3e103"
	I0819 04:32:38.375361   17996 logs.go:123] Gathering logs for storage-provisioner [849a5f11363e] ...
	I0819 04:32:38.375373   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 849a5f11363e"
	I0819 04:32:38.389750   17996 logs.go:123] Gathering logs for container status ...
	I0819 04:32:38.389764   17996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 04:32:38.402322   17996 logs.go:123] Gathering logs for kubelet ...
	I0819 04:32:38.402334   17996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 04:32:38.442449   17996 logs.go:123] Gathering logs for kube-apiserver [eca2d24d2934] ...
	I0819 04:32:38.442459   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eca2d24d2934"
	I0819 04:32:38.456571   17996 logs.go:123] Gathering logs for kube-controller-manager [a4f14f99ca97] ...
	I0819 04:32:38.456584   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a4f14f99ca97"
	I0819 04:32:38.473527   17996 logs.go:123] Gathering logs for dmesg ...
	I0819 04:32:38.473538   17996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 04:32:38.478784   17996 logs.go:123] Gathering logs for kube-scheduler [43eabe1b4048] ...
	I0819 04:32:38.478793   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 43eabe1b4048"
	I0819 04:32:38.491196   17996 logs.go:123] Gathering logs for storage-provisioner [45c03f07359d] ...
	I0819 04:32:38.491207   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45c03f07359d"
	I0819 04:32:38.503039   17996 logs.go:123] Gathering logs for kube-apiserver [624b0550691c] ...
	I0819 04:32:38.503050   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 624b0550691c"
	I0819 04:32:38.523586   17996 logs.go:123] Gathering logs for kube-scheduler [8f857b9db64e] ...
	I0819 04:32:38.523598   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f857b9db64e"
	I0819 04:32:41.039804   17996 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 04:32:46.042125   17996 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 04:32:46.042597   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 04:32:46.082690   17996 logs.go:276] 2 containers: [eca2d24d2934 624b0550691c]
	I0819 04:32:46.082832   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 04:32:46.104223   17996 logs.go:276] 2 containers: [7bbc43a24759 e4bff7533378]
	I0819 04:32:46.104324   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 04:32:46.121217   17996 logs.go:276] 1 containers: [0aefd6691079]
	I0819 04:32:46.121298   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 04:32:46.133927   17996 logs.go:276] 2 containers: [43eabe1b4048 8f857b9db64e]
	I0819 04:32:46.134006   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 04:32:46.144311   17996 logs.go:276] 1 containers: [b8c8b382d646]
	I0819 04:32:46.144375   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 04:32:46.158738   17996 logs.go:276] 2 containers: [76c5e8a3e103 a4f14f99ca97]
	I0819 04:32:46.158810   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 04:32:46.169437   17996 logs.go:276] 0 containers: []
	W0819 04:32:46.169447   17996 logs.go:278] No container was found matching "kindnet"
	I0819 04:32:46.169501   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 04:32:46.179632   17996 logs.go:276] 2 containers: [849a5f11363e 45c03f07359d]
	I0819 04:32:46.179651   17996 logs.go:123] Gathering logs for kubelet ...
	I0819 04:32:46.179656   17996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 04:32:46.218666   17996 logs.go:123] Gathering logs for describe nodes ...
	I0819 04:32:46.218685   17996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 04:32:46.256649   17996 logs.go:123] Gathering logs for kube-apiserver [eca2d24d2934] ...
	I0819 04:32:46.256662   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eca2d24d2934"
	I0819 04:32:46.270659   17996 logs.go:123] Gathering logs for Docker ...
	I0819 04:32:46.270668   17996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 04:32:46.294415   17996 logs.go:123] Gathering logs for kube-scheduler [43eabe1b4048] ...
	I0819 04:32:46.294423   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 43eabe1b4048"
	I0819 04:32:46.306256   17996 logs.go:123] Gathering logs for kube-scheduler [8f857b9db64e] ...
	I0819 04:32:46.306269   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f857b9db64e"
	I0819 04:32:46.320928   17996 logs.go:123] Gathering logs for kube-proxy [b8c8b382d646] ...
	I0819 04:32:46.320943   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8c8b382d646"
	I0819 04:32:46.332734   17996 logs.go:123] Gathering logs for storage-provisioner [849a5f11363e] ...
	I0819 04:32:46.332748   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 849a5f11363e"
	I0819 04:32:46.343754   17996 logs.go:123] Gathering logs for kube-apiserver [624b0550691c] ...
	I0819 04:32:46.343764   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 624b0550691c"
	I0819 04:32:46.367898   17996 logs.go:123] Gathering logs for etcd [7bbc43a24759] ...
	I0819 04:32:46.367908   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7bbc43a24759"
	I0819 04:32:46.384746   17996 logs.go:123] Gathering logs for kube-controller-manager [a4f14f99ca97] ...
	I0819 04:32:46.384758   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a4f14f99ca97"
	I0819 04:32:46.403574   17996 logs.go:123] Gathering logs for container status ...
	I0819 04:32:46.403587   17996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 04:32:46.415949   17996 logs.go:123] Gathering logs for dmesg ...
	I0819 04:32:46.415960   17996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 04:32:46.420220   17996 logs.go:123] Gathering logs for etcd [e4bff7533378] ...
	I0819 04:32:46.420230   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4bff7533378"
	I0819 04:32:46.452043   17996 logs.go:123] Gathering logs for coredns [0aefd6691079] ...
	I0819 04:32:46.452060   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0aefd6691079"
	I0819 04:32:46.467482   17996 logs.go:123] Gathering logs for kube-controller-manager [76c5e8a3e103] ...
	I0819 04:32:46.467492   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 76c5e8a3e103"
	I0819 04:32:46.487574   17996 logs.go:123] Gathering logs for storage-provisioner [45c03f07359d] ...
	I0819 04:32:46.487584   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45c03f07359d"
	I0819 04:32:49.006261   17996 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 04:32:54.008733   17996 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 04:32:54.008827   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 04:32:54.021379   17996 logs.go:276] 2 containers: [eca2d24d2934 624b0550691c]
	I0819 04:32:54.021470   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 04:32:54.036415   17996 logs.go:276] 2 containers: [7bbc43a24759 e4bff7533378]
	I0819 04:32:54.036625   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 04:32:54.051061   17996 logs.go:276] 1 containers: [0aefd6691079]
	I0819 04:32:54.051142   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 04:32:54.062892   17996 logs.go:276] 2 containers: [43eabe1b4048 8f857b9db64e]
	I0819 04:32:54.062980   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 04:32:54.075064   17996 logs.go:276] 1 containers: [b8c8b382d646]
	I0819 04:32:54.075135   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 04:32:54.086940   17996 logs.go:276] 2 containers: [76c5e8a3e103 a4f14f99ca97]
	I0819 04:32:54.087024   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 04:32:54.099601   17996 logs.go:276] 0 containers: []
	W0819 04:32:54.099614   17996 logs.go:278] No container was found matching "kindnet"
	I0819 04:32:54.099679   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 04:32:54.111454   17996 logs.go:276] 2 containers: [849a5f11363e 45c03f07359d]
	I0819 04:32:54.111474   17996 logs.go:123] Gathering logs for kube-apiserver [eca2d24d2934] ...
	I0819 04:32:54.111481   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eca2d24d2934"
	I0819 04:32:54.127269   17996 logs.go:123] Gathering logs for etcd [7bbc43a24759] ...
	I0819 04:32:54.127285   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7bbc43a24759"
	I0819 04:32:54.147225   17996 logs.go:123] Gathering logs for kube-controller-manager [a4f14f99ca97] ...
	I0819 04:32:54.147238   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a4f14f99ca97"
	I0819 04:32:54.163398   17996 logs.go:123] Gathering logs for storage-provisioner [849a5f11363e] ...
	I0819 04:32:54.163411   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 849a5f11363e"
	I0819 04:32:54.176161   17996 logs.go:123] Gathering logs for kubelet ...
	I0819 04:32:54.176178   17996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 04:32:54.221508   17996 logs.go:123] Gathering logs for dmesg ...
	I0819 04:32:54.221528   17996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 04:32:54.229102   17996 logs.go:123] Gathering logs for describe nodes ...
	I0819 04:32:54.229189   17996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 04:32:54.273539   17996 logs.go:123] Gathering logs for kube-scheduler [43eabe1b4048] ...
	I0819 04:32:54.273552   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 43eabe1b4048"
	I0819 04:32:54.287096   17996 logs.go:123] Gathering logs for kube-scheduler [8f857b9db64e] ...
	I0819 04:32:54.287109   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f857b9db64e"
	I0819 04:32:54.302903   17996 logs.go:123] Gathering logs for kube-proxy [b8c8b382d646] ...
	I0819 04:32:54.302914   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8c8b382d646"
	I0819 04:32:54.316266   17996 logs.go:123] Gathering logs for container status ...
	I0819 04:32:54.316278   17996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 04:32:54.330544   17996 logs.go:123] Gathering logs for etcd [e4bff7533378] ...
	I0819 04:32:54.330556   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4bff7533378"
	I0819 04:32:54.351979   17996 logs.go:123] Gathering logs for coredns [0aefd6691079] ...
	I0819 04:32:54.351999   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0aefd6691079"
	I0819 04:32:54.365064   17996 logs.go:123] Gathering logs for kube-apiserver [624b0550691c] ...
	I0819 04:32:54.365076   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 624b0550691c"
	I0819 04:32:54.391357   17996 logs.go:123] Gathering logs for kube-controller-manager [76c5e8a3e103] ...
	I0819 04:32:54.391385   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 76c5e8a3e103"
	I0819 04:32:54.413435   17996 logs.go:123] Gathering logs for storage-provisioner [45c03f07359d] ...
	I0819 04:32:54.413449   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45c03f07359d"
	I0819 04:32:54.426882   17996 logs.go:123] Gathering logs for Docker ...
	I0819 04:32:54.426895   17996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
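Each gathering pass above is the same two-step pattern repeated per component: list candidate container IDs with a docker name filter, then tail the last 400 lines from each ID. A hedged sketch of one such pass (the component name is illustrative):

    # Find the containers for one control-plane component, then tail their
    # logs, mirroring the docker ps / docker logs pairs in the entries above.
    ids=$(docker ps -a --filter=name=k8s_kube-apiserver --format '{{.ID}}')
    for id in $ids; do
      docker logs --tail 400 "$id"
    done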
	I0819 04:32:56.951960   17996 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 04:33:01.954045   17996 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 04:33:01.954177   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 04:33:01.965656   17996 logs.go:276] 2 containers: [eca2d24d2934 624b0550691c]
	I0819 04:33:01.965731   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 04:33:01.977427   17996 logs.go:276] 2 containers: [7bbc43a24759 e4bff7533378]
	I0819 04:33:01.977504   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 04:33:01.988248   17996 logs.go:276] 1 containers: [0aefd6691079]
	I0819 04:33:01.988321   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 04:33:01.998897   17996 logs.go:276] 2 containers: [43eabe1b4048 8f857b9db64e]
	I0819 04:33:01.998967   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 04:33:02.009583   17996 logs.go:276] 1 containers: [b8c8b382d646]
	I0819 04:33:02.009654   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 04:33:02.020954   17996 logs.go:276] 2 containers: [76c5e8a3e103 a4f14f99ca97]
	I0819 04:33:02.021025   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 04:33:02.031579   17996 logs.go:276] 0 containers: []
	W0819 04:33:02.031592   17996 logs.go:278] No container was found matching "kindnet"
	I0819 04:33:02.031648   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 04:33:02.042573   17996 logs.go:276] 2 containers: [849a5f11363e 45c03f07359d]
	I0819 04:33:02.042593   17996 logs.go:123] Gathering logs for kubelet ...
	I0819 04:33:02.042598   17996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 04:33:02.084428   17996 logs.go:123] Gathering logs for etcd [7bbc43a24759] ...
	I0819 04:33:02.084448   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7bbc43a24759"
	I0819 04:33:02.099271   17996 logs.go:123] Gathering logs for coredns [0aefd6691079] ...
	I0819 04:33:02.099284   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0aefd6691079"
	I0819 04:33:02.116053   17996 logs.go:123] Gathering logs for kube-scheduler [8f857b9db64e] ...
	I0819 04:33:02.116065   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f857b9db64e"
	I0819 04:33:02.130698   17996 logs.go:123] Gathering logs for kube-proxy [b8c8b382d646] ...
	I0819 04:33:02.130711   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8c8b382d646"
	I0819 04:33:02.142758   17996 logs.go:123] Gathering logs for kube-controller-manager [76c5e8a3e103] ...
	I0819 04:33:02.142770   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 76c5e8a3e103"
	I0819 04:33:02.160981   17996 logs.go:123] Gathering logs for dmesg ...
	I0819 04:33:02.160991   17996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 04:33:02.166091   17996 logs.go:123] Gathering logs for kube-controller-manager [a4f14f99ca97] ...
	I0819 04:33:02.166098   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a4f14f99ca97"
	I0819 04:33:02.178233   17996 logs.go:123] Gathering logs for Docker ...
	I0819 04:33:02.178244   17996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 04:33:02.202777   17996 logs.go:123] Gathering logs for kube-apiserver [eca2d24d2934] ...
	I0819 04:33:02.202789   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eca2d24d2934"
	I0819 04:33:02.217474   17996 logs.go:123] Gathering logs for kube-apiserver [624b0550691c] ...
	I0819 04:33:02.217487   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 624b0550691c"
	I0819 04:33:02.242262   17996 logs.go:123] Gathering logs for storage-provisioner [849a5f11363e] ...
	I0819 04:33:02.242276   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 849a5f11363e"
	I0819 04:33:02.255121   17996 logs.go:123] Gathering logs for storage-provisioner [45c03f07359d] ...
	I0819 04:33:02.255133   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45c03f07359d"
	I0819 04:33:02.267057   17996 logs.go:123] Gathering logs for container status ...
	I0819 04:33:02.267070   17996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 04:33:02.279300   17996 logs.go:123] Gathering logs for describe nodes ...
	I0819 04:33:02.279311   17996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 04:33:02.321120   17996 logs.go:123] Gathering logs for etcd [e4bff7533378] ...
	I0819 04:33:02.321132   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4bff7533378"
	I0819 04:33:02.339258   17996 logs.go:123] Gathering logs for kube-scheduler [43eabe1b4048] ...
	I0819 04:33:02.339271   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 43eabe1b4048"
	I0819 04:33:04.854619   17996 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 04:33:09.856763   17996 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 04:33:09.856931   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 04:33:09.868887   17996 logs.go:276] 2 containers: [eca2d24d2934 624b0550691c]
	I0819 04:33:09.868967   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 04:33:09.879866   17996 logs.go:276] 2 containers: [7bbc43a24759 e4bff7533378]
	I0819 04:33:09.879942   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 04:33:09.890405   17996 logs.go:276] 1 containers: [0aefd6691079]
	I0819 04:33:09.890476   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 04:33:09.901235   17996 logs.go:276] 2 containers: [43eabe1b4048 8f857b9db64e]
	I0819 04:33:09.901305   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 04:33:09.912149   17996 logs.go:276] 1 containers: [b8c8b382d646]
	I0819 04:33:09.912216   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 04:33:09.923233   17996 logs.go:276] 2 containers: [76c5e8a3e103 a4f14f99ca97]
	I0819 04:33:09.923304   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 04:33:09.935827   17996 logs.go:276] 0 containers: []
	W0819 04:33:09.935838   17996 logs.go:278] No container was found matching "kindnet"
	I0819 04:33:09.935896   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 04:33:09.947079   17996 logs.go:276] 2 containers: [849a5f11363e 45c03f07359d]
	I0819 04:33:09.947099   17996 logs.go:123] Gathering logs for dmesg ...
	I0819 04:33:09.947105   17996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 04:33:09.951384   17996 logs.go:123] Gathering logs for kube-apiserver [624b0550691c] ...
	I0819 04:33:09.951393   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 624b0550691c"
	I0819 04:33:09.970950   17996 logs.go:123] Gathering logs for kube-scheduler [43eabe1b4048] ...
	I0819 04:33:09.970961   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 43eabe1b4048"
	I0819 04:33:09.983489   17996 logs.go:123] Gathering logs for kube-apiserver [eca2d24d2934] ...
	I0819 04:33:09.983502   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eca2d24d2934"
	I0819 04:33:09.997587   17996 logs.go:123] Gathering logs for container status ...
	I0819 04:33:09.997595   17996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 04:33:10.010190   17996 logs.go:123] Gathering logs for etcd [7bbc43a24759] ...
	I0819 04:33:10.010199   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7bbc43a24759"
	I0819 04:33:10.024792   17996 logs.go:123] Gathering logs for kube-controller-manager [a4f14f99ca97] ...
	I0819 04:33:10.024805   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a4f14f99ca97"
	I0819 04:33:10.036974   17996 logs.go:123] Gathering logs for storage-provisioner [849a5f11363e] ...
	I0819 04:33:10.036985   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 849a5f11363e"
	I0819 04:33:10.048369   17996 logs.go:123] Gathering logs for coredns [0aefd6691079] ...
	I0819 04:33:10.048380   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0aefd6691079"
	I0819 04:33:10.060291   17996 logs.go:123] Gathering logs for kube-scheduler [8f857b9db64e] ...
	I0819 04:33:10.060305   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f857b9db64e"
	I0819 04:33:10.075378   17996 logs.go:123] Gathering logs for kube-proxy [b8c8b382d646] ...
	I0819 04:33:10.075389   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8c8b382d646"
	I0819 04:33:10.088411   17996 logs.go:123] Gathering logs for kube-controller-manager [76c5e8a3e103] ...
	I0819 04:33:10.088422   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 76c5e8a3e103"
	I0819 04:33:10.106070   17996 logs.go:123] Gathering logs for storage-provisioner [45c03f07359d] ...
	I0819 04:33:10.106080   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45c03f07359d"
	I0819 04:33:10.118157   17996 logs.go:123] Gathering logs for kubelet ...
	I0819 04:33:10.118169   17996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 04:33:10.157767   17996 logs.go:123] Gathering logs for describe nodes ...
	I0819 04:33:10.157782   17996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 04:33:10.192517   17996 logs.go:123] Gathering logs for etcd [e4bff7533378] ...
	I0819 04:33:10.192532   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4bff7533378"
	I0819 04:33:10.210403   17996 logs.go:123] Gathering logs for Docker ...
	I0819 04:33:10.210415   17996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 04:33:12.736240   17996 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 04:33:17.736661   17996 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 04:33:17.736776   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 04:33:17.747889   17996 logs.go:276] 2 containers: [eca2d24d2934 624b0550691c]
	I0819 04:33:17.747971   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 04:33:17.762795   17996 logs.go:276] 2 containers: [7bbc43a24759 e4bff7533378]
	I0819 04:33:17.762876   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 04:33:17.774048   17996 logs.go:276] 1 containers: [0aefd6691079]
	I0819 04:33:17.774124   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 04:33:17.784703   17996 logs.go:276] 2 containers: [43eabe1b4048 8f857b9db64e]
	I0819 04:33:17.784773   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 04:33:17.803462   17996 logs.go:276] 1 containers: [b8c8b382d646]
	I0819 04:33:17.803528   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 04:33:17.814211   17996 logs.go:276] 2 containers: [76c5e8a3e103 a4f14f99ca97]
	I0819 04:33:17.814279   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 04:33:17.824528   17996 logs.go:276] 0 containers: []
	W0819 04:33:17.824538   17996 logs.go:278] No container was found matching "kindnet"
	I0819 04:33:17.824589   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 04:33:17.834844   17996 logs.go:276] 2 containers: [849a5f11363e 45c03f07359d]
	I0819 04:33:17.834864   17996 logs.go:123] Gathering logs for kube-scheduler [43eabe1b4048] ...
	I0819 04:33:17.834869   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 43eabe1b4048"
	I0819 04:33:17.849746   17996 logs.go:123] Gathering logs for storage-provisioner [849a5f11363e] ...
	I0819 04:33:17.849758   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 849a5f11363e"
	I0819 04:33:17.861742   17996 logs.go:123] Gathering logs for storage-provisioner [45c03f07359d] ...
	I0819 04:33:17.861753   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45c03f07359d"
	I0819 04:33:17.873728   17996 logs.go:123] Gathering logs for Docker ...
	I0819 04:33:17.873740   17996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 04:33:17.896216   17996 logs.go:123] Gathering logs for describe nodes ...
	I0819 04:33:17.896235   17996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 04:33:17.932760   17996 logs.go:123] Gathering logs for etcd [7bbc43a24759] ...
	I0819 04:33:17.932771   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7bbc43a24759"
	I0819 04:33:17.947113   17996 logs.go:123] Gathering logs for kube-apiserver [eca2d24d2934] ...
	I0819 04:33:17.947125   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eca2d24d2934"
	I0819 04:33:17.961282   17996 logs.go:123] Gathering logs for kube-proxy [b8c8b382d646] ...
	I0819 04:33:17.961292   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8c8b382d646"
	I0819 04:33:17.977027   17996 logs.go:123] Gathering logs for kubelet ...
	I0819 04:33:17.977041   17996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 04:33:18.017825   17996 logs.go:123] Gathering logs for dmesg ...
	I0819 04:33:18.017836   17996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 04:33:18.022290   17996 logs.go:123] Gathering logs for coredns [0aefd6691079] ...
	I0819 04:33:18.022299   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0aefd6691079"
	I0819 04:33:18.033120   17996 logs.go:123] Gathering logs for kube-scheduler [8f857b9db64e] ...
	I0819 04:33:18.033132   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f857b9db64e"
	I0819 04:33:18.048178   17996 logs.go:123] Gathering logs for kube-controller-manager [76c5e8a3e103] ...
	I0819 04:33:18.048188   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 76c5e8a3e103"
	I0819 04:33:18.065941   17996 logs.go:123] Gathering logs for container status ...
	I0819 04:33:18.065952   17996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 04:33:18.078497   17996 logs.go:123] Gathering logs for kube-apiserver [624b0550691c] ...
	I0819 04:33:18.078509   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 624b0550691c"
	I0819 04:33:18.098456   17996 logs.go:123] Gathering logs for etcd [e4bff7533378] ...
	I0819 04:33:18.098466   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4bff7533378"
	I0819 04:33:18.116355   17996 logs.go:123] Gathering logs for kube-controller-manager [a4f14f99ca97] ...
	I0819 04:33:18.116368   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a4f14f99ca97"
	I0819 04:33:20.629968   17996 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 04:33:25.632372   17996 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 04:33:25.632448   17996 kubeadm.go:597] duration metric: took 4m4.468888041s to restartPrimaryControlPlane
	W0819 04:33:25.632511   17996 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0819 04:33:25.632542   17996 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0819 04:33:26.602323   17996 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 04:33:26.607271   17996 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0819 04:33:26.609970   17996 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0819 04:33:26.612788   17996 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0819 04:33:26.612795   17996 kubeadm.go:157] found existing configuration files:
	
	I0819 04:33:26.612823   17996 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:53188 /etc/kubernetes/admin.conf
	I0819 04:33:26.615541   17996 kubeadm.go:163] "https://control-plane.minikube.internal:53188" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:53188 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0819 04:33:26.615564   17996 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0819 04:33:26.617956   17996 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:53188 /etc/kubernetes/kubelet.conf
	I0819 04:33:26.620918   17996 kubeadm.go:163] "https://control-plane.minikube.internal:53188" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:53188 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0819 04:33:26.620942   17996 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0819 04:33:26.624151   17996 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:53188 /etc/kubernetes/controller-manager.conf
	I0819 04:33:26.626585   17996 kubeadm.go:163] "https://control-plane.minikube.internal:53188" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:53188 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0819 04:33:26.626606   17996 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0819 04:33:26.629395   17996 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:53188 /etc/kubernetes/scheduler.conf
	I0819 04:33:26.632655   17996 kubeadm.go:163] "https://control-plane.minikube.internal:53188" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:53188 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0819 04:33:26.632685   17996 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
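The four grep/rm pairs above implement one cleanup rule: any kubeconfig under /etc/kubernetes that does not reference the expected control-plane endpoint is removed so that kubeadm can regenerate it. A compact sketch of the same rule (endpoint copied from the log lines above):

    # Drop stale kubeconfigs that do not point at the expected endpoint.
    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
      if ! sudo grep -q "https://control-plane.minikube.internal:53188" "/etc/kubernetes/$f"; then
        sudo rm -f "/etc/kubernetes/$f"
      fi
    done

Here all four files are already absent, so each grep exits with status 2 (file not found) and each rm is a no-op before the fresh kubeadm init below.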
	I0819 04:33:26.635773   17996 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0819 04:33:26.654580   17996 kubeadm.go:310] [init] Using Kubernetes version: v1.24.1
	I0819 04:33:26.654674   17996 kubeadm.go:310] [preflight] Running pre-flight checks
	I0819 04:33:26.701581   17996 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0819 04:33:26.701664   17996 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0819 04:33:26.701720   17996 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0819 04:33:26.751598   17996 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0819 04:33:26.755698   17996 out.go:235]   - Generating certificates and keys ...
	I0819 04:33:26.755732   17996 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0819 04:33:26.755768   17996 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0819 04:33:26.755825   17996 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0819 04:33:26.755863   17996 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0819 04:33:26.755893   17996 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0819 04:33:26.755922   17996 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0819 04:33:26.755952   17996 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0819 04:33:26.756033   17996 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0819 04:33:26.756147   17996 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0819 04:33:26.756190   17996 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0819 04:33:26.756211   17996 kubeadm.go:310] [certs] Using the existing "sa" key
	I0819 04:33:26.756238   17996 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0819 04:33:26.873183   17996 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0819 04:33:26.900590   17996 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0819 04:33:27.008210   17996 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0819 04:33:27.271475   17996 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0819 04:33:27.302374   17996 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0819 04:33:27.303644   17996 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0819 04:33:27.303681   17996 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0819 04:33:27.392181   17996 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0819 04:33:27.395385   17996 out.go:235]   - Booting up control plane ...
	I0819 04:33:27.395453   17996 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0819 04:33:27.395498   17996 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0819 04:33:27.395592   17996 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0819 04:33:27.395678   17996 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0819 04:33:27.395906   17996 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0819 04:33:31.899066   17996 kubeadm.go:310] [apiclient] All control plane components are healthy after 4.504303 seconds
	I0819 04:33:31.899128   17996 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0819 04:33:31.902659   17996 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0819 04:33:32.425951   17996 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0819 04:33:32.426347   17996 kubeadm.go:310] [mark-control-plane] Marking the node running-upgrade-038000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0819 04:33:32.929932   17996 kubeadm.go:310] [bootstrap-token] Using token: u07p32.ydjqlodqa5aupx7g
	I0819 04:33:32.935342   17996 out.go:235]   - Configuring RBAC rules ...
	I0819 04:33:32.935398   17996 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0819 04:33:32.935468   17996 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0819 04:33:32.937154   17996 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0819 04:33:32.938896   17996 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0819 04:33:32.939700   17996 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0819 04:33:32.940532   17996 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0819 04:33:32.943600   17996 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0819 04:33:33.101742   17996 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0819 04:33:33.334158   17996 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0819 04:33:33.334657   17996 kubeadm.go:310] 
	I0819 04:33:33.334703   17996 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0819 04:33:33.334708   17996 kubeadm.go:310] 
	I0819 04:33:33.334775   17996 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0819 04:33:33.334780   17996 kubeadm.go:310] 
	I0819 04:33:33.334803   17996 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0819 04:33:33.334833   17996 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0819 04:33:33.334897   17996 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0819 04:33:33.334904   17996 kubeadm.go:310] 
	I0819 04:33:33.334939   17996 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0819 04:33:33.334944   17996 kubeadm.go:310] 
	I0819 04:33:33.335041   17996 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0819 04:33:33.335046   17996 kubeadm.go:310] 
	I0819 04:33:33.335077   17996 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0819 04:33:33.335126   17996 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0819 04:33:33.335189   17996 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0819 04:33:33.335192   17996 kubeadm.go:310] 
	I0819 04:33:33.335227   17996 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0819 04:33:33.335260   17996 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0819 04:33:33.335291   17996 kubeadm.go:310] 
	I0819 04:33:33.335332   17996 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token u07p32.ydjqlodqa5aupx7g \
	I0819 04:33:33.335390   17996 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:fdec06fb19d9977c9b3b338deaa57f7eb3ba1844358bb196808407a1fb1d5577 \
	I0819 04:33:33.335406   17996 kubeadm.go:310] 	--control-plane 
	I0819 04:33:33.335410   17996 kubeadm.go:310] 
	I0819 04:33:33.335458   17996 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0819 04:33:33.335463   17996 kubeadm.go:310] 
	I0819 04:33:33.335497   17996 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token u07p32.ydjqlodqa5aupx7g \
	I0819 04:33:33.335561   17996 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:fdec06fb19d9977c9b3b338deaa57f7eb3ba1844358bb196808407a1fb1d5577 
	I0819 04:33:33.335616   17996 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
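The init output closes with a preflight warning that the kubelet systemd unit is not enabled; the fix it suggests, run inside the guest, is simply:

    # Enable kubelet at boot, exactly as the kubeadm warning suggests.
    sudo systemctl enable kubelet.service

This only affects subsequent boots; minikube starts the service explicitly later in this run (see the systemctl start kubelet entry below).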
	I0819 04:33:33.335630   17996 cni.go:84] Creating CNI manager for ""
	I0819 04:33:33.335640   17996 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0819 04:33:33.339905   17996 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0819 04:33:33.343957   17996 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0819 04:33:33.346982   17996 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
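The 496-byte conflist itself is not captured in the log. A typical bridge CNI configuration has roughly the following shape; every value here is illustrative, not the exact file minikube wrote:

    # Write a minimal bridge CNI config; names and subnet are assumptions.
    sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
    {
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isGateway": true,
          "ipMasq": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
    EOF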
	I0819 04:33:33.352240   17996 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0819 04:33:33.352288   17996 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 04:33:33.352297   17996 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes running-upgrade-038000 minikube.k8s.io/updated_at=2024_08_19T04_33_33_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=c539cede7c104fd836c3af55c4ca24a6409a3ce6 minikube.k8s.io/name=running-upgrade-038000 minikube.k8s.io/primary=true
	I0819 04:33:33.386350   17996 kubeadm.go:1113] duration metric: took 34.083875ms to wait for elevateKubeSystemPrivileges
	I0819 04:33:33.392333   17996 ops.go:34] apiserver oom_adj: -16
	I0819 04:33:33.392345   17996 kubeadm.go:394] duration metric: took 4m12.243566541s to StartCluster
	I0819 04:33:33.392357   17996 settings.go:142] acquiring lock: {Name:mk0efade08e7fded56aa74c9b61139ee991f6648 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 04:33:33.392506   17996 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19479-15750/kubeconfig
	I0819 04:33:33.392896   17996 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19479-15750/kubeconfig: {Name:mkc1a7b531aa1d2d8dba135f7548c07a5ca371ca Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 04:33:33.393101   17996 start.go:235] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0819 04:33:33.393161   17996 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0819 04:33:33.393195   17996 config.go:182] Loaded profile config "running-upgrade-038000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0819 04:33:33.393198   17996 addons.go:69] Setting storage-provisioner=true in profile "running-upgrade-038000"
	I0819 04:33:33.393209   17996 addons.go:234] Setting addon storage-provisioner=true in "running-upgrade-038000"
	W0819 04:33:33.393212   17996 addons.go:243] addon storage-provisioner should already be in state true
	I0819 04:33:33.393223   17996 host.go:66] Checking if "running-upgrade-038000" exists ...
	I0819 04:33:33.393220   17996 addons.go:69] Setting default-storageclass=true in profile "running-upgrade-038000"
	I0819 04:33:33.393291   17996 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "running-upgrade-038000"
	I0819 04:33:33.395873   17996 out.go:177] * Verifying Kubernetes components...
	I0819 04:33:33.396711   17996 kapi.go:59] client config for running-upgrade-038000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19479-15750/.minikube/profiles/running-upgrade-038000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19479-15750/.minikube/profiles/running-upgrade-038000/client.key", CAFile:"/Users/jenkins/minikube-integration/19479-15750/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x103fd9610), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0819 04:33:33.399175   17996 addons.go:234] Setting addon default-storageclass=true in "running-upgrade-038000"
	W0819 04:33:33.399180   17996 addons.go:243] addon default-storageclass should already be in state true
	I0819 04:33:33.399188   17996 host.go:66] Checking if "running-upgrade-038000" exists ...
	I0819 04:33:33.399701   17996 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0819 04:33:33.399707   17996 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0819 04:33:33.399712   17996 sshutil.go:53] new ssh client: &{IP:localhost Port:53156 SSHKeyPath:/Users/jenkins/minikube-integration/19479-15750/.minikube/machines/running-upgrade-038000/id_rsa Username:docker}
	I0819 04:33:33.402807   17996 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0819 04:33:33.406860   17996 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 04:33:33.409915   17996 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0819 04:33:33.409922   17996 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0819 04:33:33.409928   17996 sshutil.go:53] new ssh client: &{IP:localhost Port:53156 SSHKeyPath:/Users/jenkins/minikube-integration/19479-15750/.minikube/machines/running-upgrade-038000/id_rsa Username:docker}
	I0819 04:33:33.487508   17996 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0819 04:33:33.492638   17996 api_server.go:52] waiting for apiserver process to appear ...
	I0819 04:33:33.492687   17996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 04:33:33.496791   17996 api_server.go:72] duration metric: took 103.675833ms to wait for apiserver process to appear ...
	I0819 04:33:33.496800   17996 api_server.go:88] waiting for apiserver healthz status ...
	I0819 04:33:33.496807   17996 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 04:33:33.509061   17996 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0819 04:33:33.536022   17996 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0819 04:33:33.850189   17996 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0819 04:33:33.850201   17996 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0819 04:33:38.498803   17996 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 04:33:38.498838   17996 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 04:33:43.499049   17996 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 04:33:43.499069   17996 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 04:33:48.499273   17996 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 04:33:48.499292   17996 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 04:33:53.499564   17996 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 04:33:53.499627   17996 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 04:33:58.500031   17996 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 04:33:58.500056   17996 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 04:34:03.500626   17996 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 04:34:03.500672   17996 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W0819 04:34:03.851951   17996 out.go:270] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I0819 04:34:03.856669   17996 out.go:177] * Enabled addons: storage-provisioner
	I0819 04:34:03.863524   17996 addons.go:510] duration metric: took 30.471051833s for enable addons: enabled=[storage-provisioner]
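The failed 'default-storageclass' callback above amounts to marking a StorageClass as the cluster default, which requires a reachable apiserver. For reference, the equivalent kubectl operation would be (the class name "standard" is taken from the error message; this is a sketch, not the addon's code path):

    # Mark the "standard" StorageClass as the default; this is what the
    # failed callback would have achieved had the apiserver been reachable.
    kubectl patch storageclass standard \
      -p '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'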
	I0819 04:34:08.501449   17996 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 04:34:08.501489   17996 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 04:34:13.502472   17996 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 04:34:13.502521   17996 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 04:34:18.503807   17996 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 04:34:18.503859   17996 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 04:34:23.505556   17996 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 04:34:23.505578   17996 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 04:34:28.507672   17996 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 04:34:28.507711   17996 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 04:34:33.509847   17996 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 04:34:33.509961   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 04:34:33.523746   17996 logs.go:276] 1 containers: [a87ae25c67c2]
	I0819 04:34:33.523822   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 04:34:33.535466   17996 logs.go:276] 1 containers: [f455d8b0d489]
	I0819 04:34:33.535535   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 04:34:33.548809   17996 logs.go:276] 2 containers: [a7da7cc69ccc b382a541c256]
	I0819 04:34:33.548885   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 04:34:33.559179   17996 logs.go:276] 1 containers: [9a7de925ae0b]
	I0819 04:34:33.559248   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 04:34:33.569276   17996 logs.go:276] 1 containers: [8a9318577d16]
	I0819 04:34:33.569345   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 04:34:33.579784   17996 logs.go:276] 1 containers: [9aa4015a0f4e]
	I0819 04:34:33.579852   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 04:34:33.594440   17996 logs.go:276] 0 containers: []
	W0819 04:34:33.594454   17996 logs.go:278] No container was found matching "kindnet"
	I0819 04:34:33.594517   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 04:34:33.605215   17996 logs.go:276] 1 containers: [24ed6391a78f]
	I0819 04:34:33.605230   17996 logs.go:123] Gathering logs for describe nodes ...
	I0819 04:34:33.605236   17996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 04:34:33.641169   17996 logs.go:123] Gathering logs for kube-apiserver [a87ae25c67c2] ...
	I0819 04:34:33.641180   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a87ae25c67c2"
	I0819 04:34:33.655384   17996 logs.go:123] Gathering logs for etcd [f455d8b0d489] ...
	I0819 04:34:33.655396   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f455d8b0d489"
	I0819 04:34:33.669385   17996 logs.go:123] Gathering logs for kube-proxy [8a9318577d16] ...
	I0819 04:34:33.669396   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8a9318577d16"
	I0819 04:34:33.681619   17996 logs.go:123] Gathering logs for container status ...
	I0819 04:34:33.681630   17996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 04:34:33.692713   17996 logs.go:123] Gathering logs for Docker ...
	I0819 04:34:33.692725   17996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 04:34:33.716271   17996 logs.go:123] Gathering logs for kubelet ...
	I0819 04:34:33.716281   17996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 04:34:33.749411   17996 logs.go:123] Gathering logs for dmesg ...
	I0819 04:34:33.749419   17996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 04:34:33.753804   17996 logs.go:123] Gathering logs for coredns [a7da7cc69ccc] ...
	I0819 04:34:33.753810   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7da7cc69ccc"
	I0819 04:34:33.765112   17996 logs.go:123] Gathering logs for coredns [b382a541c256] ...
	I0819 04:34:33.765122   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b382a541c256"
	I0819 04:34:33.777384   17996 logs.go:123] Gathering logs for kube-scheduler [9a7de925ae0b] ...
	I0819 04:34:33.777397   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9a7de925ae0b"
	I0819 04:34:33.794805   17996 logs.go:123] Gathering logs for kube-controller-manager [9aa4015a0f4e] ...
	I0819 04:34:33.794821   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9aa4015a0f4e"
	I0819 04:34:33.814534   17996 logs.go:123] Gathering logs for storage-provisioner [24ed6391a78f] ...
	I0819 04:34:33.814547   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 24ed6391a78f"
	I0819 04:34:36.328572   17996 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 04:34:41.330771   17996 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 04:34:41.330942   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 04:34:41.348599   17996 logs.go:276] 1 containers: [a87ae25c67c2]
	I0819 04:34:41.348693   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 04:34:41.362250   17996 logs.go:276] 1 containers: [f455d8b0d489]
	I0819 04:34:41.362327   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 04:34:41.373265   17996 logs.go:276] 2 containers: [a7da7cc69ccc b382a541c256]
	I0819 04:34:41.373335   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 04:34:41.385269   17996 logs.go:276] 1 containers: [9a7de925ae0b]
	I0819 04:34:41.385334   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 04:34:41.395552   17996 logs.go:276] 1 containers: [8a9318577d16]
	I0819 04:34:41.395632   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 04:34:41.410646   17996 logs.go:276] 1 containers: [9aa4015a0f4e]
	I0819 04:34:41.410709   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 04:34:41.421085   17996 logs.go:276] 0 containers: []
	W0819 04:34:41.421097   17996 logs.go:278] No container was found matching "kindnet"
	I0819 04:34:41.421151   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 04:34:41.431696   17996 logs.go:276] 1 containers: [24ed6391a78f]
	I0819 04:34:41.431714   17996 logs.go:123] Gathering logs for coredns [a7da7cc69ccc] ...
	I0819 04:34:41.431720   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7da7cc69ccc"
	I0819 04:34:41.445922   17996 logs.go:123] Gathering logs for kube-scheduler [9a7de925ae0b] ...
	I0819 04:34:41.445935   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9a7de925ae0b"
	I0819 04:34:41.459954   17996 logs.go:123] Gathering logs for kubelet ...
	I0819 04:34:41.459963   17996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 04:34:41.495702   17996 logs.go:123] Gathering logs for dmesg ...
	I0819 04:34:41.495713   17996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 04:34:41.500433   17996 logs.go:123] Gathering logs for describe nodes ...
	I0819 04:34:41.500441   17996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 04:34:41.536792   17996 logs.go:123] Gathering logs for kube-proxy [8a9318577d16] ...
	I0819 04:34:41.536804   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8a9318577d16"
	I0819 04:34:41.549435   17996 logs.go:123] Gathering logs for kube-controller-manager [9aa4015a0f4e] ...
	I0819 04:34:41.549447   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9aa4015a0f4e"
	I0819 04:34:41.572378   17996 logs.go:123] Gathering logs for storage-provisioner [24ed6391a78f] ...
	I0819 04:34:41.572389   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 24ed6391a78f"
	I0819 04:34:41.590600   17996 logs.go:123] Gathering logs for Docker ...
	I0819 04:34:41.590614   17996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 04:34:41.615633   17996 logs.go:123] Gathering logs for container status ...
	I0819 04:34:41.615645   17996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 04:34:41.627497   17996 logs.go:123] Gathering logs for kube-apiserver [a87ae25c67c2] ...
	I0819 04:34:41.627507   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a87ae25c67c2"
	I0819 04:34:41.641690   17996 logs.go:123] Gathering logs for etcd [f455d8b0d489] ...
	I0819 04:34:41.641703   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f455d8b0d489"
	I0819 04:34:41.655821   17996 logs.go:123] Gathering logs for coredns [b382a541c256] ...
	I0819 04:34:41.655834   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b382a541c256"
	I0819 04:34:44.175608   17996 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 04:34:49.178099   17996 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 04:34:49.178413   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 04:34:49.216894   17996 logs.go:276] 1 containers: [a87ae25c67c2]
	I0819 04:34:49.216990   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 04:34:49.232003   17996 logs.go:276] 1 containers: [f455d8b0d489]
	I0819 04:34:49.232093   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 04:34:49.244642   17996 logs.go:276] 2 containers: [a7da7cc69ccc b382a541c256]
	I0819 04:34:49.244729   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 04:34:49.256406   17996 logs.go:276] 1 containers: [9a7de925ae0b]
	I0819 04:34:49.256474   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 04:34:49.268688   17996 logs.go:276] 1 containers: [8a9318577d16]
	I0819 04:34:49.268760   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 04:34:49.281051   17996 logs.go:276] 1 containers: [9aa4015a0f4e]
	I0819 04:34:49.281122   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 04:34:49.292076   17996 logs.go:276] 0 containers: []
	W0819 04:34:49.292087   17996 logs.go:278] No container was found matching "kindnet"
	I0819 04:34:49.292141   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 04:34:49.303199   17996 logs.go:276] 1 containers: [24ed6391a78f]
	I0819 04:34:49.303215   17996 logs.go:123] Gathering logs for kubelet ...
	I0819 04:34:49.303221   17996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 04:34:49.336698   17996 logs.go:123] Gathering logs for dmesg ...
	I0819 04:34:49.336708   17996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 04:34:49.341159   17996 logs.go:123] Gathering logs for describe nodes ...
	I0819 04:34:49.341168   17996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 04:34:49.376614   17996 logs.go:123] Gathering logs for kube-apiserver [a87ae25c67c2] ...
	I0819 04:34:49.376631   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a87ae25c67c2"
	I0819 04:34:49.391721   17996 logs.go:123] Gathering logs for coredns [b382a541c256] ...
	I0819 04:34:49.391734   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b382a541c256"
	I0819 04:34:49.403643   17996 logs.go:123] Gathering logs for kube-proxy [8a9318577d16] ...
	I0819 04:34:49.403658   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8a9318577d16"
	I0819 04:34:49.418160   17996 logs.go:123] Gathering logs for Docker ...
	I0819 04:34:49.418174   17996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 04:34:49.443262   17996 logs.go:123] Gathering logs for etcd [f455d8b0d489] ...
	I0819 04:34:49.443277   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f455d8b0d489"
	I0819 04:34:49.457474   17996 logs.go:123] Gathering logs for coredns [a7da7cc69ccc] ...
	I0819 04:34:49.457487   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7da7cc69ccc"
	I0819 04:34:49.468881   17996 logs.go:123] Gathering logs for kube-scheduler [9a7de925ae0b] ...
	I0819 04:34:49.468895   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9a7de925ae0b"
	I0819 04:34:49.483593   17996 logs.go:123] Gathering logs for kube-controller-manager [9aa4015a0f4e] ...
	I0819 04:34:49.483605   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9aa4015a0f4e"
	I0819 04:34:49.502226   17996 logs.go:123] Gathering logs for storage-provisioner [24ed6391a78f] ...
	I0819 04:34:49.502235   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 24ed6391a78f"
	I0819 04:34:49.514675   17996 logs.go:123] Gathering logs for container status ...
	I0819 04:34:49.514688   17996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
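The cycle above repeats for the rest of this log: a GET against the apiserver's /healthz endpoint times out after roughly five seconds ("Checking apiserver healthz" followed by "stopped: ... Client.Timeout exceeded"), and minikube falls back to collecting component logs before retrying. As an illustration only — this is not minikube's actual api_server.go code; only the URL and the ~5s client timeout are taken from the log lines — a minimal Go sketch of such a poll loop:

```go
// healthpoll.go: minimal sketch of the healthz polling pattern visible in
// the log above. Illustrative only; endpoint URL and ~5s timeout are from
// the log, everything else is an assumption. TLS verification is skipped
// because the apiserver presents a self-signed certificate in this setup.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		// Matches the ~5s gap between "Checking" and "stopped" log entries.
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	url := "https://10.0.2.15:8443/healthz"
	for i := 0; i < 10; i++ {
		resp, err := client.Get(url)
		if err != nil {
			// Mirrors the log's "stopped: ... Client.Timeout exceeded" case.
			fmt.Printf("stopped: %v\n", err)
			time.Sleep(2500 * time.Millisecond) // back off before retrying
			continue
		}
		resp.Body.Close()
		if resp.StatusCode == http.StatusOK {
			fmt.Println("apiserver healthy")
			return
		}
		fmt.Printf("unhealthy: %s\n", resp.Status)
	}
}
```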
	I0819 04:34:52.028879   17996 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 04:34:57.031497   17996 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 04:34:57.031944   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 04:34:57.068725   17996 logs.go:276] 1 containers: [a87ae25c67c2]
	I0819 04:34:57.068872   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 04:34:57.090465   17996 logs.go:276] 1 containers: [f455d8b0d489]
	I0819 04:34:57.090581   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 04:34:57.105607   17996 logs.go:276] 2 containers: [a7da7cc69ccc b382a541c256]
	I0819 04:34:57.105688   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 04:34:57.117939   17996 logs.go:276] 1 containers: [9a7de925ae0b]
	I0819 04:34:57.118003   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 04:34:57.129238   17996 logs.go:276] 1 containers: [8a9318577d16]
	I0819 04:34:57.129311   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 04:34:57.139752   17996 logs.go:276] 1 containers: [9aa4015a0f4e]
	I0819 04:34:57.139820   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 04:34:57.150333   17996 logs.go:276] 0 containers: []
	W0819 04:34:57.150344   17996 logs.go:278] No container was found matching "kindnet"
	I0819 04:34:57.150405   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 04:34:57.161340   17996 logs.go:276] 1 containers: [24ed6391a78f]
	I0819 04:34:57.161357   17996 logs.go:123] Gathering logs for dmesg ...
	I0819 04:34:57.161363   17996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 04:34:57.165803   17996 logs.go:123] Gathering logs for etcd [f455d8b0d489] ...
	I0819 04:34:57.165811   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f455d8b0d489"
	I0819 04:34:57.185870   17996 logs.go:123] Gathering logs for coredns [a7da7cc69ccc] ...
	I0819 04:34:57.185883   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7da7cc69ccc"
	I0819 04:34:57.198437   17996 logs.go:123] Gathering logs for kube-scheduler [9a7de925ae0b] ...
	I0819 04:34:57.198448   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9a7de925ae0b"
	I0819 04:34:57.213526   17996 logs.go:123] Gathering logs for kubelet ...
	I0819 04:34:57.213542   17996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 04:34:57.246586   17996 logs.go:123] Gathering logs for describe nodes ...
	I0819 04:34:57.246594   17996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 04:34:57.288769   17996 logs.go:123] Gathering logs for kube-apiserver [a87ae25c67c2] ...
	I0819 04:34:57.288787   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a87ae25c67c2"
	I0819 04:34:57.302692   17996 logs.go:123] Gathering logs for coredns [b382a541c256] ...
	I0819 04:34:57.302707   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b382a541c256"
	I0819 04:34:57.314352   17996 logs.go:123] Gathering logs for kube-proxy [8a9318577d16] ...
	I0819 04:34:57.314363   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8a9318577d16"
	I0819 04:34:57.326787   17996 logs.go:123] Gathering logs for kube-controller-manager [9aa4015a0f4e] ...
	I0819 04:34:57.326799   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9aa4015a0f4e"
	I0819 04:34:57.344266   17996 logs.go:123] Gathering logs for storage-provisioner [24ed6391a78f] ...
	I0819 04:34:57.344278   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 24ed6391a78f"
	I0819 04:34:57.355623   17996 logs.go:123] Gathering logs for Docker ...
	I0819 04:34:57.355634   17996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 04:34:57.380607   17996 logs.go:123] Gathering logs for container status ...
	I0819 04:34:57.380616   17996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 04:34:59.894219   17996 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 04:35:04.894664   17996 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 04:35:04.894907   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 04:35:04.918671   17996 logs.go:276] 1 containers: [a87ae25c67c2]
	I0819 04:35:04.918774   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 04:35:04.936050   17996 logs.go:276] 1 containers: [f455d8b0d489]
	I0819 04:35:04.936126   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 04:35:04.952848   17996 logs.go:276] 2 containers: [a7da7cc69ccc b382a541c256]
	I0819 04:35:04.952942   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 04:35:04.963660   17996 logs.go:276] 1 containers: [9a7de925ae0b]
	I0819 04:35:04.963729   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 04:35:04.974089   17996 logs.go:276] 1 containers: [8a9318577d16]
	I0819 04:35:04.974158   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 04:35:04.984546   17996 logs.go:276] 1 containers: [9aa4015a0f4e]
	I0819 04:35:04.984616   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 04:35:04.998715   17996 logs.go:276] 0 containers: []
	W0819 04:35:04.998726   17996 logs.go:278] No container was found matching "kindnet"
	I0819 04:35:04.998780   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 04:35:05.009317   17996 logs.go:276] 1 containers: [24ed6391a78f]
	I0819 04:35:05.009332   17996 logs.go:123] Gathering logs for coredns [a7da7cc69ccc] ...
	I0819 04:35:05.009340   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7da7cc69ccc"
	I0819 04:35:05.021045   17996 logs.go:123] Gathering logs for coredns [b382a541c256] ...
	I0819 04:35:05.021056   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b382a541c256"
	I0819 04:35:05.033067   17996 logs.go:123] Gathering logs for kube-proxy [8a9318577d16] ...
	I0819 04:35:05.033080   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8a9318577d16"
	I0819 04:35:05.045364   17996 logs.go:123] Gathering logs for storage-provisioner [24ed6391a78f] ...
	I0819 04:35:05.045377   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 24ed6391a78f"
	I0819 04:35:05.062312   17996 logs.go:123] Gathering logs for Docker ...
	I0819 04:35:05.062322   17996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 04:35:05.087518   17996 logs.go:123] Gathering logs for container status ...
	I0819 04:35:05.087530   17996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 04:35:05.099281   17996 logs.go:123] Gathering logs for describe nodes ...
	I0819 04:35:05.099294   17996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 04:35:05.143019   17996 logs.go:123] Gathering logs for dmesg ...
	I0819 04:35:05.143033   17996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 04:35:05.148111   17996 logs.go:123] Gathering logs for kube-apiserver [a87ae25c67c2] ...
	I0819 04:35:05.148119   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a87ae25c67c2"
	I0819 04:35:05.162834   17996 logs.go:123] Gathering logs for etcd [f455d8b0d489] ...
	I0819 04:35:05.162848   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f455d8b0d489"
	I0819 04:35:05.178677   17996 logs.go:123] Gathering logs for kube-scheduler [9a7de925ae0b] ...
	I0819 04:35:05.178688   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9a7de925ae0b"
	I0819 04:35:05.197716   17996 logs.go:123] Gathering logs for kube-controller-manager [9aa4015a0f4e] ...
	I0819 04:35:05.197727   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9aa4015a0f4e"
	I0819 04:35:05.215211   17996 logs.go:123] Gathering logs for kubelet ...
	I0819 04:35:05.215224   17996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 04:35:07.750856   17996 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 04:35:12.751948   17996 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 04:35:12.752064   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 04:35:12.765697   17996 logs.go:276] 1 containers: [a87ae25c67c2]
	I0819 04:35:12.765763   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 04:35:12.777846   17996 logs.go:276] 1 containers: [f455d8b0d489]
	I0819 04:35:12.777920   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 04:35:12.788299   17996 logs.go:276] 2 containers: [a7da7cc69ccc b382a541c256]
	I0819 04:35:12.788378   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 04:35:12.799496   17996 logs.go:276] 1 containers: [9a7de925ae0b]
	I0819 04:35:12.799571   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 04:35:12.809932   17996 logs.go:276] 1 containers: [8a9318577d16]
	I0819 04:35:12.810005   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 04:35:12.822472   17996 logs.go:276] 1 containers: [9aa4015a0f4e]
	I0819 04:35:12.822541   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 04:35:12.832638   17996 logs.go:276] 0 containers: []
	W0819 04:35:12.832649   17996 logs.go:278] No container was found matching "kindnet"
	I0819 04:35:12.832706   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 04:35:12.843352   17996 logs.go:276] 1 containers: [24ed6391a78f]
	I0819 04:35:12.843367   17996 logs.go:123] Gathering logs for storage-provisioner [24ed6391a78f] ...
	I0819 04:35:12.843372   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 24ed6391a78f"
	I0819 04:35:12.855625   17996 logs.go:123] Gathering logs for Docker ...
	I0819 04:35:12.855636   17996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 04:35:12.880613   17996 logs.go:123] Gathering logs for container status ...
	I0819 04:35:12.880621   17996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 04:35:12.892064   17996 logs.go:123] Gathering logs for kubelet ...
	I0819 04:35:12.892076   17996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 04:35:12.927540   17996 logs.go:123] Gathering logs for dmesg ...
	I0819 04:35:12.927548   17996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 04:35:12.932652   17996 logs.go:123] Gathering logs for kube-apiserver [a87ae25c67c2] ...
	I0819 04:35:12.932659   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a87ae25c67c2"
	I0819 04:35:12.947525   17996 logs.go:123] Gathering logs for etcd [f455d8b0d489] ...
	I0819 04:35:12.947539   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f455d8b0d489"
	I0819 04:35:12.961473   17996 logs.go:123] Gathering logs for coredns [b382a541c256] ...
	I0819 04:35:12.961486   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b382a541c256"
	I0819 04:35:12.973387   17996 logs.go:123] Gathering logs for describe nodes ...
	I0819 04:35:12.973401   17996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 04:35:13.008103   17996 logs.go:123] Gathering logs for coredns [a7da7cc69ccc] ...
	I0819 04:35:13.008118   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7da7cc69ccc"
	I0819 04:35:13.019843   17996 logs.go:123] Gathering logs for kube-scheduler [9a7de925ae0b] ...
	I0819 04:35:13.019855   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9a7de925ae0b"
	I0819 04:35:13.034541   17996 logs.go:123] Gathering logs for kube-proxy [8a9318577d16] ...
	I0819 04:35:13.034554   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8a9318577d16"
	I0819 04:35:13.046182   17996 logs.go:123] Gathering logs for kube-controller-manager [9aa4015a0f4e] ...
	I0819 04:35:13.046192   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9aa4015a0f4e"
	I0819 04:35:15.565769   17996 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 04:35:20.566216   17996 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 04:35:20.566448   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 04:35:20.595653   17996 logs.go:276] 1 containers: [a87ae25c67c2]
	I0819 04:35:20.595764   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 04:35:20.613319   17996 logs.go:276] 1 containers: [f455d8b0d489]
	I0819 04:35:20.613401   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 04:35:20.628309   17996 logs.go:276] 2 containers: [a7da7cc69ccc b382a541c256]
	I0819 04:35:20.628386   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 04:35:20.640230   17996 logs.go:276] 1 containers: [9a7de925ae0b]
	I0819 04:35:20.640302   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 04:35:20.654711   17996 logs.go:276] 1 containers: [8a9318577d16]
	I0819 04:35:20.654782   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 04:35:20.670051   17996 logs.go:276] 1 containers: [9aa4015a0f4e]
	I0819 04:35:20.670119   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 04:35:20.680403   17996 logs.go:276] 0 containers: []
	W0819 04:35:20.680415   17996 logs.go:278] No container was found matching "kindnet"
	I0819 04:35:20.680474   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 04:35:20.691458   17996 logs.go:276] 1 containers: [24ed6391a78f]
	I0819 04:35:20.691475   17996 logs.go:123] Gathering logs for kubelet ...
	I0819 04:35:20.691481   17996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 04:35:20.726415   17996 logs.go:123] Gathering logs for describe nodes ...
	I0819 04:35:20.726426   17996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 04:35:20.762103   17996 logs.go:123] Gathering logs for etcd [f455d8b0d489] ...
	I0819 04:35:20.762116   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f455d8b0d489"
	I0819 04:35:20.776583   17996 logs.go:123] Gathering logs for coredns [a7da7cc69ccc] ...
	I0819 04:35:20.776596   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7da7cc69ccc"
	I0819 04:35:20.788856   17996 logs.go:123] Gathering logs for coredns [b382a541c256] ...
	I0819 04:35:20.788866   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b382a541c256"
	I0819 04:35:20.801993   17996 logs.go:123] Gathering logs for kube-scheduler [9a7de925ae0b] ...
	I0819 04:35:20.802005   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9a7de925ae0b"
	I0819 04:35:20.816908   17996 logs.go:123] Gathering logs for kube-proxy [8a9318577d16] ...
	I0819 04:35:20.816919   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8a9318577d16"
	I0819 04:35:20.830312   17996 logs.go:123] Gathering logs for Docker ...
	I0819 04:35:20.830325   17996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 04:35:20.855408   17996 logs.go:123] Gathering logs for container status ...
	I0819 04:35:20.855415   17996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 04:35:20.867113   17996 logs.go:123] Gathering logs for dmesg ...
	I0819 04:35:20.867124   17996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 04:35:20.872094   17996 logs.go:123] Gathering logs for kube-apiserver [a87ae25c67c2] ...
	I0819 04:35:20.872100   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a87ae25c67c2"
	I0819 04:35:20.886421   17996 logs.go:123] Gathering logs for kube-controller-manager [9aa4015a0f4e] ...
	I0819 04:35:20.886431   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9aa4015a0f4e"
	I0819 04:35:20.906328   17996 logs.go:123] Gathering logs for storage-provisioner [24ed6391a78f] ...
	I0819 04:35:20.906343   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 24ed6391a78f"
	I0819 04:35:23.420194   17996 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 04:35:28.422375   17996 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 04:35:28.422545   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 04:35:28.440511   17996 logs.go:276] 1 containers: [a87ae25c67c2]
	I0819 04:35:28.440600   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 04:35:28.454004   17996 logs.go:276] 1 containers: [f455d8b0d489]
	I0819 04:35:28.454082   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 04:35:28.464980   17996 logs.go:276] 2 containers: [a7da7cc69ccc b382a541c256]
	I0819 04:35:28.465047   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 04:35:28.475380   17996 logs.go:276] 1 containers: [9a7de925ae0b]
	I0819 04:35:28.475451   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 04:35:28.485756   17996 logs.go:276] 1 containers: [8a9318577d16]
	I0819 04:35:28.485825   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 04:35:28.496340   17996 logs.go:276] 1 containers: [9aa4015a0f4e]
	I0819 04:35:28.496417   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 04:35:28.512180   17996 logs.go:276] 0 containers: []
	W0819 04:35:28.512193   17996 logs.go:278] No container was found matching "kindnet"
	I0819 04:35:28.512256   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 04:35:28.523007   17996 logs.go:276] 1 containers: [24ed6391a78f]
	I0819 04:35:28.523022   17996 logs.go:123] Gathering logs for kubelet ...
	I0819 04:35:28.523027   17996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 04:35:28.557966   17996 logs.go:123] Gathering logs for dmesg ...
	I0819 04:35:28.557979   17996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 04:35:28.563407   17996 logs.go:123] Gathering logs for describe nodes ...
	I0819 04:35:28.563416   17996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 04:35:28.598880   17996 logs.go:123] Gathering logs for kube-apiserver [a87ae25c67c2] ...
	I0819 04:35:28.598890   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a87ae25c67c2"
	I0819 04:35:28.613313   17996 logs.go:123] Gathering logs for kube-scheduler [9a7de925ae0b] ...
	I0819 04:35:28.613326   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9a7de925ae0b"
	I0819 04:35:28.628465   17996 logs.go:123] Gathering logs for kube-proxy [8a9318577d16] ...
	I0819 04:35:28.628478   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8a9318577d16"
	I0819 04:35:28.640414   17996 logs.go:123] Gathering logs for Docker ...
	I0819 04:35:28.640428   17996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 04:35:28.663847   17996 logs.go:123] Gathering logs for etcd [f455d8b0d489] ...
	I0819 04:35:28.663859   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f455d8b0d489"
	I0819 04:35:28.677673   17996 logs.go:123] Gathering logs for coredns [a7da7cc69ccc] ...
	I0819 04:35:28.677684   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7da7cc69ccc"
	I0819 04:35:28.689240   17996 logs.go:123] Gathering logs for coredns [b382a541c256] ...
	I0819 04:35:28.689251   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b382a541c256"
	I0819 04:35:28.700783   17996 logs.go:123] Gathering logs for kube-controller-manager [9aa4015a0f4e] ...
	I0819 04:35:28.700795   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9aa4015a0f4e"
	I0819 04:35:28.718430   17996 logs.go:123] Gathering logs for storage-provisioner [24ed6391a78f] ...
	I0819 04:35:28.718439   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 24ed6391a78f"
	I0819 04:35:28.729809   17996 logs.go:123] Gathering logs for container status ...
	I0819 04:35:28.729822   17996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
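Each retry cycle begins by locating the control-plane containers one component at a time with "docker ps -a --filter=name=k8s_<component> --format={{.ID}}"; an empty result produces the warning seen for "kindnet". A hypothetical Go helper reproducing that lookup (the component names and the kubelet's k8s_ name prefix come from the log; the wrapper itself is illustrative, not minikube's logs.go code):

```go
// findcontainers.go: sketch of the per-component container discovery step.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerIDs returns IDs of all containers, running or exited, whose name
// matches the kubelet's k8s_<component> naming convention.
func containerIDs(component string) ([]string, error) {
	out, err := exec.Command("docker", "ps", "-a",
		"--filter", "name=k8s_"+component,
		"--format", "{{.ID}}").Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	for _, c := range []string{"kube-apiserver", "etcd", "coredns",
		"kube-scheduler", "kube-proxy", "kube-controller-manager",
		"kindnet", "storage-provisioner"} {
		ids, err := containerIDs(c)
		if err != nil {
			fmt.Printf("%s: %v\n", c, err)
			continue
		}
		// Matches the "N containers: [...]" log lines; an empty slice is
		// the `No container was found matching "kindnet"` case.
		fmt.Printf("%s: %d containers: %v\n", c, len(ids), ids)
	}
}
```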
	I0819 04:35:31.243122   17996 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 04:35:36.245341   17996 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 04:35:36.245592   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 04:35:36.263756   17996 logs.go:276] 1 containers: [a87ae25c67c2]
	I0819 04:35:36.263840   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 04:35:36.281832   17996 logs.go:276] 1 containers: [f455d8b0d489]
	I0819 04:35:36.281904   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 04:35:36.293200   17996 logs.go:276] 2 containers: [a7da7cc69ccc b382a541c256]
	I0819 04:35:36.293274   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 04:35:36.304770   17996 logs.go:276] 1 containers: [9a7de925ae0b]
	I0819 04:35:36.304838   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 04:35:36.315355   17996 logs.go:276] 1 containers: [8a9318577d16]
	I0819 04:35:36.315426   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 04:35:36.328114   17996 logs.go:276] 1 containers: [9aa4015a0f4e]
	I0819 04:35:36.328183   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 04:35:36.342599   17996 logs.go:276] 0 containers: []
	W0819 04:35:36.342610   17996 logs.go:278] No container was found matching "kindnet"
	I0819 04:35:36.342664   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 04:35:36.352667   17996 logs.go:276] 1 containers: [24ed6391a78f]
	I0819 04:35:36.352682   17996 logs.go:123] Gathering logs for kube-apiserver [a87ae25c67c2] ...
	I0819 04:35:36.352687   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a87ae25c67c2"
	I0819 04:35:36.366961   17996 logs.go:123] Gathering logs for coredns [a7da7cc69ccc] ...
	I0819 04:35:36.366973   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7da7cc69ccc"
	I0819 04:35:36.378410   17996 logs.go:123] Gathering logs for coredns [b382a541c256] ...
	I0819 04:35:36.378421   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b382a541c256"
	I0819 04:35:36.389514   17996 logs.go:123] Gathering logs for kube-proxy [8a9318577d16] ...
	I0819 04:35:36.389529   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8a9318577d16"
	I0819 04:35:36.401164   17996 logs.go:123] Gathering logs for storage-provisioner [24ed6391a78f] ...
	I0819 04:35:36.401178   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 24ed6391a78f"
	I0819 04:35:36.418748   17996 logs.go:123] Gathering logs for kubelet ...
	I0819 04:35:36.418761   17996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 04:35:36.453849   17996 logs.go:123] Gathering logs for dmesg ...
	I0819 04:35:36.453861   17996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 04:35:36.458299   17996 logs.go:123] Gathering logs for describe nodes ...
	I0819 04:35:36.458309   17996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 04:35:36.522506   17996 logs.go:123] Gathering logs for Docker ...
	I0819 04:35:36.522520   17996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 04:35:36.546958   17996 logs.go:123] Gathering logs for container status ...
	I0819 04:35:36.546968   17996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 04:35:36.558786   17996 logs.go:123] Gathering logs for etcd [f455d8b0d489] ...
	I0819 04:35:36.558800   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f455d8b0d489"
	I0819 04:35:36.573552   17996 logs.go:123] Gathering logs for kube-scheduler [9a7de925ae0b] ...
	I0819 04:35:36.573565   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9a7de925ae0b"
	I0819 04:35:36.588740   17996 logs.go:123] Gathering logs for kube-controller-manager [9aa4015a0f4e] ...
	I0819 04:35:36.588754   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9aa4015a0f4e"
	I0819 04:35:39.107542   17996 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 04:35:44.109806   17996 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 04:35:44.110005   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 04:35:44.128685   17996 logs.go:276] 1 containers: [a87ae25c67c2]
	I0819 04:35:44.128780   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 04:35:44.142903   17996 logs.go:276] 1 containers: [f455d8b0d489]
	I0819 04:35:44.142979   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 04:35:44.154929   17996 logs.go:276] 2 containers: [a7da7cc69ccc b382a541c256]
	I0819 04:35:44.155002   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 04:35:44.165679   17996 logs.go:276] 1 containers: [9a7de925ae0b]
	I0819 04:35:44.165739   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 04:35:44.176008   17996 logs.go:276] 1 containers: [8a9318577d16]
	I0819 04:35:44.176082   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 04:35:44.186609   17996 logs.go:276] 1 containers: [9aa4015a0f4e]
	I0819 04:35:44.186681   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 04:35:44.205437   17996 logs.go:276] 0 containers: []
	W0819 04:35:44.205447   17996 logs.go:278] No container was found matching "kindnet"
	I0819 04:35:44.205504   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 04:35:44.215966   17996 logs.go:276] 1 containers: [24ed6391a78f]
	I0819 04:35:44.215981   17996 logs.go:123] Gathering logs for etcd [f455d8b0d489] ...
	I0819 04:35:44.215987   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f455d8b0d489"
	I0819 04:35:44.230180   17996 logs.go:123] Gathering logs for coredns [a7da7cc69ccc] ...
	I0819 04:35:44.230191   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7da7cc69ccc"
	I0819 04:35:44.242507   17996 logs.go:123] Gathering logs for kube-proxy [8a9318577d16] ...
	I0819 04:35:44.242519   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8a9318577d16"
	I0819 04:35:44.254236   17996 logs.go:123] Gathering logs for storage-provisioner [24ed6391a78f] ...
	I0819 04:35:44.254248   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 24ed6391a78f"
	I0819 04:35:44.265310   17996 logs.go:123] Gathering logs for describe nodes ...
	I0819 04:35:44.265320   17996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 04:35:44.305495   17996 logs.go:123] Gathering logs for kube-apiserver [a87ae25c67c2] ...
	I0819 04:35:44.305506   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a87ae25c67c2"
	I0819 04:35:44.319810   17996 logs.go:123] Gathering logs for coredns [b382a541c256] ...
	I0819 04:35:44.319821   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b382a541c256"
	I0819 04:35:44.331995   17996 logs.go:123] Gathering logs for kube-scheduler [9a7de925ae0b] ...
	I0819 04:35:44.332009   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9a7de925ae0b"
	I0819 04:35:44.352647   17996 logs.go:123] Gathering logs for kube-controller-manager [9aa4015a0f4e] ...
	I0819 04:35:44.352659   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9aa4015a0f4e"
	I0819 04:35:44.370167   17996 logs.go:123] Gathering logs for Docker ...
	I0819 04:35:44.370177   17996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 04:35:44.395000   17996 logs.go:123] Gathering logs for container status ...
	I0819 04:35:44.395012   17996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 04:35:44.407434   17996 logs.go:123] Gathering logs for kubelet ...
	I0819 04:35:44.407446   17996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 04:35:44.444806   17996 logs.go:123] Gathering logs for dmesg ...
	I0819 04:35:44.444816   17996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 04:35:46.951578   17996 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 04:35:51.953805   17996 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 04:35:51.954002   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 04:35:51.972092   17996 logs.go:276] 1 containers: [a87ae25c67c2]
	I0819 04:35:51.972177   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 04:35:51.986263   17996 logs.go:276] 1 containers: [f455d8b0d489]
	I0819 04:35:51.986338   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 04:35:51.997589   17996 logs.go:276] 4 containers: [475e9da6b182 b3828a6a9ea6 a7da7cc69ccc b382a541c256]
	I0819 04:35:51.997667   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 04:35:52.010621   17996 logs.go:276] 1 containers: [9a7de925ae0b]
	I0819 04:35:52.010688   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 04:35:52.021510   17996 logs.go:276] 1 containers: [8a9318577d16]
	I0819 04:35:52.021575   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 04:35:52.031975   17996 logs.go:276] 1 containers: [9aa4015a0f4e]
	I0819 04:35:52.032051   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 04:35:52.042819   17996 logs.go:276] 0 containers: []
	W0819 04:35:52.042832   17996 logs.go:278] No container was found matching "kindnet"
	I0819 04:35:52.042892   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 04:35:52.053209   17996 logs.go:276] 1 containers: [24ed6391a78f]
	I0819 04:35:52.053229   17996 logs.go:123] Gathering logs for coredns [b382a541c256] ...
	I0819 04:35:52.053235   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b382a541c256"
	I0819 04:35:52.065101   17996 logs.go:123] Gathering logs for kube-scheduler [9a7de925ae0b] ...
	I0819 04:35:52.065113   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9a7de925ae0b"
	I0819 04:35:52.080169   17996 logs.go:123] Gathering logs for dmesg ...
	I0819 04:35:52.080184   17996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 04:35:52.084659   17996 logs.go:123] Gathering logs for coredns [475e9da6b182] ...
	I0819 04:35:52.084668   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 475e9da6b182"
	I0819 04:35:52.096172   17996 logs.go:123] Gathering logs for coredns [b3828a6a9ea6] ...
	I0819 04:35:52.096183   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3828a6a9ea6"
	I0819 04:35:52.107536   17996 logs.go:123] Gathering logs for kube-proxy [8a9318577d16] ...
	I0819 04:35:52.107548   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8a9318577d16"
	I0819 04:35:52.119593   17996 logs.go:123] Gathering logs for storage-provisioner [24ed6391a78f] ...
	I0819 04:35:52.119605   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 24ed6391a78f"
	I0819 04:35:52.130881   17996 logs.go:123] Gathering logs for kubelet ...
	I0819 04:35:52.130892   17996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 04:35:52.164400   17996 logs.go:123] Gathering logs for describe nodes ...
	I0819 04:35:52.164407   17996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 04:35:52.200089   17996 logs.go:123] Gathering logs for kube-apiserver [a87ae25c67c2] ...
	I0819 04:35:52.200102   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a87ae25c67c2"
	I0819 04:35:52.214316   17996 logs.go:123] Gathering logs for etcd [f455d8b0d489] ...
	I0819 04:35:52.214330   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f455d8b0d489"
	I0819 04:35:52.228924   17996 logs.go:123] Gathering logs for Docker ...
	I0819 04:35:52.228935   17996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 04:35:52.252976   17996 logs.go:123] Gathering logs for container status ...
	I0819 04:35:52.252986   17996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 04:35:52.264162   17996 logs.go:123] Gathering logs for coredns [a7da7cc69ccc] ...
	I0819 04:35:52.264175   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7da7cc69ccc"
	I0819 04:35:52.277572   17996 logs.go:123] Gathering logs for kube-controller-manager [9aa4015a0f4e] ...
	I0819 04:35:52.277584   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9aa4015a0f4e"
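The "container status" step in each cycle uses a shell fallback chain, sudo `which crictl || echo crictl` ps -a || sudo docker ps -a, so that crictl (which works for any CRI runtime) is preferred and plain docker is used when crictl is absent or fails. A sketch of that fallback logic in Go, for clarity only (the commands are from the log; the wrapper is an assumption):

```go
// gatherstatus.go: sketch of the crictl-then-docker fallback used by the
// "container status" collection step in the log above.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Prefer crictl; fall back to docker if it is missing or fails,
	// exactly like the `||` chain in the shell one-liner.
	out, err := exec.Command("sudo", "crictl", "ps", "-a").CombinedOutput()
	if err != nil {
		out, err = exec.Command("sudo", "docker", "ps", "-a").CombinedOutput()
		if err != nil {
			fmt.Println("neither crictl nor docker available:", err)
			return
		}
	}
	fmt.Print(string(out))
}
```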
	I0819 04:35:54.801222   17996 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 04:35:59.803333   17996 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 04:35:59.803535   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 04:35:59.824583   17996 logs.go:276] 1 containers: [a87ae25c67c2]
	I0819 04:35:59.824685   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 04:35:59.840687   17996 logs.go:276] 1 containers: [f455d8b0d489]
	I0819 04:35:59.840771   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 04:35:59.853172   17996 logs.go:276] 4 containers: [475e9da6b182 b3828a6a9ea6 a7da7cc69ccc b382a541c256]
	I0819 04:35:59.853241   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 04:35:59.864551   17996 logs.go:276] 1 containers: [9a7de925ae0b]
	I0819 04:35:59.864622   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 04:35:59.875249   17996 logs.go:276] 1 containers: [8a9318577d16]
	I0819 04:35:59.875319   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 04:35:59.886274   17996 logs.go:276] 1 containers: [9aa4015a0f4e]
	I0819 04:35:59.886343   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 04:35:59.896229   17996 logs.go:276] 0 containers: []
	W0819 04:35:59.896243   17996 logs.go:278] No container was found matching "kindnet"
	I0819 04:35:59.896299   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 04:35:59.911375   17996 logs.go:276] 1 containers: [24ed6391a78f]
	I0819 04:35:59.911393   17996 logs.go:123] Gathering logs for Docker ...
	I0819 04:35:59.911399   17996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 04:35:59.935215   17996 logs.go:123] Gathering logs for etcd [f455d8b0d489] ...
	I0819 04:35:59.935228   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f455d8b0d489"
	I0819 04:35:59.949772   17996 logs.go:123] Gathering logs for coredns [475e9da6b182] ...
	I0819 04:35:59.949785   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 475e9da6b182"
	I0819 04:35:59.961418   17996 logs.go:123] Gathering logs for coredns [b382a541c256] ...
	I0819 04:35:59.961429   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b382a541c256"
	I0819 04:35:59.973844   17996 logs.go:123] Gathering logs for storage-provisioner [24ed6391a78f] ...
	I0819 04:35:59.973856   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 24ed6391a78f"
	I0819 04:35:59.985941   17996 logs.go:123] Gathering logs for container status ...
	I0819 04:35:59.985952   17996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 04:35:59.997648   17996 logs.go:123] Gathering logs for describe nodes ...
	I0819 04:35:59.997659   17996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 04:36:00.036944   17996 logs.go:123] Gathering logs for kube-proxy [8a9318577d16] ...
	I0819 04:36:00.036955   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8a9318577d16"
	I0819 04:36:00.049170   17996 logs.go:123] Gathering logs for kube-controller-manager [9aa4015a0f4e] ...
	I0819 04:36:00.049182   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9aa4015a0f4e"
	I0819 04:36:00.072999   17996 logs.go:123] Gathering logs for kubelet ...
	I0819 04:36:00.073010   17996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 04:36:00.107494   17996 logs.go:123] Gathering logs for kube-apiserver [a87ae25c67c2] ...
	I0819 04:36:00.107516   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a87ae25c67c2"
	I0819 04:36:00.125334   17996 logs.go:123] Gathering logs for coredns [b3828a6a9ea6] ...
	I0819 04:36:00.125347   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3828a6a9ea6"
	I0819 04:36:00.136949   17996 logs.go:123] Gathering logs for coredns [a7da7cc69ccc] ...
	I0819 04:36:00.136964   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7da7cc69ccc"
	I0819 04:36:00.149068   17996 logs.go:123] Gathering logs for kube-scheduler [9a7de925ae0b] ...
	I0819 04:36:00.149082   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9a7de925ae0b"
	I0819 04:36:00.163990   17996 logs.go:123] Gathering logs for dmesg ...
	I0819 04:36:00.164000   17996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 04:36:02.670643   17996 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 04:36:07.672866   17996 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 04:36:07.672987   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 04:36:07.685968   17996 logs.go:276] 1 containers: [a87ae25c67c2]
	I0819 04:36:07.686044   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 04:36:07.697214   17996 logs.go:276] 1 containers: [f455d8b0d489]
	I0819 04:36:07.697294   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 04:36:07.708092   17996 logs.go:276] 4 containers: [475e9da6b182 b3828a6a9ea6 a7da7cc69ccc b382a541c256]
	I0819 04:36:07.708162   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 04:36:07.719969   17996 logs.go:276] 1 containers: [9a7de925ae0b]
	I0819 04:36:07.720043   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 04:36:07.731257   17996 logs.go:276] 1 containers: [8a9318577d16]
	I0819 04:36:07.731322   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 04:36:07.742105   17996 logs.go:276] 1 containers: [9aa4015a0f4e]
	I0819 04:36:07.742176   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 04:36:07.752461   17996 logs.go:276] 0 containers: []
	W0819 04:36:07.752471   17996 logs.go:278] No container was found matching "kindnet"
	I0819 04:36:07.752525   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 04:36:07.765471   17996 logs.go:276] 1 containers: [24ed6391a78f]
	I0819 04:36:07.765488   17996 logs.go:123] Gathering logs for dmesg ...
	I0819 04:36:07.765494   17996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 04:36:07.769939   17996 logs.go:123] Gathering logs for kube-apiserver [a87ae25c67c2] ...
	I0819 04:36:07.769948   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a87ae25c67c2"
	I0819 04:36:07.783805   17996 logs.go:123] Gathering logs for coredns [b382a541c256] ...
	I0819 04:36:07.783814   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b382a541c256"
	I0819 04:36:07.795643   17996 logs.go:123] Gathering logs for kube-controller-manager [9aa4015a0f4e] ...
	I0819 04:36:07.795659   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9aa4015a0f4e"
	I0819 04:36:07.813109   17996 logs.go:123] Gathering logs for Docker ...
	I0819 04:36:07.813123   17996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 04:36:07.839025   17996 logs.go:123] Gathering logs for coredns [a7da7cc69ccc] ...
	I0819 04:36:07.839042   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7da7cc69ccc"
	I0819 04:36:07.850915   17996 logs.go:123] Gathering logs for describe nodes ...
	I0819 04:36:07.850931   17996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 04:36:07.885559   17996 logs.go:123] Gathering logs for coredns [475e9da6b182] ...
	I0819 04:36:07.885572   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 475e9da6b182"
	I0819 04:36:07.897630   17996 logs.go:123] Gathering logs for kube-scheduler [9a7de925ae0b] ...
	I0819 04:36:07.897642   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9a7de925ae0b"
	I0819 04:36:07.912877   17996 logs.go:123] Gathering logs for kube-proxy [8a9318577d16] ...
	I0819 04:36:07.912889   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8a9318577d16"
	I0819 04:36:07.924569   17996 logs.go:123] Gathering logs for kubelet ...
	I0819 04:36:07.924580   17996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 04:36:07.957910   17996 logs.go:123] Gathering logs for etcd [f455d8b0d489] ...
	I0819 04:36:07.957920   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f455d8b0d489"
	I0819 04:36:07.971337   17996 logs.go:123] Gathering logs for coredns [b3828a6a9ea6] ...
	I0819 04:36:07.971350   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3828a6a9ea6"
	I0819 04:36:07.982395   17996 logs.go:123] Gathering logs for storage-provisioner [24ed6391a78f] ...
	I0819 04:36:07.982406   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 24ed6391a78f"
	I0819 04:36:07.994601   17996 logs.go:123] Gathering logs for container status ...
	I0819 04:36:07.994615   17996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 04:36:10.507752   17996 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 04:36:15.508298   17996 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 04:36:15.508465   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 04:36:15.523322   17996 logs.go:276] 1 containers: [a87ae25c67c2]
	I0819 04:36:15.523404   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 04:36:15.534191   17996 logs.go:276] 1 containers: [f455d8b0d489]
	I0819 04:36:15.534262   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 04:36:15.545064   17996 logs.go:276] 4 containers: [475e9da6b182 b3828a6a9ea6 a7da7cc69ccc b382a541c256]
	I0819 04:36:15.545138   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 04:36:15.555145   17996 logs.go:276] 1 containers: [9a7de925ae0b]
	I0819 04:36:15.555214   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 04:36:15.571206   17996 logs.go:276] 1 containers: [8a9318577d16]
	I0819 04:36:15.571283   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 04:36:15.583602   17996 logs.go:276] 1 containers: [9aa4015a0f4e]
	I0819 04:36:15.583676   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 04:36:15.598623   17996 logs.go:276] 0 containers: []
	W0819 04:36:15.598640   17996 logs.go:278] No container was found matching "kindnet"
	I0819 04:36:15.598701   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 04:36:15.613082   17996 logs.go:276] 1 containers: [24ed6391a78f]
	I0819 04:36:15.613102   17996 logs.go:123] Gathering logs for etcd [f455d8b0d489] ...
	I0819 04:36:15.613107   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f455d8b0d489"
	I0819 04:36:15.635858   17996 logs.go:123] Gathering logs for coredns [a7da7cc69ccc] ...
	I0819 04:36:15.635868   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7da7cc69ccc"
	I0819 04:36:15.648195   17996 logs.go:123] Gathering logs for kube-proxy [8a9318577d16] ...
	I0819 04:36:15.648209   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8a9318577d16"
	I0819 04:36:15.659827   17996 logs.go:123] Gathering logs for kubelet ...
	I0819 04:36:15.659837   17996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 04:36:15.698815   17996 logs.go:123] Gathering logs for dmesg ...
	I0819 04:36:15.698830   17996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 04:36:15.703796   17996 logs.go:123] Gathering logs for kube-apiserver [a87ae25c67c2] ...
	I0819 04:36:15.703803   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a87ae25c67c2"
	I0819 04:36:15.718325   17996 logs.go:123] Gathering logs for storage-provisioner [24ed6391a78f] ...
	I0819 04:36:15.718335   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 24ed6391a78f"
	I0819 04:36:15.729794   17996 logs.go:123] Gathering logs for Docker ...
	I0819 04:36:15.729805   17996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 04:36:15.754230   17996 logs.go:123] Gathering logs for container status ...
	I0819 04:36:15.754244   17996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 04:36:15.766432   17996 logs.go:123] Gathering logs for describe nodes ...
	I0819 04:36:15.766447   17996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 04:36:15.803178   17996 logs.go:123] Gathering logs for coredns [475e9da6b182] ...
	I0819 04:36:15.803194   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 475e9da6b182"
	I0819 04:36:15.814730   17996 logs.go:123] Gathering logs for kube-scheduler [9a7de925ae0b] ...
	I0819 04:36:15.814740   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9a7de925ae0b"
	I0819 04:36:15.830091   17996 logs.go:123] Gathering logs for coredns [b3828a6a9ea6] ...
	I0819 04:36:15.830100   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3828a6a9ea6"
	I0819 04:36:15.847825   17996 logs.go:123] Gathering logs for coredns [b382a541c256] ...
	I0819 04:36:15.847841   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b382a541c256"
	I0819 04:36:15.861384   17996 logs.go:123] Gathering logs for kube-controller-manager [9aa4015a0f4e] ...
	I0819 04:36:15.861395   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9aa4015a0f4e"
	I0819 04:36:18.380755   17996 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 04:36:23.382667   17996 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 04:36:23.382770   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 04:36:23.393640   17996 logs.go:276] 1 containers: [a87ae25c67c2]
	I0819 04:36:23.393708   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 04:36:23.404100   17996 logs.go:276] 1 containers: [f455d8b0d489]
	I0819 04:36:23.404168   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 04:36:23.415036   17996 logs.go:276] 4 containers: [475e9da6b182 b3828a6a9ea6 a7da7cc69ccc b382a541c256]
	I0819 04:36:23.415114   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 04:36:23.432364   17996 logs.go:276] 1 containers: [9a7de925ae0b]
	I0819 04:36:23.432426   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 04:36:23.443191   17996 logs.go:276] 1 containers: [8a9318577d16]
	I0819 04:36:23.443258   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 04:36:23.454123   17996 logs.go:276] 1 containers: [9aa4015a0f4e]
	I0819 04:36:23.454187   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 04:36:23.464538   17996 logs.go:276] 0 containers: []
	W0819 04:36:23.464551   17996 logs.go:278] No container was found matching "kindnet"
	I0819 04:36:23.464613   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 04:36:23.476975   17996 logs.go:276] 1 containers: [24ed6391a78f]
	I0819 04:36:23.476994   17996 logs.go:123] Gathering logs for dmesg ...
	I0819 04:36:23.476999   17996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 04:36:23.481438   17996 logs.go:123] Gathering logs for kube-scheduler [9a7de925ae0b] ...
	I0819 04:36:23.481445   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9a7de925ae0b"
	I0819 04:36:23.495872   17996 logs.go:123] Gathering logs for storage-provisioner [24ed6391a78f] ...
	I0819 04:36:23.495885   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 24ed6391a78f"
	I0819 04:36:23.508501   17996 logs.go:123] Gathering logs for Docker ...
	I0819 04:36:23.508513   17996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 04:36:23.533363   17996 logs.go:123] Gathering logs for kubelet ...
	I0819 04:36:23.533373   17996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 04:36:23.565658   17996 logs.go:123] Gathering logs for describe nodes ...
	I0819 04:36:23.565666   17996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 04:36:23.605709   17996 logs.go:123] Gathering logs for kube-apiserver [a87ae25c67c2] ...
	I0819 04:36:23.605720   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a87ae25c67c2"
	I0819 04:36:23.620207   17996 logs.go:123] Gathering logs for coredns [475e9da6b182] ...
	I0819 04:36:23.620219   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 475e9da6b182"
	I0819 04:36:23.632818   17996 logs.go:123] Gathering logs for coredns [b3828a6a9ea6] ...
	I0819 04:36:23.632831   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3828a6a9ea6"
	I0819 04:36:23.644575   17996 logs.go:123] Gathering logs for etcd [f455d8b0d489] ...
	I0819 04:36:23.644589   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f455d8b0d489"
	I0819 04:36:23.658486   17996 logs.go:123] Gathering logs for coredns [a7da7cc69ccc] ...
	I0819 04:36:23.658498   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7da7cc69ccc"
	I0819 04:36:23.671213   17996 logs.go:123] Gathering logs for kube-controller-manager [9aa4015a0f4e] ...
	I0819 04:36:23.671226   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9aa4015a0f4e"
	I0819 04:36:23.689074   17996 logs.go:123] Gathering logs for container status ...
	I0819 04:36:23.689086   17996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 04:36:23.700743   17996 logs.go:123] Gathering logs for coredns [b382a541c256] ...
	I0819 04:36:23.700753   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b382a541c256"
	I0819 04:36:23.713102   17996 logs.go:123] Gathering logs for kube-proxy [8a9318577d16] ...
	I0819 04:36:23.713116   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8a9318577d16"
	I0819 04:36:26.227327   17996 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 04:36:31.228275   17996 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 04:36:31.228467   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 04:36:31.246165   17996 logs.go:276] 1 containers: [a87ae25c67c2]
	I0819 04:36:31.246258   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 04:36:31.260191   17996 logs.go:276] 1 containers: [f455d8b0d489]
	I0819 04:36:31.260264   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 04:36:31.275132   17996 logs.go:276] 4 containers: [475e9da6b182 b3828a6a9ea6 a7da7cc69ccc b382a541c256]
	I0819 04:36:31.275214   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 04:36:31.289449   17996 logs.go:276] 1 containers: [9a7de925ae0b]
	I0819 04:36:31.289520   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 04:36:31.299873   17996 logs.go:276] 1 containers: [8a9318577d16]
	I0819 04:36:31.299949   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 04:36:31.316874   17996 logs.go:276] 1 containers: [9aa4015a0f4e]
	I0819 04:36:31.316943   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 04:36:31.327155   17996 logs.go:276] 0 containers: []
	W0819 04:36:31.327167   17996 logs.go:278] No container was found matching "kindnet"
	I0819 04:36:31.327220   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 04:36:31.337830   17996 logs.go:276] 1 containers: [24ed6391a78f]
	I0819 04:36:31.337848   17996 logs.go:123] Gathering logs for kubelet ...
	I0819 04:36:31.337854   17996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 04:36:31.372715   17996 logs.go:123] Gathering logs for coredns [475e9da6b182] ...
	I0819 04:36:31.372724   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 475e9da6b182"
	I0819 04:36:31.384104   17996 logs.go:123] Gathering logs for Docker ...
	I0819 04:36:31.384115   17996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 04:36:31.409399   17996 logs.go:123] Gathering logs for kube-proxy [8a9318577d16] ...
	I0819 04:36:31.409407   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8a9318577d16"
	I0819 04:36:31.421126   17996 logs.go:123] Gathering logs for container status ...
	I0819 04:36:31.421138   17996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 04:36:31.461302   17996 logs.go:123] Gathering logs for describe nodes ...
	I0819 04:36:31.461316   17996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 04:36:31.501109   17996 logs.go:123] Gathering logs for etcd [f455d8b0d489] ...
	I0819 04:36:31.501122   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f455d8b0d489"
	I0819 04:36:31.515900   17996 logs.go:123] Gathering logs for coredns [b3828a6a9ea6] ...
	I0819 04:36:31.515914   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3828a6a9ea6"
	I0819 04:36:31.527594   17996 logs.go:123] Gathering logs for coredns [b382a541c256] ...
	I0819 04:36:31.527607   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b382a541c256"
	I0819 04:36:31.539388   17996 logs.go:123] Gathering logs for storage-provisioner [24ed6391a78f] ...
	I0819 04:36:31.539398   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 24ed6391a78f"
	I0819 04:36:31.551184   17996 logs.go:123] Gathering logs for dmesg ...
	I0819 04:36:31.551200   17996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 04:36:31.555987   17996 logs.go:123] Gathering logs for kube-apiserver [a87ae25c67c2] ...
	I0819 04:36:31.555993   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a87ae25c67c2"
	I0819 04:36:31.570436   17996 logs.go:123] Gathering logs for coredns [a7da7cc69ccc] ...
	I0819 04:36:31.570448   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7da7cc69ccc"
	I0819 04:36:31.582856   17996 logs.go:123] Gathering logs for kube-controller-manager [9aa4015a0f4e] ...
	I0819 04:36:31.582867   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9aa4015a0f4e"
	I0819 04:36:31.600718   17996 logs.go:123] Gathering logs for kube-scheduler [9a7de925ae0b] ...
	I0819 04:36:31.600731   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9a7de925ae0b"
	I0819 04:36:34.117362   17996 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 04:36:39.118676   17996 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 04:36:39.118855   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 04:36:39.133144   17996 logs.go:276] 1 containers: [a87ae25c67c2]
	I0819 04:36:39.133231   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 04:36:39.144787   17996 logs.go:276] 1 containers: [f455d8b0d489]
	I0819 04:36:39.144851   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 04:36:39.156128   17996 logs.go:276] 4 containers: [475e9da6b182 b3828a6a9ea6 a7da7cc69ccc b382a541c256]
	I0819 04:36:39.156205   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 04:36:39.172347   17996 logs.go:276] 1 containers: [9a7de925ae0b]
	I0819 04:36:39.172412   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 04:36:39.183031   17996 logs.go:276] 1 containers: [8a9318577d16]
	I0819 04:36:39.183101   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 04:36:39.193236   17996 logs.go:276] 1 containers: [9aa4015a0f4e]
	I0819 04:36:39.193301   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 04:36:39.204884   17996 logs.go:276] 0 containers: []
	W0819 04:36:39.204896   17996 logs.go:278] No container was found matching "kindnet"
	I0819 04:36:39.204960   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 04:36:39.217686   17996 logs.go:276] 1 containers: [24ed6391a78f]
	I0819 04:36:39.217703   17996 logs.go:123] Gathering logs for coredns [475e9da6b182] ...
	I0819 04:36:39.217709   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 475e9da6b182"
	I0819 04:36:39.229564   17996 logs.go:123] Gathering logs for coredns [a7da7cc69ccc] ...
	I0819 04:36:39.229577   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7da7cc69ccc"
	I0819 04:36:39.241676   17996 logs.go:123] Gathering logs for kube-controller-manager [9aa4015a0f4e] ...
	I0819 04:36:39.241690   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9aa4015a0f4e"
	I0819 04:36:39.260655   17996 logs.go:123] Gathering logs for storage-provisioner [24ed6391a78f] ...
	I0819 04:36:39.260664   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 24ed6391a78f"
	I0819 04:36:39.281661   17996 logs.go:123] Gathering logs for dmesg ...
	I0819 04:36:39.281674   17996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 04:36:39.286278   17996 logs.go:123] Gathering logs for kube-apiserver [a87ae25c67c2] ...
	I0819 04:36:39.286287   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a87ae25c67c2"
	I0819 04:36:39.300600   17996 logs.go:123] Gathering logs for kube-scheduler [9a7de925ae0b] ...
	I0819 04:36:39.300610   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9a7de925ae0b"
	I0819 04:36:39.314946   17996 logs.go:123] Gathering logs for kube-proxy [8a9318577d16] ...
	I0819 04:36:39.314959   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8a9318577d16"
	I0819 04:36:39.330030   17996 logs.go:123] Gathering logs for Docker ...
	I0819 04:36:39.330041   17996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 04:36:39.355239   17996 logs.go:123] Gathering logs for kubelet ...
	I0819 04:36:39.355247   17996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 04:36:39.389903   17996 logs.go:123] Gathering logs for coredns [b3828a6a9ea6] ...
	I0819 04:36:39.389915   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3828a6a9ea6"
	I0819 04:36:39.401528   17996 logs.go:123] Gathering logs for describe nodes ...
	I0819 04:36:39.401542   17996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 04:36:39.436792   17996 logs.go:123] Gathering logs for coredns [b382a541c256] ...
	I0819 04:36:39.436803   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b382a541c256"
	I0819 04:36:39.449094   17996 logs.go:123] Gathering logs for container status ...
	I0819 04:36:39.449107   17996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 04:36:39.461064   17996 logs.go:123] Gathering logs for etcd [f455d8b0d489] ...
	I0819 04:36:39.461078   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f455d8b0d489"
	I0819 04:36:41.976699   17996 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 04:36:46.978897   17996 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 04:36:46.979029   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 04:36:46.992476   17996 logs.go:276] 1 containers: [a87ae25c67c2]
	I0819 04:36:46.992556   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 04:36:47.004813   17996 logs.go:276] 1 containers: [f455d8b0d489]
	I0819 04:36:47.004880   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 04:36:47.015901   17996 logs.go:276] 4 containers: [475e9da6b182 b3828a6a9ea6 a7da7cc69ccc b382a541c256]
	I0819 04:36:47.015971   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 04:36:47.026339   17996 logs.go:276] 1 containers: [9a7de925ae0b]
	I0819 04:36:47.026409   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 04:36:47.037603   17996 logs.go:276] 1 containers: [8a9318577d16]
	I0819 04:36:47.037667   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 04:36:47.049031   17996 logs.go:276] 1 containers: [9aa4015a0f4e]
	I0819 04:36:47.049096   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 04:36:47.058993   17996 logs.go:276] 0 containers: []
	W0819 04:36:47.059006   17996 logs.go:278] No container was found matching "kindnet"
	I0819 04:36:47.059067   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 04:36:47.080560   17996 logs.go:276] 1 containers: [24ed6391a78f]
	I0819 04:36:47.080583   17996 logs.go:123] Gathering logs for dmesg ...
	I0819 04:36:47.080589   17996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 04:36:47.085155   17996 logs.go:123] Gathering logs for coredns [b382a541c256] ...
	I0819 04:36:47.085162   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b382a541c256"
	I0819 04:36:47.096805   17996 logs.go:123] Gathering logs for container status ...
	I0819 04:36:47.096815   17996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 04:36:47.108453   17996 logs.go:123] Gathering logs for kube-apiserver [a87ae25c67c2] ...
	I0819 04:36:47.108466   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a87ae25c67c2"
	I0819 04:36:47.123017   17996 logs.go:123] Gathering logs for coredns [a7da7cc69ccc] ...
	I0819 04:36:47.123029   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7da7cc69ccc"
	I0819 04:36:47.134615   17996 logs.go:123] Gathering logs for kube-controller-manager [9aa4015a0f4e] ...
	I0819 04:36:47.134628   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9aa4015a0f4e"
	I0819 04:36:47.152323   17996 logs.go:123] Gathering logs for storage-provisioner [24ed6391a78f] ...
	I0819 04:36:47.152338   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 24ed6391a78f"
	I0819 04:36:47.163922   17996 logs.go:123] Gathering logs for etcd [f455d8b0d489] ...
	I0819 04:36:47.163932   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f455d8b0d489"
	I0819 04:36:47.180611   17996 logs.go:123] Gathering logs for coredns [475e9da6b182] ...
	I0819 04:36:47.180623   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 475e9da6b182"
	I0819 04:36:47.192539   17996 logs.go:123] Gathering logs for kube-scheduler [9a7de925ae0b] ...
	I0819 04:36:47.192555   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9a7de925ae0b"
	I0819 04:36:47.208295   17996 logs.go:123] Gathering logs for Docker ...
	I0819 04:36:47.208307   17996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 04:36:47.233132   17996 logs.go:123] Gathering logs for kubelet ...
	I0819 04:36:47.233141   17996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 04:36:47.267342   17996 logs.go:123] Gathering logs for describe nodes ...
	I0819 04:36:47.267355   17996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 04:36:47.305743   17996 logs.go:123] Gathering logs for coredns [b3828a6a9ea6] ...
	I0819 04:36:47.305756   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3828a6a9ea6"
	I0819 04:36:47.317298   17996 logs.go:123] Gathering logs for kube-proxy [8a9318577d16] ...
	I0819 04:36:47.317310   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8a9318577d16"
	I0819 04:36:49.834890   17996 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 04:36:54.837140   17996 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 04:36:54.837234   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 04:36:54.848286   17996 logs.go:276] 1 containers: [a87ae25c67c2]
	I0819 04:36:54.848372   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 04:36:54.861188   17996 logs.go:276] 1 containers: [f455d8b0d489]
	I0819 04:36:54.861256   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 04:36:54.871608   17996 logs.go:276] 4 containers: [475e9da6b182 b3828a6a9ea6 a7da7cc69ccc b382a541c256]
	I0819 04:36:54.871683   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 04:36:54.882667   17996 logs.go:276] 1 containers: [9a7de925ae0b]
	I0819 04:36:54.882735   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 04:36:54.896384   17996 logs.go:276] 1 containers: [8a9318577d16]
	I0819 04:36:54.896459   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 04:36:54.907582   17996 logs.go:276] 1 containers: [9aa4015a0f4e]
	I0819 04:36:54.907654   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 04:36:54.917848   17996 logs.go:276] 0 containers: []
	W0819 04:36:54.917859   17996 logs.go:278] No container was found matching "kindnet"
	I0819 04:36:54.917914   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 04:36:54.928529   17996 logs.go:276] 1 containers: [24ed6391a78f]
	I0819 04:36:54.928547   17996 logs.go:123] Gathering logs for storage-provisioner [24ed6391a78f] ...
	I0819 04:36:54.928552   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 24ed6391a78f"
	I0819 04:36:54.939777   17996 logs.go:123] Gathering logs for kube-apiserver [a87ae25c67c2] ...
	I0819 04:36:54.939788   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a87ae25c67c2"
	I0819 04:36:54.953466   17996 logs.go:123] Gathering logs for etcd [f455d8b0d489] ...
	I0819 04:36:54.953478   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f455d8b0d489"
	I0819 04:36:54.967441   17996 logs.go:123] Gathering logs for coredns [b3828a6a9ea6] ...
	I0819 04:36:54.967456   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3828a6a9ea6"
	I0819 04:36:54.978970   17996 logs.go:123] Gathering logs for coredns [b382a541c256] ...
	I0819 04:36:54.978980   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b382a541c256"
	I0819 04:36:54.990725   17996 logs.go:123] Gathering logs for Docker ...
	I0819 04:36:54.990739   17996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 04:36:55.014368   17996 logs.go:123] Gathering logs for describe nodes ...
	I0819 04:36:55.014379   17996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 04:36:55.050907   17996 logs.go:123] Gathering logs for coredns [475e9da6b182] ...
	I0819 04:36:55.050920   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 475e9da6b182"
	I0819 04:36:55.062902   17996 logs.go:123] Gathering logs for kube-proxy [8a9318577d16] ...
	I0819 04:36:55.062915   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8a9318577d16"
	I0819 04:36:55.074551   17996 logs.go:123] Gathering logs for kubelet ...
	I0819 04:36:55.074564   17996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 04:36:55.109609   17996 logs.go:123] Gathering logs for coredns [a7da7cc69ccc] ...
	I0819 04:36:55.109623   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7da7cc69ccc"
	I0819 04:36:55.121697   17996 logs.go:123] Gathering logs for kube-scheduler [9a7de925ae0b] ...
	I0819 04:36:55.121709   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9a7de925ae0b"
	I0819 04:36:55.136374   17996 logs.go:123] Gathering logs for kube-controller-manager [9aa4015a0f4e] ...
	I0819 04:36:55.136388   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9aa4015a0f4e"
	I0819 04:36:55.154378   17996 logs.go:123] Gathering logs for container status ...
	I0819 04:36:55.154387   17996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 04:36:55.166286   17996 logs.go:123] Gathering logs for dmesg ...
	I0819 04:36:55.166298   17996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 04:36:57.673556   17996 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 04:37:02.674602   17996 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": dial tcp 10.0.2.15:8443: i/o timeout (Client.Timeout exceeded while awaiting headers)
	I0819 04:37:02.674711   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 04:37:02.686571   17996 logs.go:276] 1 containers: [a87ae25c67c2]
	I0819 04:37:02.686641   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 04:37:02.705203   17996 logs.go:276] 1 containers: [f455d8b0d489]
	I0819 04:37:02.705281   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 04:37:02.717481   17996 logs.go:276] 4 containers: [475e9da6b182 b3828a6a9ea6 a7da7cc69ccc b382a541c256]
	I0819 04:37:02.717562   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 04:37:02.729390   17996 logs.go:276] 1 containers: [9a7de925ae0b]
	I0819 04:37:02.729464   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 04:37:02.740420   17996 logs.go:276] 1 containers: [8a9318577d16]
	I0819 04:37:02.740493   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 04:37:02.752609   17996 logs.go:276] 1 containers: [9aa4015a0f4e]
	I0819 04:37:02.752678   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 04:37:02.765191   17996 logs.go:276] 0 containers: []
	W0819 04:37:02.765204   17996 logs.go:278] No container was found matching "kindnet"
	I0819 04:37:02.765262   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 04:37:02.777261   17996 logs.go:276] 1 containers: [24ed6391a78f]
	I0819 04:37:02.777280   17996 logs.go:123] Gathering logs for kubelet ...
	I0819 04:37:02.777288   17996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 04:37:02.814400   17996 logs.go:123] Gathering logs for kube-apiserver [a87ae25c67c2] ...
	I0819 04:37:02.814420   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a87ae25c67c2"
	I0819 04:37:02.830464   17996 logs.go:123] Gathering logs for coredns [475e9da6b182] ...
	I0819 04:37:02.830477   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 475e9da6b182"
	I0819 04:37:02.852083   17996 logs.go:123] Gathering logs for kube-proxy [8a9318577d16] ...
	I0819 04:37:02.852094   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8a9318577d16"
	I0819 04:37:02.865081   17996 logs.go:123] Gathering logs for kube-controller-manager [9aa4015a0f4e] ...
	I0819 04:37:02.865093   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9aa4015a0f4e"
	I0819 04:37:02.884799   17996 logs.go:123] Gathering logs for Docker ...
	I0819 04:37:02.884817   17996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 04:37:02.910364   17996 logs.go:123] Gathering logs for container status ...
	I0819 04:37:02.910376   17996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 04:37:02.922577   17996 logs.go:123] Gathering logs for dmesg ...
	I0819 04:37:02.922592   17996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 04:37:02.927381   17996 logs.go:123] Gathering logs for coredns [b3828a6a9ea6] ...
	I0819 04:37:02.927390   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3828a6a9ea6"
	I0819 04:37:02.942123   17996 logs.go:123] Gathering logs for coredns [a7da7cc69ccc] ...
	I0819 04:37:02.942136   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7da7cc69ccc"
	I0819 04:37:02.955025   17996 logs.go:123] Gathering logs for coredns [b382a541c256] ...
	I0819 04:37:02.955036   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b382a541c256"
	I0819 04:37:02.968208   17996 logs.go:123] Gathering logs for kube-scheduler [9a7de925ae0b] ...
	I0819 04:37:02.968220   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9a7de925ae0b"
	I0819 04:37:02.983897   17996 logs.go:123] Gathering logs for storage-provisioner [24ed6391a78f] ...
	I0819 04:37:02.983912   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 24ed6391a78f"
	I0819 04:37:02.997928   17996 logs.go:123] Gathering logs for describe nodes ...
	I0819 04:37:02.997941   17996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 04:37:03.036399   17996 logs.go:123] Gathering logs for etcd [f455d8b0d489] ...
	I0819 04:37:03.036412   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f455d8b0d489"
	I0819 04:37:05.553358   17996 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 04:37:10.553574   17996 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 04:37:10.553641   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 04:37:10.579563   17996 logs.go:276] 1 containers: [a87ae25c67c2]
	I0819 04:37:10.579636   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 04:37:10.607928   17996 logs.go:276] 1 containers: [f455d8b0d489]
	I0819 04:37:10.608003   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 04:37:10.620002   17996 logs.go:276] 4 containers: [475e9da6b182 b3828a6a9ea6 a7da7cc69ccc b382a541c256]
	I0819 04:37:10.620074   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 04:37:10.630865   17996 logs.go:276] 1 containers: [9a7de925ae0b]
	I0819 04:37:10.630933   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 04:37:10.641486   17996 logs.go:276] 1 containers: [8a9318577d16]
	I0819 04:37:10.641556   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 04:37:10.652967   17996 logs.go:276] 1 containers: [9aa4015a0f4e]
	I0819 04:37:10.653034   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 04:37:10.671455   17996 logs.go:276] 0 containers: []
	W0819 04:37:10.671464   17996 logs.go:278] No container was found matching "kindnet"
	I0819 04:37:10.671509   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 04:37:10.681645   17996 logs.go:276] 1 containers: [24ed6391a78f]
	I0819 04:37:10.681663   17996 logs.go:123] Gathering logs for kube-proxy [8a9318577d16] ...
	I0819 04:37:10.681668   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8a9318577d16"
	I0819 04:37:10.693256   17996 logs.go:123] Gathering logs for kube-controller-manager [9aa4015a0f4e] ...
	I0819 04:37:10.693265   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9aa4015a0f4e"
	I0819 04:37:10.710959   17996 logs.go:123] Gathering logs for container status ...
	I0819 04:37:10.710971   17996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 04:37:10.723349   17996 logs.go:123] Gathering logs for coredns [b3828a6a9ea6] ...
	I0819 04:37:10.723361   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3828a6a9ea6"
	I0819 04:37:10.736385   17996 logs.go:123] Gathering logs for coredns [a7da7cc69ccc] ...
	I0819 04:37:10.736398   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7da7cc69ccc"
	I0819 04:37:10.749006   17996 logs.go:123] Gathering logs for coredns [b382a541c256] ...
	I0819 04:37:10.749022   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b382a541c256"
	I0819 04:37:10.765802   17996 logs.go:123] Gathering logs for Docker ...
	I0819 04:37:10.765815   17996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 04:37:10.792188   17996 logs.go:123] Gathering logs for dmesg ...
	I0819 04:37:10.792208   17996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 04:37:10.797524   17996 logs.go:123] Gathering logs for describe nodes ...
	I0819 04:37:10.797535   17996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 04:37:10.840458   17996 logs.go:123] Gathering logs for coredns [475e9da6b182] ...
	I0819 04:37:10.840473   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 475e9da6b182"
	I0819 04:37:10.857358   17996 logs.go:123] Gathering logs for storage-provisioner [24ed6391a78f] ...
	I0819 04:37:10.857372   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 24ed6391a78f"
	I0819 04:37:10.869643   17996 logs.go:123] Gathering logs for kubelet ...
	I0819 04:37:10.869654   17996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 04:37:10.904293   17996 logs.go:123] Gathering logs for kube-apiserver [a87ae25c67c2] ...
	I0819 04:37:10.904309   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a87ae25c67c2"
	I0819 04:37:10.919561   17996 logs.go:123] Gathering logs for etcd [f455d8b0d489] ...
	I0819 04:37:10.919579   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f455d8b0d489"
	I0819 04:37:10.934340   17996 logs.go:123] Gathering logs for kube-scheduler [9a7de925ae0b] ...
	I0819 04:37:10.934355   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9a7de925ae0b"
	I0819 04:37:13.454595   17996 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 04:37:18.456858   17996 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 04:37:18.457261   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 04:37:18.489185   17996 logs.go:276] 1 containers: [a87ae25c67c2]
	I0819 04:37:18.489320   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 04:37:18.509043   17996 logs.go:276] 1 containers: [f455d8b0d489]
	I0819 04:37:18.509130   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 04:37:18.524059   17996 logs.go:276] 4 containers: [475e9da6b182 b3828a6a9ea6 a7da7cc69ccc b382a541c256]
	I0819 04:37:18.524144   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 04:37:18.536454   17996 logs.go:276] 1 containers: [9a7de925ae0b]
	I0819 04:37:18.536527   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 04:37:18.548106   17996 logs.go:276] 1 containers: [8a9318577d16]
	I0819 04:37:18.548170   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 04:37:18.558919   17996 logs.go:276] 1 containers: [9aa4015a0f4e]
	I0819 04:37:18.558979   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 04:37:18.579873   17996 logs.go:276] 0 containers: []
	W0819 04:37:18.579885   17996 logs.go:278] No container was found matching "kindnet"
	I0819 04:37:18.579950   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 04:37:18.599808   17996 logs.go:276] 1 containers: [24ed6391a78f]
	I0819 04:37:18.599828   17996 logs.go:123] Gathering logs for kube-proxy [8a9318577d16] ...
	I0819 04:37:18.599833   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8a9318577d16"
	I0819 04:37:18.613108   17996 logs.go:123] Gathering logs for storage-provisioner [24ed6391a78f] ...
	I0819 04:37:18.613121   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 24ed6391a78f"
	I0819 04:37:18.625603   17996 logs.go:123] Gathering logs for kubelet ...
	I0819 04:37:18.625615   17996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 04:37:18.661172   17996 logs.go:123] Gathering logs for describe nodes ...
	I0819 04:37:18.661180   17996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 04:37:18.697458   17996 logs.go:123] Gathering logs for coredns [475e9da6b182] ...
	I0819 04:37:18.697469   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 475e9da6b182"
	I0819 04:37:18.710318   17996 logs.go:123] Gathering logs for coredns [b382a541c256] ...
	I0819 04:37:18.710329   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b382a541c256"
	I0819 04:37:18.722889   17996 logs.go:123] Gathering logs for dmesg ...
	I0819 04:37:18.722901   17996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 04:37:18.727684   17996 logs.go:123] Gathering logs for coredns [a7da7cc69ccc] ...
	I0819 04:37:18.727692   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7da7cc69ccc"
	I0819 04:37:18.739608   17996 logs.go:123] Gathering logs for Docker ...
	I0819 04:37:18.739619   17996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 04:37:18.762421   17996 logs.go:123] Gathering logs for kube-apiserver [a87ae25c67c2] ...
	I0819 04:37:18.762430   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a87ae25c67c2"
	I0819 04:37:18.780340   17996 logs.go:123] Gathering logs for etcd [f455d8b0d489] ...
	I0819 04:37:18.780350   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f455d8b0d489"
	I0819 04:37:18.794055   17996 logs.go:123] Gathering logs for coredns [b3828a6a9ea6] ...
	I0819 04:37:18.794066   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3828a6a9ea6"
	I0819 04:37:18.805783   17996 logs.go:123] Gathering logs for kube-controller-manager [9aa4015a0f4e] ...
	I0819 04:37:18.805792   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9aa4015a0f4e"
	I0819 04:37:18.823515   17996 logs.go:123] Gathering logs for kube-scheduler [9a7de925ae0b] ...
	I0819 04:37:18.823526   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9a7de925ae0b"
	I0819 04:37:18.837789   17996 logs.go:123] Gathering logs for container status ...
	I0819 04:37:18.837802   17996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 04:37:21.350701   17996 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 04:37:26.352929   17996 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 04:37:26.353203   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 04:37:26.371980   17996 logs.go:276] 1 containers: [a87ae25c67c2]
	I0819 04:37:26.372069   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 04:37:26.385859   17996 logs.go:276] 1 containers: [f455d8b0d489]
	I0819 04:37:26.385936   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 04:37:26.397668   17996 logs.go:276] 4 containers: [475e9da6b182 b3828a6a9ea6 a7da7cc69ccc b382a541c256]
	I0819 04:37:26.397740   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 04:37:26.412957   17996 logs.go:276] 1 containers: [9a7de925ae0b]
	I0819 04:37:26.413028   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 04:37:26.423939   17996 logs.go:276] 1 containers: [8a9318577d16]
	I0819 04:37:26.424013   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 04:37:26.434819   17996 logs.go:276] 1 containers: [9aa4015a0f4e]
	I0819 04:37:26.434889   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 04:37:26.445348   17996 logs.go:276] 0 containers: []
	W0819 04:37:26.445360   17996 logs.go:278] No container was found matching "kindnet"
	I0819 04:37:26.445426   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 04:37:26.456472   17996 logs.go:276] 1 containers: [24ed6391a78f]
	I0819 04:37:26.456491   17996 logs.go:123] Gathering logs for etcd [f455d8b0d489] ...
	I0819 04:37:26.456496   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f455d8b0d489"
	I0819 04:37:26.471189   17996 logs.go:123] Gathering logs for coredns [a7da7cc69ccc] ...
	I0819 04:37:26.471202   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7da7cc69ccc"
	I0819 04:37:26.483036   17996 logs.go:123] Gathering logs for kube-proxy [8a9318577d16] ...
	I0819 04:37:26.483048   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8a9318577d16"
	I0819 04:37:26.495261   17996 logs.go:123] Gathering logs for coredns [475e9da6b182] ...
	I0819 04:37:26.495272   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 475e9da6b182"
	I0819 04:37:26.507032   17996 logs.go:123] Gathering logs for coredns [b382a541c256] ...
	I0819 04:37:26.507044   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b382a541c256"
	I0819 04:37:26.519290   17996 logs.go:123] Gathering logs for kube-scheduler [9a7de925ae0b] ...
	I0819 04:37:26.519301   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9a7de925ae0b"
	I0819 04:37:26.534108   17996 logs.go:123] Gathering logs for kubelet ...
	I0819 04:37:26.534119   17996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 04:37:26.569499   17996 logs.go:123] Gathering logs for dmesg ...
	I0819 04:37:26.569511   17996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 04:37:26.575366   17996 logs.go:123] Gathering logs for describe nodes ...
	I0819 04:37:26.575375   17996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 04:37:26.610943   17996 logs.go:123] Gathering logs for coredns [b3828a6a9ea6] ...
	I0819 04:37:26.610954   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3828a6a9ea6"
	I0819 04:37:26.622818   17996 logs.go:123] Gathering logs for kube-controller-manager [9aa4015a0f4e] ...
	I0819 04:37:26.622832   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9aa4015a0f4e"
	I0819 04:37:26.640658   17996 logs.go:123] Gathering logs for storage-provisioner [24ed6391a78f] ...
	I0819 04:37:26.640669   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 24ed6391a78f"
	I0819 04:37:26.653406   17996 logs.go:123] Gathering logs for kube-apiserver [a87ae25c67c2] ...
	I0819 04:37:26.653418   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a87ae25c67c2"
	I0819 04:37:26.668008   17996 logs.go:123] Gathering logs for Docker ...
	I0819 04:37:26.668021   17996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 04:37:26.691906   17996 logs.go:123] Gathering logs for container status ...
	I0819 04:37:26.691921   17996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 04:37:29.207339   17996 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 04:37:34.209614   17996 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 04:37:34.213209   17996 out.go:201] 
	W0819 04:37:34.217071   17996 out.go:270] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	W0819 04:37:34.217080   17996 out.go:270] * 
	W0819 04:37:34.217680   17996 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0819 04:37:34.228995   17996 out.go:201] 

** /stderr **
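
The stderr capture above records minikube's poll-until-deadline pattern: probe https://10.0.2.15:8443/healthz with a roughly 5s per-request timeout, sweep the component logs between attempts, and give up once the overall 6m0s node wait expires with "apiserver healthz never reported healthy". A minimal, self-contained Go sketch of that pattern follows; the endpoint, durations, and function names here are illustrative assumptions, not minikube's actual implementation.

// pollhealthz.go: a sketch of the poll-until-deadline loop seen in the
// log above. All names and durations are illustrative assumptions.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func waitForHealthz(url string, deadline time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second, // matches the ~5s per-probe timeout in the log
		// A bootstrapping apiserver serves a self-signed certificate,
		// so this sketch skips certificate verification.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	stop := time.Now().Add(deadline)
	for time.Now().Before(stop) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // healthz reported healthy
			}
		}
		// The real loop gathers container logs between attempts;
		// this sketch just waits before retrying.
		time.Sleep(2500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver healthz never reported healthy: %s deadline exceeded", deadline)
}

func main() {
	if err := waitForHealthz("https://10.0.2.15:8443/healthz", 6*time.Minute); err != nil {
		fmt.Println("X GUEST_START:", err)
	}
}

Against an unreachable guest, every probe in this sketch times out and the loop exits with the same deadline error the test recorded, which is why the log shows the identical probe-and-gather cycle repeating until the final GUEST_START exit.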
version_upgrade_test.go:132: upgrade from v1.26.0 to HEAD failed: out/minikube-darwin-arm64 start -p running-upgrade-038000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80
panic.go:626: *** TestRunningBinaryUpgrade FAILED at 2024-08-19 04:37:34.323285 -0700 PDT m=+1302.651462126
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p running-upgrade-038000 -n running-upgrade-038000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p running-upgrade-038000 -n running-upgrade-038000: exit status 2 (15.729878333s)

-- stdout --
	Running

-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestRunningBinaryUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestRunningBinaryUpgrade]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-arm64 -p running-upgrade-038000 logs -n 25
helpers_test.go:252: TestRunningBinaryUpgrade logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                 Args                  |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| start   | -p force-systemd-flag-788000          | force-systemd-flag-788000 | jenkins | v1.33.1 | 19 Aug 24 04:27 PDT |                     |
	|         | --memory=2048 --force-systemd         |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=5                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| ssh     | force-systemd-env-510000              | force-systemd-env-510000  | jenkins | v1.33.1 | 19 Aug 24 04:27 PDT |                     |
	|         | ssh docker info --format              |                           |         |         |                     |                     |
	|         | {{.CgroupDriver}}                     |                           |         |         |                     |                     |
	| delete  | -p force-systemd-env-510000           | force-systemd-env-510000  | jenkins | v1.33.1 | 19 Aug 24 04:27 PDT | 19 Aug 24 04:27 PDT |
	| start   | -p docker-flags-007000                | docker-flags-007000       | jenkins | v1.33.1 | 19 Aug 24 04:27 PDT |                     |
	|         | --cache-images=false                  |                           |         |         |                     |                     |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --install-addons=false                |                           |         |         |                     |                     |
	|         | --wait=false                          |                           |         |         |                     |                     |
	|         | --docker-env=FOO=BAR                  |                           |         |         |                     |                     |
	|         | --docker-env=BAZ=BAT                  |                           |         |         |                     |                     |
	|         | --docker-opt=debug                    |                           |         |         |                     |                     |
	|         | --docker-opt=icc=true                 |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=5                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| ssh     | force-systemd-flag-788000             | force-systemd-flag-788000 | jenkins | v1.33.1 | 19 Aug 24 04:28 PDT |                     |
	|         | ssh docker info --format              |                           |         |         |                     |                     |
	|         | {{.CgroupDriver}}                     |                           |         |         |                     |                     |
	| delete  | -p force-systemd-flag-788000          | force-systemd-flag-788000 | jenkins | v1.33.1 | 19 Aug 24 04:28 PDT | 19 Aug 24 04:28 PDT |
	| start   | -p cert-expiration-979000             | cert-expiration-979000    | jenkins | v1.33.1 | 19 Aug 24 04:28 PDT |                     |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --cert-expiration=3m                  |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| ssh     | docker-flags-007000 ssh               | docker-flags-007000       | jenkins | v1.33.1 | 19 Aug 24 04:28 PDT |                     |
	|         | sudo systemctl show docker            |                           |         |         |                     |                     |
	|         | --property=Environment                |                           |         |         |                     |                     |
	|         | --no-pager                            |                           |         |         |                     |                     |
	| ssh     | docker-flags-007000 ssh               | docker-flags-007000       | jenkins | v1.33.1 | 19 Aug 24 04:28 PDT |                     |
	|         | sudo systemctl show docker            |                           |         |         |                     |                     |
	|         | --property=ExecStart                  |                           |         |         |                     |                     |
	|         | --no-pager                            |                           |         |         |                     |                     |
	| delete  | -p docker-flags-007000                | docker-flags-007000       | jenkins | v1.33.1 | 19 Aug 24 04:28 PDT | 19 Aug 24 04:28 PDT |
	| start   | -p cert-options-427000                | cert-options-427000       | jenkins | v1.33.1 | 19 Aug 24 04:28 PDT |                     |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --apiserver-ips=127.0.0.1             |                           |         |         |                     |                     |
	|         | --apiserver-ips=192.168.15.15         |                           |         |         |                     |                     |
	|         | --apiserver-names=localhost           |                           |         |         |                     |                     |
	|         | --apiserver-names=www.google.com      |                           |         |         |                     |                     |
	|         | --apiserver-port=8555                 |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| ssh     | cert-options-427000 ssh               | cert-options-427000       | jenkins | v1.33.1 | 19 Aug 24 04:28 PDT |                     |
	|         | openssl x509 -text -noout -in         |                           |         |         |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt |                           |         |         |                     |                     |
	| ssh     | -p cert-options-427000 -- sudo        | cert-options-427000       | jenkins | v1.33.1 | 19 Aug 24 04:28 PDT |                     |
	|         | cat /etc/kubernetes/admin.conf        |                           |         |         |                     |                     |
	| delete  | -p cert-options-427000                | cert-options-427000       | jenkins | v1.33.1 | 19 Aug 24 04:28 PDT | 19 Aug 24 04:28 PDT |
	| start   | -p running-upgrade-038000             | minikube                  | jenkins | v1.26.0 | 19 Aug 24 04:28 PDT | 19 Aug 24 04:29 PDT |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --vm-driver=qemu2                     |                           |         |         |                     |                     |
	| start   | -p running-upgrade-038000             | running-upgrade-038000    | jenkins | v1.33.1 | 19 Aug 24 04:29 PDT |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| start   | -p cert-expiration-979000             | cert-expiration-979000    | jenkins | v1.33.1 | 19 Aug 24 04:31 PDT |                     |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --cert-expiration=8760h               |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| delete  | -p cert-expiration-979000             | cert-expiration-979000    | jenkins | v1.33.1 | 19 Aug 24 04:31 PDT | 19 Aug 24 04:31 PDT |
	| start   | -p kubernetes-upgrade-241000          | kubernetes-upgrade-241000 | jenkins | v1.33.1 | 19 Aug 24 04:31 PDT |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0          |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| stop    | -p kubernetes-upgrade-241000          | kubernetes-upgrade-241000 | jenkins | v1.33.1 | 19 Aug 24 04:31 PDT | 19 Aug 24 04:31 PDT |
	| start   | -p kubernetes-upgrade-241000          | kubernetes-upgrade-241000 | jenkins | v1.33.1 | 19 Aug 24 04:31 PDT |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0          |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-241000          | kubernetes-upgrade-241000 | jenkins | v1.33.1 | 19 Aug 24 04:31 PDT | 19 Aug 24 04:31 PDT |
	| start   | -p stopped-upgrade-783000             | minikube                  | jenkins | v1.26.0 | 19 Aug 24 04:31 PDT | 19 Aug 24 04:32 PDT |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --vm-driver=qemu2                     |                           |         |         |                     |                     |
	| stop    | stopped-upgrade-783000 stop           | minikube                  | jenkins | v1.26.0 | 19 Aug 24 04:32 PDT | 19 Aug 24 04:32 PDT |
	| start   | -p stopped-upgrade-783000             | stopped-upgrade-783000    | jenkins | v1.33.1 | 19 Aug 24 04:32 PDT |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/19 04:32:28
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.22.5 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0819 04:32:28.762322   18442 out.go:345] Setting OutFile to fd 1 ...
	I0819 04:32:28.762472   18442 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 04:32:28.762478   18442 out.go:358] Setting ErrFile to fd 2...
	I0819 04:32:28.762481   18442 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 04:32:28.762660   18442 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19479-15750/.minikube/bin
	I0819 04:32:28.763974   18442 out.go:352] Setting JSON to false
	I0819 04:32:28.783445   18442 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":9116,"bootTime":1724058032,"procs":467,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0819 04:32:28.783525   18442 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0819 04:32:28.788930   18442 out.go:177] * [stopped-upgrade-783000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0819 04:32:28.796869   18442 out.go:177]   - MINIKUBE_LOCATION=19479
	I0819 04:32:28.796900   18442 notify.go:220] Checking for updates...
	I0819 04:32:28.804817   18442 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19479-15750/kubeconfig
	I0819 04:32:28.807865   18442 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0819 04:32:28.810950   18442 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0819 04:32:28.813900   18442 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19479-15750/.minikube
	I0819 04:32:28.816906   18442 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0819 04:32:28.820131   18442 config.go:182] Loaded profile config "stopped-upgrade-783000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0819 04:32:28.823830   18442 out.go:177] * Kubernetes 1.31.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.0
	I0819 04:32:28.826876   18442 driver.go:392] Setting default libvirt URI to qemu:///system
	I0819 04:32:28.830792   18442 out.go:177] * Using the qemu2 driver based on existing profile
	I0819 04:32:28.837855   18442 start.go:297] selected driver: qemu2
	I0819 04:32:28.837860   18442 start.go:901] validating driver "qemu2" against &{Name:stopped-upgrade-783000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:53420 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-783000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0819 04:32:28.837911   18442 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0819 04:32:28.840668   18442 cni.go:84] Creating CNI manager for ""
	I0819 04:32:28.840689   18442 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0819 04:32:28.840723   18442 start.go:340] cluster config:
	{Name:stopped-upgrade-783000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:53420 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-783000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0819 04:32:28.840779   18442 iso.go:125] acquiring lock: {Name:mk82d926f4bb778c7e93ba3bc4244459b219e238 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 04:32:28.847862   18442 out.go:177] * Starting "stopped-upgrade-783000" primary control-plane node in "stopped-upgrade-783000" cluster
	I0819 04:32:28.851684   18442 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0819 04:32:28.851699   18442 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19479-15750/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4
	I0819 04:32:28.851705   18442 cache.go:56] Caching tarball of preloaded images
	I0819 04:32:28.851758   18442 preload.go:172] Found /Users/jenkins/minikube-integration/19479-15750/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0819 04:32:28.851772   18442 cache.go:59] Finished verifying existence of preloaded tar for v1.24.1 on docker
	I0819 04:32:28.851826   18442 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19479-15750/.minikube/profiles/stopped-upgrade-783000/config.json ...
	I0819 04:32:28.852255   18442 start.go:360] acquireMachinesLock for stopped-upgrade-783000: {Name:mkd71805c0324a991576936a19be749d2702d472 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 04:32:28.852288   18442 start.go:364] duration metric: took 27.583µs to acquireMachinesLock for "stopped-upgrade-783000"
	I0819 04:32:28.852298   18442 start.go:96] Skipping create...Using existing machine configuration
	I0819 04:32:28.852303   18442 fix.go:54] fixHost starting: 
	I0819 04:32:28.852414   18442 fix.go:112] recreateIfNeeded on stopped-upgrade-783000: state=Stopped err=<nil>
	W0819 04:32:28.852422   18442 fix.go:138] unexpected machine state, will restart: <nil>
	I0819 04:32:28.856937   18442 out.go:177] * Restarting existing qemu2 VM for "stopped-upgrade-783000" ...
	I0819 04:32:30.190031   17996 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 04:32:30.190434   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 04:32:30.227107   17996 logs.go:276] 2 containers: [eca2d24d2934 624b0550691c]
	I0819 04:32:30.227238   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 04:32:30.252555   17996 logs.go:276] 2 containers: [7bbc43a24759 e4bff7533378]
	I0819 04:32:30.252644   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 04:32:30.266809   17996 logs.go:276] 1 containers: [0aefd6691079]
	I0819 04:32:30.266884   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 04:32:30.279412   17996 logs.go:276] 2 containers: [43eabe1b4048 8f857b9db64e]
	I0819 04:32:30.279496   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 04:32:30.290640   17996 logs.go:276] 1 containers: [b8c8b382d646]
	I0819 04:32:30.290707   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 04:32:30.301650   17996 logs.go:276] 2 containers: [76c5e8a3e103 a4f14f99ca97]
	I0819 04:32:30.301719   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 04:32:30.312552   17996 logs.go:276] 0 containers: []
	W0819 04:32:30.312563   17996 logs.go:278] No container was found matching "kindnet"
	I0819 04:32:30.312612   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 04:32:30.323268   17996 logs.go:276] 2 containers: [849a5f11363e 45c03f07359d]
	I0819 04:32:30.323295   17996 logs.go:123] Gathering logs for coredns [0aefd6691079] ...
	I0819 04:32:30.323306   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0aefd6691079"
	I0819 04:32:30.336291   17996 logs.go:123] Gathering logs for kubelet ...
	I0819 04:32:30.336303   17996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 04:32:30.375820   17996 logs.go:123] Gathering logs for kube-apiserver [624b0550691c] ...
	I0819 04:32:30.375833   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 624b0550691c"
	I0819 04:32:30.396198   17996 logs.go:123] Gathering logs for kube-scheduler [43eabe1b4048] ...
	I0819 04:32:30.396211   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 43eabe1b4048"
	I0819 04:32:30.408825   17996 logs.go:123] Gathering logs for Docker ...
	I0819 04:32:30.408839   17996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 04:32:30.434340   17996 logs.go:123] Gathering logs for etcd [7bbc43a24759] ...
	I0819 04:32:30.434349   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7bbc43a24759"
	I0819 04:32:30.451492   17996 logs.go:123] Gathering logs for kube-proxy [b8c8b382d646] ...
	I0819 04:32:30.451505   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8c8b382d646"
	I0819 04:32:30.468150   17996 logs.go:123] Gathering logs for kube-controller-manager [76c5e8a3e103] ...
	I0819 04:32:30.468163   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 76c5e8a3e103"
	I0819 04:32:30.486093   17996 logs.go:123] Gathering logs for storage-provisioner [849a5f11363e] ...
	I0819 04:32:30.486104   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 849a5f11363e"
	I0819 04:32:30.497403   17996 logs.go:123] Gathering logs for container status ...
	I0819 04:32:30.497415   17996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 04:32:28.864859   18442 qemu.go:418] Using hvf for hardware acceleration
	I0819 04:32:28.864954   18442 main.go:141] libmachine: executing: qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/9.0.2/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/stopped-upgrade-783000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19479-15750/.minikube/machines/stopped-upgrade-783000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/stopped-upgrade-783000/qemu.pid -nic user,model=virtio,hostfwd=tcp::53385-:22,hostfwd=tcp::53386-:2376,hostname=stopped-upgrade-783000 -daemonize /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/stopped-upgrade-783000/disk.qcow2
	I0819 04:32:28.911214   18442 main.go:141] libmachine: STDOUT: 
	I0819 04:32:28.911257   18442 main.go:141] libmachine: STDERR: 
	I0819 04:32:28.911264   18442 main.go:141] libmachine: Waiting for VM to start (ssh -p 53385 docker@127.0.0.1)...
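	For readability, the single-line qemu-system-aarch64 invocation logged at 04:32:28.864954 is reflowed below with identical flags and paths ($M stands in for the long per-machine directory):
	
	# Same invocation, reflowed. Key points: UEFI firmware mapped in via pflash,
	# hvf (Hypervisor.framework) acceleration, and user-mode networking that
	# forwards host port 53385 to guest SSH (22) and 53386 to Docker TLS (2376).
	M=/Users/jenkins/minikube-integration/19479-15750/.minikube/machines/stopped-upgrade-783000
	qemu-system-aarch64 \
	  -M virt,highmem=off \
	  -cpu host \
	  -drive file=/opt/homebrew/Cellar/qemu/9.0.2/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash \
	  -display none \
	  -accel hvf \
	  -m 2200 -smp 2 -boot d \
	  -cdrom $M/boot2docker.iso \
	  -qmp unix:$M/monitor,server,nowait \
	  -pidfile $M/qemu.pid \
	  -nic user,model=virtio,hostfwd=tcp::53385-:22,hostfwd=tcp::53386-:2376,hostname=stopped-upgrade-783000 \
	  -daemonize \
	  $M/disk.qcow2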
	I0819 04:32:30.510675   17996 logs.go:123] Gathering logs for dmesg ...
	I0819 04:32:30.510688   17996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 04:32:30.515076   17996 logs.go:123] Gathering logs for describe nodes ...
	I0819 04:32:30.515084   17996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 04:32:30.550454   17996 logs.go:123] Gathering logs for kube-scheduler [8f857b9db64e] ...
	I0819 04:32:30.550467   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f857b9db64e"
	I0819 04:32:30.565311   17996 logs.go:123] Gathering logs for kube-controller-manager [a4f14f99ca97] ...
	I0819 04:32:30.565320   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a4f14f99ca97"
	I0819 04:32:30.576856   17996 logs.go:123] Gathering logs for storage-provisioner [45c03f07359d] ...
	I0819 04:32:30.576868   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45c03f07359d"
	I0819 04:32:30.588084   17996 logs.go:123] Gathering logs for kube-apiserver [eca2d24d2934] ...
	I0819 04:32:30.588092   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eca2d24d2934"
	I0819 04:32:30.602432   17996 logs.go:123] Gathering logs for etcd [e4bff7533378] ...
	I0819 04:32:30.602442   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4bff7533378"
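	The block above (and its repetitions below) is minikube's per-component log sweep: for each expected control-plane component it lists matching containers by name filter, then tails each one. Written out as a shell loop over the component names seen in this log, the sweep is:
	
	# Sketch of the sweep: find container IDs (running or exited) whose name
	# matches k8s_<component> on the guest's Docker daemon, then tail each.
	for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	         kube-controller-manager kindnet storage-provisioner; do
	  for id in $(docker ps -a --filter="name=k8s_${c}" --format='{{.ID}}'); do
	    echo "==> ${c} ${id} <=="
	    docker logs --tail 400 "$id"
	  done
	done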
	I0819 04:32:33.126078   17996 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 04:32:38.128671   17996 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 04:32:38.128874   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 04:32:38.148732   17996 logs.go:276] 2 containers: [eca2d24d2934 624b0550691c]
	I0819 04:32:38.148810   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 04:32:38.159718   17996 logs.go:276] 2 containers: [7bbc43a24759 e4bff7533378]
	I0819 04:32:38.159802   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 04:32:38.170851   17996 logs.go:276] 1 containers: [0aefd6691079]
	I0819 04:32:38.170928   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 04:32:38.185752   17996 logs.go:276] 2 containers: [43eabe1b4048 8f857b9db64e]
	I0819 04:32:38.185816   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 04:32:38.196935   17996 logs.go:276] 1 containers: [b8c8b382d646]
	I0819 04:32:38.197004   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 04:32:38.207830   17996 logs.go:276] 2 containers: [76c5e8a3e103 a4f14f99ca97]
	I0819 04:32:38.207899   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 04:32:38.217769   17996 logs.go:276] 0 containers: []
	W0819 04:32:38.217780   17996 logs.go:278] No container was found matching "kindnet"
	I0819 04:32:38.217835   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 04:32:38.228420   17996 logs.go:276] 2 containers: [849a5f11363e 45c03f07359d]
	I0819 04:32:38.228438   17996 logs.go:123] Gathering logs for etcd [e4bff7533378] ...
	I0819 04:32:38.228445   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4bff7533378"
	I0819 04:32:38.246908   17996 logs.go:123] Gathering logs for coredns [0aefd6691079] ...
	I0819 04:32:38.246919   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0aefd6691079"
	I0819 04:32:38.259071   17996 logs.go:123] Gathering logs for kube-proxy [b8c8b382d646] ...
	I0819 04:32:38.259085   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8c8b382d646"
	I0819 04:32:38.271781   17996 logs.go:123] Gathering logs for Docker ...
	I0819 04:32:38.271791   17996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 04:32:38.296219   17996 logs.go:123] Gathering logs for describe nodes ...
	I0819 04:32:38.296226   17996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 04:32:38.331514   17996 logs.go:123] Gathering logs for etcd [7bbc43a24759] ...
	I0819 04:32:38.331526   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7bbc43a24759"
	I0819 04:32:38.353982   17996 logs.go:123] Gathering logs for kube-controller-manager [76c5e8a3e103] ...
	I0819 04:32:38.353996   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 76c5e8a3e103"
	I0819 04:32:38.375361   17996 logs.go:123] Gathering logs for storage-provisioner [849a5f11363e] ...
	I0819 04:32:38.375373   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 849a5f11363e"
	I0819 04:32:38.389750   17996 logs.go:123] Gathering logs for container status ...
	I0819 04:32:38.389764   17996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 04:32:38.402322   17996 logs.go:123] Gathering logs for kubelet ...
	I0819 04:32:38.402334   17996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 04:32:38.442449   17996 logs.go:123] Gathering logs for kube-apiserver [eca2d24d2934] ...
	I0819 04:32:38.442459   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eca2d24d2934"
	I0819 04:32:38.456571   17996 logs.go:123] Gathering logs for kube-controller-manager [a4f14f99ca97] ...
	I0819 04:32:38.456584   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a4f14f99ca97"
	I0819 04:32:38.473527   17996 logs.go:123] Gathering logs for dmesg ...
	I0819 04:32:38.473538   17996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 04:32:38.478784   17996 logs.go:123] Gathering logs for kube-scheduler [43eabe1b4048] ...
	I0819 04:32:38.478793   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 43eabe1b4048"
	I0819 04:32:38.491196   17996 logs.go:123] Gathering logs for storage-provisioner [45c03f07359d] ...
	I0819 04:32:38.491207   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45c03f07359d"
	I0819 04:32:38.503039   17996 logs.go:123] Gathering logs for kube-apiserver [624b0550691c] ...
	I0819 04:32:38.503050   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 624b0550691c"
	I0819 04:32:38.523586   17996 logs.go:123] Gathering logs for kube-scheduler [8f857b9db64e] ...
	I0819 04:32:38.523598   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f857b9db64e"
	I0819 04:32:41.039804   17996 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 04:32:46.042125   17996 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 04:32:46.042597   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 04:32:46.082690   17996 logs.go:276] 2 containers: [eca2d24d2934 624b0550691c]
	I0819 04:32:46.082832   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 04:32:46.104223   17996 logs.go:276] 2 containers: [7bbc43a24759 e4bff7533378]
	I0819 04:32:46.104324   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 04:32:46.121217   17996 logs.go:276] 1 containers: [0aefd6691079]
	I0819 04:32:46.121298   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 04:32:46.133927   17996 logs.go:276] 2 containers: [43eabe1b4048 8f857b9db64e]
	I0819 04:32:46.134006   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 04:32:46.144311   17996 logs.go:276] 1 containers: [b8c8b382d646]
	I0819 04:32:46.144375   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 04:32:46.158738   17996 logs.go:276] 2 containers: [76c5e8a3e103 a4f14f99ca97]
	I0819 04:32:46.158810   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 04:32:46.169437   17996 logs.go:276] 0 containers: []
	W0819 04:32:46.169447   17996 logs.go:278] No container was found matching "kindnet"
	I0819 04:32:46.169501   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 04:32:46.179632   17996 logs.go:276] 2 containers: [849a5f11363e 45c03f07359d]
	I0819 04:32:46.179651   17996 logs.go:123] Gathering logs for kubelet ...
	I0819 04:32:46.179656   17996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 04:32:46.218666   17996 logs.go:123] Gathering logs for describe nodes ...
	I0819 04:32:46.218685   17996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 04:32:46.256649   17996 logs.go:123] Gathering logs for kube-apiserver [eca2d24d2934] ...
	I0819 04:32:46.256662   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eca2d24d2934"
	I0819 04:32:46.270659   17996 logs.go:123] Gathering logs for Docker ...
	I0819 04:32:46.270668   17996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 04:32:46.294415   17996 logs.go:123] Gathering logs for kube-scheduler [43eabe1b4048] ...
	I0819 04:32:46.294423   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 43eabe1b4048"
	I0819 04:32:46.306256   17996 logs.go:123] Gathering logs for kube-scheduler [8f857b9db64e] ...
	I0819 04:32:46.306269   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f857b9db64e"
	I0819 04:32:46.320928   17996 logs.go:123] Gathering logs for kube-proxy [b8c8b382d646] ...
	I0819 04:32:46.320943   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8c8b382d646"
	I0819 04:32:46.332734   17996 logs.go:123] Gathering logs for storage-provisioner [849a5f11363e] ...
	I0819 04:32:46.332748   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 849a5f11363e"
	I0819 04:32:46.343754   17996 logs.go:123] Gathering logs for kube-apiserver [624b0550691c] ...
	I0819 04:32:46.343764   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 624b0550691c"
	I0819 04:32:46.367898   17996 logs.go:123] Gathering logs for etcd [7bbc43a24759] ...
	I0819 04:32:46.367908   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7bbc43a24759"
	I0819 04:32:46.384746   17996 logs.go:123] Gathering logs for kube-controller-manager [a4f14f99ca97] ...
	I0819 04:32:46.384758   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a4f14f99ca97"
	I0819 04:32:46.403574   17996 logs.go:123] Gathering logs for container status ...
	I0819 04:32:46.403587   17996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 04:32:46.415949   17996 logs.go:123] Gathering logs for dmesg ...
	I0819 04:32:46.415960   17996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 04:32:46.420220   17996 logs.go:123] Gathering logs for etcd [e4bff7533378] ...
	I0819 04:32:46.420230   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4bff7533378"
	I0819 04:32:46.452043   17996 logs.go:123] Gathering logs for coredns [0aefd6691079] ...
	I0819 04:32:46.452060   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0aefd6691079"
	I0819 04:32:46.467482   17996 logs.go:123] Gathering logs for kube-controller-manager [76c5e8a3e103] ...
	I0819 04:32:46.467492   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 76c5e8a3e103"
	I0819 04:32:46.487574   17996 logs.go:123] Gathering logs for storage-provisioner [45c03f07359d] ...
	I0819 04:32:46.487584   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45c03f07359d"
	I0819 04:32:49.006261   17996 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
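	The timestamps show each healthz probe giving up after roughly five seconds (04:32:33 to 04:32:38, 04:32:41 to 04:32:46) before another log sweep begins. A manual equivalent of the probe, runnable from inside the guest:
	
	# -k skips certificate verification; --max-time mirrors the ~5 s client
	# timeout visible in the log timestamps.
	curl -k --max-time 5 https://10.0.2.15:8443/healthz
	# A healthy apiserver answers "ok"; here the request times out, so
	# minikube keeps cycling between probing and gathering component logs.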
	I0819 04:32:49.091104   18442 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19479-15750/.minikube/profiles/stopped-upgrade-783000/config.json ...
	I0819 04:32:49.091952   18442 machine.go:93] provisionDockerMachine start ...
	I0819 04:32:49.092144   18442 main.go:141] libmachine: Using SSH client type: native
	I0819 04:32:49.092746   18442 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100c045a0] 0x100c06e00 <nil>  [] 0s} localhost 53385 <nil> <nil>}
	I0819 04:32:49.092762   18442 main.go:141] libmachine: About to run SSH command:
	hostname
	I0819 04:32:49.183271   18442 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0819 04:32:49.183307   18442 buildroot.go:166] provisioning hostname "stopped-upgrade-783000"
	I0819 04:32:49.183444   18442 main.go:141] libmachine: Using SSH client type: native
	I0819 04:32:49.183638   18442 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100c045a0] 0x100c06e00 <nil>  [] 0s} localhost 53385 <nil> <nil>}
	I0819 04:32:49.183646   18442 main.go:141] libmachine: About to run SSH command:
	sudo hostname stopped-upgrade-783000 && echo "stopped-upgrade-783000" | sudo tee /etc/hostname
	I0819 04:32:49.262915   18442 main.go:141] libmachine: SSH cmd err, output: <nil>: stopped-upgrade-783000
	
	I0819 04:32:49.262978   18442 main.go:141] libmachine: Using SSH client type: native
	I0819 04:32:49.263113   18442 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100c045a0] 0x100c06e00 <nil>  [] 0s} localhost 53385 <nil> <nil>}
	I0819 04:32:49.263122   18442 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sstopped-upgrade-783000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 stopped-upgrade-783000/g' /etc/hosts;
				else 
					echo '127.0.1.1 stopped-upgrade-783000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0819 04:32:49.333341   18442 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0819 04:32:49.333354   18442 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19479-15750/.minikube CaCertPath:/Users/jenkins/minikube-integration/19479-15750/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19479-15750/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19479-15750/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19479-15750/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19479-15750/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19479-15750/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19479-15750/.minikube}
	I0819 04:32:49.333372   18442 buildroot.go:174] setting up certificates
	I0819 04:32:49.333381   18442 provision.go:84] configureAuth start
	I0819 04:32:49.333386   18442 provision.go:143] copyHostCerts
	I0819 04:32:49.333473   18442 exec_runner.go:144] found /Users/jenkins/minikube-integration/19479-15750/.minikube/ca.pem, removing ...
	I0819 04:32:49.333484   18442 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19479-15750/.minikube/ca.pem
	I0819 04:32:49.333601   18442 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19479-15750/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19479-15750/.minikube/ca.pem (1082 bytes)
	I0819 04:32:49.333807   18442 exec_runner.go:144] found /Users/jenkins/minikube-integration/19479-15750/.minikube/cert.pem, removing ...
	I0819 04:32:49.333812   18442 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19479-15750/.minikube/cert.pem
	I0819 04:32:49.333876   18442 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19479-15750/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19479-15750/.minikube/cert.pem (1123 bytes)
	I0819 04:32:49.334009   18442 exec_runner.go:144] found /Users/jenkins/minikube-integration/19479-15750/.minikube/key.pem, removing ...
	I0819 04:32:49.334014   18442 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19479-15750/.minikube/key.pem
	I0819 04:32:49.334068   18442 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19479-15750/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19479-15750/.minikube/key.pem (1675 bytes)
	I0819 04:32:49.334169   18442 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19479-15750/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19479-15750/.minikube/certs/ca-key.pem org=jenkins.stopped-upgrade-783000 san=[127.0.0.1 localhost minikube stopped-upgrade-783000]
	I0819 04:32:49.521562   18442 provision.go:177] copyRemoteCerts
	I0819 04:32:49.521617   18442 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0819 04:32:49.521630   18442 sshutil.go:53] new ssh client: &{IP:localhost Port:53385 SSHKeyPath:/Users/jenkins/minikube-integration/19479-15750/.minikube/machines/stopped-upgrade-783000/id_rsa Username:docker}
	I0819 04:32:49.557389   18442 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0819 04:32:49.564590   18442 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19479-15750/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0819 04:32:49.571847   18442 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0819 04:32:49.578556   18442 provision.go:87] duration metric: took 245.176ms to configureAuth
	I0819 04:32:49.578565   18442 buildroot.go:189] setting minikube options for container-runtime
	I0819 04:32:49.578689   18442 config.go:182] Loaded profile config "stopped-upgrade-783000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0819 04:32:49.578732   18442 main.go:141] libmachine: Using SSH client type: native
	I0819 04:32:49.578827   18442 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100c045a0] 0x100c06e00 <nil>  [] 0s} localhost 53385 <nil> <nil>}
	I0819 04:32:49.578831   18442 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0819 04:32:49.646485   18442 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0819 04:32:49.646499   18442 buildroot.go:70] root file system type: tmpfs
	I0819 04:32:49.646547   18442 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0819 04:32:49.646610   18442 main.go:141] libmachine: Using SSH client type: native
	I0819 04:32:49.646733   18442 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100c045a0] 0x100c06e00 <nil>  [] 0s} localhost 53385 <nil> <nil>}
	I0819 04:32:49.646767   18442 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0819 04:32:49.713775   18442 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0819 04:32:49.713820   18442 main.go:141] libmachine: Using SSH client type: native
	I0819 04:32:49.713928   18442 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100c045a0] 0x100c06e00 <nil>  [] 0s} localhost 53385 <nil> <nil>}
	I0819 04:32:49.713936   18442 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0819 04:32:50.089776   18442 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0819 04:32:50.089790   18442 machine.go:96] duration metric: took 997.849584ms to provisionDockerMachine
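	The command issued at 04:32:49.713936 is a compare-then-swap: the candidate unit is written to docker.service.new, and only when it differs from the existing unit (or, as here, when no unit exists yet) is it moved into place and the daemon reloaded. The reusable shape of that idiom:
	
	# Install a config file only if it changed; diff exits non-zero on any
	# difference (or a missing target), which triggers the swap and reload.
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new \
	  || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; \
	       sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }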
	I0819 04:32:50.089797   18442 start.go:293] postStartSetup for "stopped-upgrade-783000" (driver="qemu2")
	I0819 04:32:50.089803   18442 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0819 04:32:50.089873   18442 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0819 04:32:50.089883   18442 sshutil.go:53] new ssh client: &{IP:localhost Port:53385 SSHKeyPath:/Users/jenkins/minikube-integration/19479-15750/.minikube/machines/stopped-upgrade-783000/id_rsa Username:docker}
	I0819 04:32:50.124418   18442 ssh_runner.go:195] Run: cat /etc/os-release
	I0819 04:32:50.125649   18442 info.go:137] Remote host: Buildroot 2021.02.12
	I0819 04:32:50.125658   18442 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19479-15750/.minikube/addons for local assets ...
	I0819 04:32:50.125744   18442 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19479-15750/.minikube/files for local assets ...
	I0819 04:32:50.125879   18442 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19479-15750/.minikube/files/etc/ssl/certs/162402.pem -> 162402.pem in /etc/ssl/certs
	I0819 04:32:50.126012   18442 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0819 04:32:50.128876   18442 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19479-15750/.minikube/files/etc/ssl/certs/162402.pem --> /etc/ssl/certs/162402.pem (1708 bytes)
	I0819 04:32:50.135382   18442 start.go:296] duration metric: took 45.58175ms for postStartSetup
	I0819 04:32:50.135394   18442 fix.go:56] duration metric: took 21.283575125s for fixHost
	I0819 04:32:50.135428   18442 main.go:141] libmachine: Using SSH client type: native
	I0819 04:32:50.135526   18442 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100c045a0] 0x100c06e00 <nil>  [] 0s} localhost 53385 <nil> <nil>}
	I0819 04:32:50.135535   18442 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0819 04:32:50.202164   18442 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724067170.492677546
	
	I0819 04:32:50.202173   18442 fix.go:216] guest clock: 1724067170.492677546
	I0819 04:32:50.202178   18442 fix.go:229] Guest: 2024-08-19 04:32:50.492677546 -0700 PDT Remote: 2024-08-19 04:32:50.135396 -0700 PDT m=+21.407565126 (delta=357.281546ms)
	I0819 04:32:50.202191   18442 fix.go:200] guest clock delta is within tolerance: 357.281546ms
	I0819 04:32:50.202194   18442 start.go:83] releasing machines lock for "stopped-upgrade-783000", held for 21.350385916s
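	The fix.go lines above compare the guest clock (read via date +%s.%N over SSH) against the host clock; the 357 ms delta is within tolerance, so the guest clock is left alone. A rough manual version of the same check (SSH port 53385 is the hostfwd port from this run; python3 is used host-side because BSD date lacks %N):
	
	guest=$(ssh -p 53385 docker@127.0.0.1 'date +%s.%N')   # guest (Linux) clock
	host=$(python3 -c 'import time; print(time.time())')   # host (macOS) clock
	python3 -c "print('delta:', abs($host - $guest), 's')"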
	I0819 04:32:50.202263   18442 ssh_runner.go:195] Run: cat /version.json
	I0819 04:32:50.202274   18442 sshutil.go:53] new ssh client: &{IP:localhost Port:53385 SSHKeyPath:/Users/jenkins/minikube-integration/19479-15750/.minikube/machines/stopped-upgrade-783000/id_rsa Username:docker}
	I0819 04:32:50.202263   18442 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0819 04:32:50.202315   18442 sshutil.go:53] new ssh client: &{IP:localhost Port:53385 SSHKeyPath:/Users/jenkins/minikube-integration/19479-15750/.minikube/machines/stopped-upgrade-783000/id_rsa Username:docker}
	W0819 04:32:50.202914   18442 sshutil.go:64] dial failure (will retry): dial tcp [::1]:53385: connect: connection refused
	I0819 04:32:50.202936   18442 retry.go:31] will retry after 301.263493ms: dial tcp [::1]:53385: connect: connection refused
	W0819 04:32:50.561695   18442 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0819 04:32:50.561902   18442 ssh_runner.go:195] Run: systemctl --version
	I0819 04:32:50.566679   18442 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0819 04:32:50.570593   18442 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0819 04:32:50.570650   18442 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0819 04:32:50.577241   18442 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0819 04:32:50.586098   18442 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
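	The two find/sed passes above force any pre-existing bridge or podman CNI config onto the 10.244.0.0/16 pod subnet (and drop IPv6 entries in the bridge case). A quick check that the rewrite landed (illustrative; 10.88.0.0/16 is podman's usual default subnet before the rewrite):
	
	sudo grep -E '"(subnet|gateway)"' /etc/cni/net.d/87-podman-bridge.conflist
	# Expect: "subnet": "10.244.0.0/16" and "gateway": "10.244.0.1"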
	I0819 04:32:50.586126   18442 start.go:495] detecting cgroup driver to use...
	I0819 04:32:50.586265   18442 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0819 04:32:50.597383   18442 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.7"|' /etc/containerd/config.toml"
	I0819 04:32:50.602124   18442 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0819 04:32:50.605998   18442 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0819 04:32:50.606032   18442 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0819 04:32:50.610018   18442 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0819 04:32:50.613615   18442 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0819 04:32:50.617225   18442 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0819 04:32:50.620699   18442 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0819 04:32:50.624236   18442 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0819 04:32:50.627606   18442 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0819 04:32:50.630371   18442 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0819 04:32:50.633396   18442 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0819 04:32:50.636657   18442 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0819 04:32:50.639653   18442 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 04:32:50.715749   18442 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0819 04:32:50.722875   18442 start.go:495] detecting cgroup driver to use...
	I0819 04:32:50.722953   18442 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0819 04:32:50.727925   18442 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0819 04:32:50.733001   18442 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0819 04:32:50.741295   18442 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0819 04:32:50.746319   18442 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0819 04:32:50.751037   18442 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0819 04:32:50.793185   18442 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0819 04:32:50.798128   18442 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0819 04:32:50.803214   18442 ssh_runner.go:195] Run: which cri-dockerd
	I0819 04:32:50.804524   18442 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0819 04:32:50.807363   18442 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0819 04:32:50.812235   18442 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0819 04:32:50.902553   18442 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0819 04:32:50.980197   18442 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0819 04:32:50.980257   18442 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0819 04:32:50.985670   18442 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 04:32:51.063709   18442 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0819 04:32:52.220291   18442 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.156593208s)
	I0819 04:32:52.220362   18442 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0819 04:32:52.224696   18442 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0819 04:32:52.229280   18442 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0819 04:32:52.308145   18442 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0819 04:32:52.384708   18442 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 04:32:52.463045   18442 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0819 04:32:52.468572   18442 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0819 04:32:52.472969   18442 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 04:32:52.552146   18442 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0819 04:32:52.589309   18442 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0819 04:32:52.589388   18442 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0819 04:32:52.591705   18442 start.go:563] Will wait 60s for crictl version
	I0819 04:32:52.591763   18442 ssh_runner.go:195] Run: which crictl
	I0819 04:32:52.593222   18442 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0819 04:32:52.608935   18442 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.16
	RuntimeApiVersion:  1.41.0
	I0819 04:32:52.609003   18442 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0819 04:32:52.626157   18442 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0819 04:32:52.644728   18442 out.go:235] * Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	I0819 04:32:52.644852   18442 ssh_runner.go:195] Run: grep 10.0.2.2	host.minikube.internal$ /etc/hosts
	I0819 04:32:52.646294   18442 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "10.0.2.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0819 04:32:52.649929   18442 kubeadm.go:883] updating cluster {Name:stopped-upgrade-783000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:53420 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-783000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s} ...
	I0819 04:32:52.649974   18442 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0819 04:32:52.650015   18442 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0819 04:32:52.663145   18442 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0819 04:32:52.663155   18442 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
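	Worth noting: the guest's Docker cache holds the right images but under the legacy k8s.gcr.io prefix, while this minikube (v1.33.1) checks for registry.k8s.io names, so the preload check fails and the tarball is copied over again below. Retagging would have made the same bytes pass the check, e.g.:
	
	# Illustrative only - minikube does not do this here; it re-extracts the
	# preload instead (see the scp/tar lines that follow).
	docker tag k8s.gcr.io/kube-apiserver:v1.24.1 registry.k8s.io/kube-apiserver:v1.24.1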
	I0819 04:32:52.663199   18442 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0819 04:32:52.666156   18442 ssh_runner.go:195] Run: which lz4
	I0819 04:32:52.667508   18442 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0819 04:32:52.668785   18442 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0819 04:32:52.668795   18442 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19479-15750/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (359514331 bytes)
	I0819 04:32:53.600917   18442 docker.go:649] duration metric: took 933.466917ms to copy over tarball
	I0819 04:32:53.600974   18442 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
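	The preload is an lz4-compressed tar extracted into /var with security xattrs preserved, which is why it lands as ready-to-use Docker image layers rather than needing a pull. To peek inside the cached tarball on the host (assumes the lz4 CLI is installed):
	
	# List the first few entries without extracting; this is the same file
	# that was just scp'd to /preloaded.tar.lz4 in the guest.
	lz4 -dc /Users/jenkins/minikube-integration/19479-15750/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 \
	  | tar -tf - | head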
	I0819 04:32:54.008733   17996 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 04:32:54.008827   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 04:32:54.021379   17996 logs.go:276] 2 containers: [eca2d24d2934 624b0550691c]
	I0819 04:32:54.021470   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 04:32:54.036415   17996 logs.go:276] 2 containers: [7bbc43a24759 e4bff7533378]
	I0819 04:32:54.036625   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 04:32:54.051061   17996 logs.go:276] 1 containers: [0aefd6691079]
	I0819 04:32:54.051142   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 04:32:54.062892   17996 logs.go:276] 2 containers: [43eabe1b4048 8f857b9db64e]
	I0819 04:32:54.062980   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 04:32:54.075064   17996 logs.go:276] 1 containers: [b8c8b382d646]
	I0819 04:32:54.075135   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 04:32:54.086940   17996 logs.go:276] 2 containers: [76c5e8a3e103 a4f14f99ca97]
	I0819 04:32:54.087024   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 04:32:54.099601   17996 logs.go:276] 0 containers: []
	W0819 04:32:54.099614   17996 logs.go:278] No container was found matching "kindnet"
	I0819 04:32:54.099679   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 04:32:54.111454   17996 logs.go:276] 2 containers: [849a5f11363e 45c03f07359d]
	I0819 04:32:54.111474   17996 logs.go:123] Gathering logs for kube-apiserver [eca2d24d2934] ...
	I0819 04:32:54.111481   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eca2d24d2934"
	I0819 04:32:54.127269   17996 logs.go:123] Gathering logs for etcd [7bbc43a24759] ...
	I0819 04:32:54.127285   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7bbc43a24759"
	I0819 04:32:54.147225   17996 logs.go:123] Gathering logs for kube-controller-manager [a4f14f99ca97] ...
	I0819 04:32:54.147238   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a4f14f99ca97"
	I0819 04:32:54.163398   17996 logs.go:123] Gathering logs for storage-provisioner [849a5f11363e] ...
	I0819 04:32:54.163411   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 849a5f11363e"
	I0819 04:32:54.176161   17996 logs.go:123] Gathering logs for kubelet ...
	I0819 04:32:54.176178   17996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 04:32:54.221508   17996 logs.go:123] Gathering logs for dmesg ...
	I0819 04:32:54.221528   17996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 04:32:54.229102   17996 logs.go:123] Gathering logs for describe nodes ...
	I0819 04:32:54.229189   17996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 04:32:54.273539   17996 logs.go:123] Gathering logs for kube-scheduler [43eabe1b4048] ...
	I0819 04:32:54.273552   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 43eabe1b4048"
	I0819 04:32:54.287096   17996 logs.go:123] Gathering logs for kube-scheduler [8f857b9db64e] ...
	I0819 04:32:54.287109   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f857b9db64e"
	I0819 04:32:54.302903   17996 logs.go:123] Gathering logs for kube-proxy [b8c8b382d646] ...
	I0819 04:32:54.302914   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8c8b382d646"
	I0819 04:32:54.316266   17996 logs.go:123] Gathering logs for container status ...
	I0819 04:32:54.316278   17996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 04:32:54.330544   17996 logs.go:123] Gathering logs for etcd [e4bff7533378] ...
	I0819 04:32:54.330556   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4bff7533378"
	I0819 04:32:54.351979   17996 logs.go:123] Gathering logs for coredns [0aefd6691079] ...
	I0819 04:32:54.351999   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0aefd6691079"
	I0819 04:32:54.365064   17996 logs.go:123] Gathering logs for kube-apiserver [624b0550691c] ...
	I0819 04:32:54.365076   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 624b0550691c"
	I0819 04:32:54.391357   17996 logs.go:123] Gathering logs for kube-controller-manager [76c5e8a3e103] ...
	I0819 04:32:54.391385   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 76c5e8a3e103"
	I0819 04:32:54.413435   17996 logs.go:123] Gathering logs for storage-provisioner [45c03f07359d] ...
	I0819 04:32:54.413449   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45c03f07359d"
	I0819 04:32:54.426882   17996 logs.go:123] Gathering logs for Docker ...
	I0819 04:32:54.426895   17996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 04:32:54.774479   18442 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.173515375s)
	I0819 04:32:54.774494   18442 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0819 04:32:54.790059   18442 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0819 04:32:54.793394   18442 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2523 bytes)
	I0819 04:32:54.798748   18442 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 04:32:54.877549   18442 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0819 04:32:56.567891   18442 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.690334625s)
	I0819 04:32:56.568014   18442 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0819 04:32:56.579328   18442 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	<none>:<none>
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0819 04:32:56.579337   18442 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0819 04:32:56.579342   18442 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.1 registry.k8s.io/kube-controller-manager:v1.24.1 registry.k8s.io/kube-scheduler:v1.24.1 registry.k8s.io/kube-proxy:v1.24.1 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0819 04:32:56.583813   18442 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0819 04:32:56.585810   18442 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.24.1
	I0819 04:32:56.587815   18442 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0819 04:32:56.587871   18442 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.24.1
	I0819 04:32:56.590222   18442 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0819 04:32:56.590267   18442 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.1
	I0819 04:32:56.591546   18442 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0819 04:32:56.592248   18442 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.1
	I0819 04:32:56.594792   18442 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0819 04:32:56.594791   18442 image.go:135] retrieving image: registry.k8s.io/pause:3.7
	I0819 04:32:56.594963   18442 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0819 04:32:56.596752   18442 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0819 04:32:56.596839   18442 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.24.1
	I0819 04:32:56.597800   18442 image.go:178] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0819 04:32:56.597827   18442 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0819 04:32:56.598722   18442 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.1
	I0819 04:32:57.046007   18442 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.1
	I0819 04:32:57.049504   18442 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I0819 04:32:57.050792   18442 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I0819 04:32:57.051017   18442 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.1
	I0819 04:32:57.066370   18442 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.1" does not exist at hash "000c19baf6bba51ff7ae5449f4c7a16d9190cef0263f58070dbf62cea9c4982f" in container runtime
	I0819 04:32:57.066400   18442 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.24.1
	I0819 04:32:57.066493   18442 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.24.1
	I0819 04:32:57.072207   18442 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.1
	I0819 04:32:57.085041   18442 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.1
	W0819 04:32:57.090972   18442 image.go:283] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0819 04:32:57.091086   18442 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0819 04:32:57.093260   18442 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550" in container runtime
	I0819 04:32:57.093279   18442 docker.go:337] Removing image: registry.k8s.io/pause:3.7
	I0819 04:32:57.093285   18442 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "a9a710bb96df080e6b9c720eb85dc5b832ff84abf77263548d74fedec6466a5a" in container runtime
	I0819 04:32:57.093295   18442 docker.go:337] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0819 04:32:57.093321   18442 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.5.3-0
	I0819 04:32:57.093321   18442 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.7
	I0819 04:32:57.093394   18442 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.1" does not exist at hash "f61bbe9259d7caa580deb6c8e4bfd1780c7b5887efe3aa3adc7cc74f68a27c1b" in container runtime
	I0819 04:32:57.093407   18442 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0819 04:32:57.093430   18442 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.24.1
	I0819 04:32:57.111706   18442 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.1" does not exist at hash "7c5896a75862a8bc252122185a929cec1393db2c525f6440137d4fbf46bbf6f9" in container runtime
	I0819 04:32:57.111732   18442 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.24.1
	I0819 04:32:57.111791   18442 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.24.1
	I0819 04:32:57.111866   18442 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19479-15750/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1
	I0819 04:32:57.138827   18442 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.1" needs transfer: "registry.k8s.io/kube-proxy:v1.24.1" does not exist at hash "fcbd620bbac080b658602c597709b377cb2b3fec134a097a27f94cba9b2ed2fa" in container runtime
	I0819 04:32:57.138844   18442 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "6af7f860a8197bfa3fdb7dec2061aa33870253e87a1e91c492d55b8a4fd38d14" in container runtime
	I0819 04:32:57.138851   18442 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.24.1
	I0819 04:32:57.138854   18442 docker.go:337] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0819 04:32:57.138904   18442 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.24.1
	I0819 04:32:57.138904   18442 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0819 04:32:57.139767   18442 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19479-15750/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I0819 04:32:57.139792   18442 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19479-15750/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0819 04:32:57.139838   18442 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19479-15750/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1
	I0819 04:32:57.139870   18442 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0
	I0819 04:32:57.139870   18442 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I0819 04:32:57.154687   18442 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19479-15750/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1
	I0819 04:32:57.158483   18442 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19479-15750/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0819 04:32:57.158493   18442 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19479-15750/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1
	I0819 04:32:57.158507   18442 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.7: stat -c "%s %y" /var/lib/minikube/images/pause_3.7: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.7': No such file or directory
	I0819 04:32:57.158518   18442 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19479-15750/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 --> /var/lib/minikube/images/pause_3.7 (268288 bytes)
	I0819 04:32:57.158545   18442 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.5.3-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.5.3-0': No such file or directory
	I0819 04:32:57.158557   18442 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19479-15750/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 --> /var/lib/minikube/images/etcd_3.5.3-0 (81117184 bytes)
	I0819 04:32:57.158595   18442 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I0819 04:32:57.178441   18442 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.8.6: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.8.6': No such file or directory
	I0819 04:32:57.178473   18442 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19479-15750/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 --> /var/lib/minikube/images/coredns_v1.8.6 (12318720 bytes)
	W0819 04:32:57.190850   18442 image.go:283] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0819 04:32:57.190972   18442 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0819 04:32:57.202487   18442 docker.go:304] Loading image: /var/lib/minikube/images/pause_3.7
	I0819 04:32:57.202501   18442 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.7 | docker load"
	I0819 04:32:57.282977   18442 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19479-15750/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 from cache
	I0819 04:32:57.282998   18442 docker.go:304] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0819 04:32:57.283004   18442 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.8.6 | docker load"
	I0819 04:32:57.283016   18442 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I0819 04:32:57.283034   18442 docker.go:337] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0819 04:32:57.283086   18442 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0819 04:32:57.324393   18442 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19479-15750/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0819 04:32:57.324540   18442 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0819 04:32:57.404333   18442 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19479-15750/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I0819 04:32:57.404326   18442 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I0819 04:32:57.404372   18442 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19479-15750/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I0819 04:32:57.475477   18442 docker.go:304] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0819 04:32:57.475493   18442 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load"
	I0819 04:32:57.784928   18442 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19479-15750/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0819 04:32:57.784952   18442 docker.go:304] Loading image: /var/lib/minikube/images/etcd_3.5.3-0
	I0819 04:32:57.784962   18442 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/etcd_3.5.3-0 | docker load"
	I0819 04:32:57.918931   18442 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19479-15750/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 from cache
	I0819 04:32:57.918973   18442 cache_images.go:92] duration metric: took 1.339655041s to LoadCachedImages
	W0819 04:32:57.919017   18442 out.go:270] X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/19479-15750/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1: no such file or directory
	I0819 04:32:57.919022   18442 kubeadm.go:934] updating node { 10.0.2.15 8443 v1.24.1 docker true true} ...
	I0819 04:32:57.919076   18442 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=stopped-upgrade-783000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.0.2.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-783000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0819 04:32:57.919146   18442 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0819 04:32:57.933407   18442 cni.go:84] Creating CNI manager for ""
	I0819 04:32:57.933421   18442 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0819 04:32:57.933425   18442 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0819 04:32:57.933433   18442 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.0.2.15 APIServerPort:8443 KubernetesVersion:v1.24.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:stopped-upgrade-783000 NodeName:stopped-upgrade-783000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.0.2.15"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.0.2.15 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0819 04:32:57.933498   18442 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.0.2.15
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "stopped-upgrade-783000"
	  kubeletExtraArgs:
	    node-ip: 10.0.2.15
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.0.2.15"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0819 04:32:57.933554   18442 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.1
	I0819 04:32:57.936714   18442 binaries.go:44] Found k8s binaries, skipping transfer
	I0819 04:32:57.936744   18442 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0819 04:32:57.939619   18442 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I0819 04:32:57.944539   18442 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0819 04:32:57.949555   18442 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2096 bytes)
	I0819 04:32:57.954943   18442 ssh_runner.go:195] Run: grep 10.0.2.15	control-plane.minikube.internal$ /etc/hosts
	I0819 04:32:57.956182   18442 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "10.0.2.15	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0819 04:32:57.959862   18442 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 04:32:58.036406   18442 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0819 04:32:58.042574   18442 certs.go:68] Setting up /Users/jenkins/minikube-integration/19479-15750/.minikube/profiles/stopped-upgrade-783000 for IP: 10.0.2.15
	I0819 04:32:58.042584   18442 certs.go:194] generating shared ca certs ...
	I0819 04:32:58.042593   18442 certs.go:226] acquiring lock for ca certs: {Name:mk35a9cd01f436a7a54821e5f775d6ab16b5867a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 04:32:58.042769   18442 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19479-15750/.minikube/ca.key
	I0819 04:32:58.042822   18442 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19479-15750/.minikube/proxy-client-ca.key
	I0819 04:32:58.042827   18442 certs.go:256] generating profile certs ...
	I0819 04:32:58.042922   18442 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19479-15750/.minikube/profiles/stopped-upgrade-783000/client.key
	I0819 04:32:58.042962   18442 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19479-15750/.minikube/profiles/stopped-upgrade-783000/apiserver.key.bab2fd25
	I0819 04:32:58.042974   18442 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19479-15750/.minikube/profiles/stopped-upgrade-783000/apiserver.crt.bab2fd25 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.0.2.15]
	I0819 04:32:58.229792   18442 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19479-15750/.minikube/profiles/stopped-upgrade-783000/apiserver.crt.bab2fd25 ...
	I0819 04:32:58.229805   18442 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19479-15750/.minikube/profiles/stopped-upgrade-783000/apiserver.crt.bab2fd25: {Name:mk2fee211061dd1b14760780f701508148afe02f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 04:32:58.230885   18442 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19479-15750/.minikube/profiles/stopped-upgrade-783000/apiserver.key.bab2fd25 ...
	I0819 04:32:58.230895   18442 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19479-15750/.minikube/profiles/stopped-upgrade-783000/apiserver.key.bab2fd25: {Name:mkd2735e8538c030d9a2b9c87f6dcf8ff54b0762 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 04:32:58.231056   18442 certs.go:381] copying /Users/jenkins/minikube-integration/19479-15750/.minikube/profiles/stopped-upgrade-783000/apiserver.crt.bab2fd25 -> /Users/jenkins/minikube-integration/19479-15750/.minikube/profiles/stopped-upgrade-783000/apiserver.crt
	I0819 04:32:58.231220   18442 certs.go:385] copying /Users/jenkins/minikube-integration/19479-15750/.minikube/profiles/stopped-upgrade-783000/apiserver.key.bab2fd25 -> /Users/jenkins/minikube-integration/19479-15750/.minikube/profiles/stopped-upgrade-783000/apiserver.key
	I0819 04:32:58.231367   18442 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19479-15750/.minikube/profiles/stopped-upgrade-783000/proxy-client.key
	I0819 04:32:58.231514   18442 certs.go:484] found cert: /Users/jenkins/minikube-integration/19479-15750/.minikube/certs/16240.pem (1338 bytes)
	W0819 04:32:58.231542   18442 certs.go:480] ignoring /Users/jenkins/minikube-integration/19479-15750/.minikube/certs/16240_empty.pem, impossibly tiny 0 bytes
	I0819 04:32:58.231550   18442 certs.go:484] found cert: /Users/jenkins/minikube-integration/19479-15750/.minikube/certs/ca-key.pem (1675 bytes)
	I0819 04:32:58.231570   18442 certs.go:484] found cert: /Users/jenkins/minikube-integration/19479-15750/.minikube/certs/ca.pem (1082 bytes)
	I0819 04:32:58.231593   18442 certs.go:484] found cert: /Users/jenkins/minikube-integration/19479-15750/.minikube/certs/cert.pem (1123 bytes)
	I0819 04:32:58.231612   18442 certs.go:484] found cert: /Users/jenkins/minikube-integration/19479-15750/.minikube/certs/key.pem (1675 bytes)
	I0819 04:32:58.231652   18442 certs.go:484] found cert: /Users/jenkins/minikube-integration/19479-15750/.minikube/files/etc/ssl/certs/162402.pem (1708 bytes)
	I0819 04:32:58.231999   18442 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19479-15750/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0819 04:32:58.239533   18442 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19479-15750/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0819 04:32:58.246630   18442 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19479-15750/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0819 04:32:58.253185   18442 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19479-15750/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0819 04:32:58.260552   18442 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19479-15750/.minikube/profiles/stopped-upgrade-783000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0819 04:32:58.268256   18442 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19479-15750/.minikube/profiles/stopped-upgrade-783000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0819 04:32:58.275134   18442 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19479-15750/.minikube/profiles/stopped-upgrade-783000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0819 04:32:58.281857   18442 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19479-15750/.minikube/profiles/stopped-upgrade-783000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0819 04:32:58.289130   18442 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19479-15750/.minikube/files/etc/ssl/certs/162402.pem --> /usr/share/ca-certificates/162402.pem (1708 bytes)
	I0819 04:32:58.296119   18442 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19479-15750/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0819 04:32:58.302851   18442 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19479-15750/.minikube/certs/16240.pem --> /usr/share/ca-certificates/16240.pem (1338 bytes)
	I0819 04:32:58.309323   18442 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0819 04:32:58.314458   18442 ssh_runner.go:195] Run: openssl version
	I0819 04:32:58.316225   18442 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/162402.pem && ln -fs /usr/share/ca-certificates/162402.pem /etc/ssl/certs/162402.pem"
	I0819 04:32:58.319124   18442 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/162402.pem
	I0819 04:32:58.320416   18442 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 19 11:16 /usr/share/ca-certificates/162402.pem
	I0819 04:32:58.320439   18442 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/162402.pem
	I0819 04:32:58.322203   18442 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/162402.pem /etc/ssl/certs/3ec20f2e.0"
	I0819 04:32:58.325343   18442 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0819 04:32:58.328738   18442 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0819 04:32:58.330546   18442 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 19 11:28 /usr/share/ca-certificates/minikubeCA.pem
	I0819 04:32:58.330566   18442 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0819 04:32:58.332353   18442 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0819 04:32:58.335676   18442 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16240.pem && ln -fs /usr/share/ca-certificates/16240.pem /etc/ssl/certs/16240.pem"
	I0819 04:32:58.338515   18442 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16240.pem
	I0819 04:32:58.339907   18442 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 19 11:16 /usr/share/ca-certificates/16240.pem
	I0819 04:32:58.339927   18442 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16240.pem
	I0819 04:32:58.341751   18442 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16240.pem /etc/ssl/certs/51391683.0"
	I0819 04:32:58.345095   18442 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0819 04:32:58.346605   18442 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0819 04:32:58.348601   18442 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0819 04:32:58.350654   18442 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0819 04:32:58.352789   18442 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0819 04:32:58.354887   18442 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0819 04:32:58.356669   18442 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0819 04:32:58.358541   18442 kubeadm.go:392] StartCluster: {Name:stopped-upgrade-783000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:53420 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-783000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0819 04:32:58.358604   18442 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0819 04:32:58.370481   18442 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0819 04:32:58.374193   18442 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0819 04:32:58.374204   18442 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0819 04:32:58.374249   18442 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0819 04:32:58.377596   18442 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0819 04:32:58.377910   18442 kubeconfig.go:47] verify endpoint returned: get endpoint: "stopped-upgrade-783000" does not appear in /Users/jenkins/minikube-integration/19479-15750/kubeconfig
	I0819 04:32:58.378009   18442 kubeconfig.go:62] /Users/jenkins/minikube-integration/19479-15750/kubeconfig needs updating (will repair): [kubeconfig missing "stopped-upgrade-783000" cluster setting kubeconfig missing "stopped-upgrade-783000" context setting]
	I0819 04:32:58.378204   18442 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19479-15750/kubeconfig: {Name:mkc1a7b531aa1d2d8dba135f7548c07a5ca371ca Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 04:32:58.378652   18442 kapi.go:59] client config for stopped-upgrade-783000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19479-15750/.minikube/profiles/stopped-upgrade-783000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19479-15750/.minikube/profiles/stopped-upgrade-783000/client.key", CAFile:"/Users/jenkins/minikube-integration/19479-15750/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1021bd610), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0819 04:32:58.378988   18442 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0819 04:32:58.381658   18442 kubeadm.go:640] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -11,7 +11,7 @@
	       - signing
	       - authentication
	 nodeRegistration:
	-  criSocket: /var/run/cri-dockerd.sock
	+  criSocket: unix:///var/run/cri-dockerd.sock
	   name: "stopped-upgrade-783000"
	   kubeletExtraArgs:
	     node-ip: 10.0.2.15
	@@ -49,7 +49,9 @@
	 authentication:
	   x509:
	     clientCAFile: /var/lib/minikube/certs/ca.crt
	-cgroupDriver: systemd
	+cgroupDriver: cgroupfs
	+hairpinMode: hairpin-veth
	+runtimeRequestTimeout: 15m
	 clusterDomain: "cluster.local"
	 # disable disk resource management by default
	 imageGCHighThresholdPercent: 100
	
	-- /stdout --
	I0819 04:32:58.381664   18442 kubeadm.go:1160] stopping kube-system containers ...
	I0819 04:32:58.381703   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0819 04:32:58.392171   18442 docker.go:483] Stopping containers: [069748194d02 3985c1d649a7 f269961c577e 390cd57e246c 3a9b46914d25 235331fd2fc2 10fabcb359f6 534015cf45e4]
	I0819 04:32:58.392234   18442 ssh_runner.go:195] Run: docker stop 069748194d02 3985c1d649a7 f269961c577e 390cd57e246c 3a9b46914d25 235331fd2fc2 10fabcb359f6 534015cf45e4
	I0819 04:32:58.402666   18442 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0819 04:32:58.408232   18442 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0819 04:32:58.411154   18442 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0819 04:32:58.411160   18442 kubeadm.go:157] found existing configuration files:
	
	I0819 04:32:58.411184   18442 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:53420 /etc/kubernetes/admin.conf
	I0819 04:32:58.413593   18442 kubeadm.go:163] "https://control-plane.minikube.internal:53420" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:53420 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0819 04:32:58.413618   18442 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0819 04:32:58.416667   18442 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:53420 /etc/kubernetes/kubelet.conf
	I0819 04:32:58.419675   18442 kubeadm.go:163] "https://control-plane.minikube.internal:53420" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:53420 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0819 04:32:58.419697   18442 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0819 04:32:58.422081   18442 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:53420 /etc/kubernetes/controller-manager.conf
	I0819 04:32:58.425018   18442 kubeadm.go:163] "https://control-plane.minikube.internal:53420" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:53420 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0819 04:32:58.425043   18442 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0819 04:32:58.428229   18442 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:53420 /etc/kubernetes/scheduler.conf
	I0819 04:32:58.431166   18442 kubeadm.go:163] "https://control-plane.minikube.internal:53420" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:53420 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0819 04:32:58.431197   18442 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0819 04:32:58.433717   18442 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0819 04:32:58.436965   18442 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0819 04:32:58.460269   18442 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0819 04:32:56.951960   17996 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 04:32:58.826515   18442 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0819 04:32:58.957907   18442 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0819 04:32:58.978663   18442 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0819 04:32:58.997368   18442 api_server.go:52] waiting for apiserver process to appear ...
	I0819 04:32:58.997448   18442 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 04:32:59.499780   18442 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 04:32:59.997749   18442 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 04:33:00.002112   18442 api_server.go:72] duration metric: took 1.004767583s to wait for apiserver process to appear ...
	I0819 04:33:00.002124   18442 api_server.go:88] waiting for apiserver healthz status ...
	I0819 04:33:00.002137   18442 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 04:33:01.954045   17996 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 04:33:01.954177   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 04:33:01.965656   17996 logs.go:276] 2 containers: [eca2d24d2934 624b0550691c]
	I0819 04:33:01.965731   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 04:33:01.977427   17996 logs.go:276] 2 containers: [7bbc43a24759 e4bff7533378]
	I0819 04:33:01.977504   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 04:33:01.988248   17996 logs.go:276] 1 containers: [0aefd6691079]
	I0819 04:33:01.988321   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 04:33:01.998897   17996 logs.go:276] 2 containers: [43eabe1b4048 8f857b9db64e]
	I0819 04:33:01.998967   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 04:33:02.009583   17996 logs.go:276] 1 containers: [b8c8b382d646]
	I0819 04:33:02.009654   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 04:33:02.020954   17996 logs.go:276] 2 containers: [76c5e8a3e103 a4f14f99ca97]
	I0819 04:33:02.021025   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 04:33:02.031579   17996 logs.go:276] 0 containers: []
	W0819 04:33:02.031592   17996 logs.go:278] No container was found matching "kindnet"
	I0819 04:33:02.031648   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 04:33:02.042573   17996 logs.go:276] 2 containers: [849a5f11363e 45c03f07359d]
	I0819 04:33:02.042593   17996 logs.go:123] Gathering logs for kubelet ...
	I0819 04:33:02.042598   17996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 04:33:02.084428   17996 logs.go:123] Gathering logs for etcd [7bbc43a24759] ...
	I0819 04:33:02.084448   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7bbc43a24759"
	I0819 04:33:02.099271   17996 logs.go:123] Gathering logs for coredns [0aefd6691079] ...
	I0819 04:33:02.099284   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0aefd6691079"
	I0819 04:33:02.116053   17996 logs.go:123] Gathering logs for kube-scheduler [8f857b9db64e] ...
	I0819 04:33:02.116065   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f857b9db64e"
	I0819 04:33:02.130698   17996 logs.go:123] Gathering logs for kube-proxy [b8c8b382d646] ...
	I0819 04:33:02.130711   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8c8b382d646"
	I0819 04:33:02.142758   17996 logs.go:123] Gathering logs for kube-controller-manager [76c5e8a3e103] ...
	I0819 04:33:02.142770   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 76c5e8a3e103"
	I0819 04:33:02.160981   17996 logs.go:123] Gathering logs for dmesg ...
	I0819 04:33:02.160991   17996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 04:33:02.166091   17996 logs.go:123] Gathering logs for kube-controller-manager [a4f14f99ca97] ...
	I0819 04:33:02.166098   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a4f14f99ca97"
	I0819 04:33:02.178233   17996 logs.go:123] Gathering logs for Docker ...
	I0819 04:33:02.178244   17996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 04:33:02.202777   17996 logs.go:123] Gathering logs for kube-apiserver [eca2d24d2934] ...
	I0819 04:33:02.202789   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eca2d24d2934"
	I0819 04:33:02.217474   17996 logs.go:123] Gathering logs for kube-apiserver [624b0550691c] ...
	I0819 04:33:02.217487   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 624b0550691c"
	I0819 04:33:02.242262   17996 logs.go:123] Gathering logs for storage-provisioner [849a5f11363e] ...
	I0819 04:33:02.242276   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 849a5f11363e"
	I0819 04:33:02.255121   17996 logs.go:123] Gathering logs for storage-provisioner [45c03f07359d] ...
	I0819 04:33:02.255133   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45c03f07359d"
	I0819 04:33:02.267057   17996 logs.go:123] Gathering logs for container status ...
	I0819 04:33:02.267070   17996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 04:33:02.279300   17996 logs.go:123] Gathering logs for describe nodes ...
	I0819 04:33:02.279311   17996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 04:33:02.321120   17996 logs.go:123] Gathering logs for etcd [e4bff7533378] ...
	I0819 04:33:02.321132   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4bff7533378"
	I0819 04:33:02.339258   17996 logs.go:123] Gathering logs for kube-scheduler [43eabe1b4048] ...
	I0819 04:33:02.339271   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 43eabe1b4048"
	I0819 04:33:04.854619   17996 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 04:33:05.004155   18442 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 04:33:05.004215   18442 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 04:33:09.856763   17996 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 04:33:09.856931   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 04:33:09.868887   17996 logs.go:276] 2 containers: [eca2d24d2934 624b0550691c]
	I0819 04:33:09.868967   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 04:33:09.879866   17996 logs.go:276] 2 containers: [7bbc43a24759 e4bff7533378]
	I0819 04:33:09.879942   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 04:33:09.890405   17996 logs.go:276] 1 containers: [0aefd6691079]
	I0819 04:33:09.890476   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 04:33:09.901235   17996 logs.go:276] 2 containers: [43eabe1b4048 8f857b9db64e]
	I0819 04:33:09.901305   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 04:33:09.912149   17996 logs.go:276] 1 containers: [b8c8b382d646]
	I0819 04:33:09.912216   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 04:33:09.923233   17996 logs.go:276] 2 containers: [76c5e8a3e103 a4f14f99ca97]
	I0819 04:33:09.923304   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 04:33:09.935827   17996 logs.go:276] 0 containers: []
	W0819 04:33:09.935838   17996 logs.go:278] No container was found matching "kindnet"
	I0819 04:33:09.935896   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 04:33:09.947079   17996 logs.go:276] 2 containers: [849a5f11363e 45c03f07359d]
	I0819 04:33:09.947099   17996 logs.go:123] Gathering logs for dmesg ...
	I0819 04:33:09.947105   17996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 04:33:09.951384   17996 logs.go:123] Gathering logs for kube-apiserver [624b0550691c] ...
	I0819 04:33:09.951393   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 624b0550691c"
	I0819 04:33:09.970950   17996 logs.go:123] Gathering logs for kube-scheduler [43eabe1b4048] ...
	I0819 04:33:09.970961   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 43eabe1b4048"
	I0819 04:33:09.983489   17996 logs.go:123] Gathering logs for kube-apiserver [eca2d24d2934] ...
	I0819 04:33:09.983502   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eca2d24d2934"
	I0819 04:33:09.997587   17996 logs.go:123] Gathering logs for container status ...
	I0819 04:33:09.997595   17996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 04:33:10.010190   17996 logs.go:123] Gathering logs for etcd [7bbc43a24759] ...
	I0819 04:33:10.010199   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7bbc43a24759"
	I0819 04:33:10.024792   17996 logs.go:123] Gathering logs for kube-controller-manager [a4f14f99ca97] ...
	I0819 04:33:10.024805   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a4f14f99ca97"
	I0819 04:33:10.036974   17996 logs.go:123] Gathering logs for storage-provisioner [849a5f11363e] ...
	I0819 04:33:10.036985   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 849a5f11363e"
	I0819 04:33:10.048369   17996 logs.go:123] Gathering logs for coredns [0aefd6691079] ...
	I0819 04:33:10.048380   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0aefd6691079"
	I0819 04:33:10.060291   17996 logs.go:123] Gathering logs for kube-scheduler [8f857b9db64e] ...
	I0819 04:33:10.060305   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f857b9db64e"
	I0819 04:33:10.075378   17996 logs.go:123] Gathering logs for kube-proxy [b8c8b382d646] ...
	I0819 04:33:10.075389   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8c8b382d646"
	I0819 04:33:10.088411   17996 logs.go:123] Gathering logs for kube-controller-manager [76c5e8a3e103] ...
	I0819 04:33:10.088422   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 76c5e8a3e103"
	I0819 04:33:10.106070   17996 logs.go:123] Gathering logs for storage-provisioner [45c03f07359d] ...
	I0819 04:33:10.106080   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45c03f07359d"
	I0819 04:33:10.118157   17996 logs.go:123] Gathering logs for kubelet ...
	I0819 04:33:10.118169   17996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 04:33:10.157767   17996 logs.go:123] Gathering logs for describe nodes ...
	I0819 04:33:10.157782   17996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 04:33:10.192517   17996 logs.go:123] Gathering logs for etcd [e4bff7533378] ...
	I0819 04:33:10.192532   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4bff7533378"
	I0819 04:33:10.210403   17996 logs.go:123] Gathering logs for Docker ...
	I0819 04:33:10.210415   17996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 04:33:10.004296   18442 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 04:33:10.004333   18442 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 04:33:12.736240   17996 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 04:33:15.004608   18442 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 04:33:15.004676   18442 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 04:33:17.736661   17996 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 04:33:17.736776   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 04:33:17.747889   17996 logs.go:276] 2 containers: [eca2d24d2934 624b0550691c]
	I0819 04:33:17.747971   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 04:33:17.762795   17996 logs.go:276] 2 containers: [7bbc43a24759 e4bff7533378]
	I0819 04:33:17.762876   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 04:33:17.774048   17996 logs.go:276] 1 containers: [0aefd6691079]
	I0819 04:33:17.774124   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 04:33:17.784703   17996 logs.go:276] 2 containers: [43eabe1b4048 8f857b9db64e]
	I0819 04:33:17.784773   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 04:33:17.803462   17996 logs.go:276] 1 containers: [b8c8b382d646]
	I0819 04:33:17.803528   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 04:33:17.814211   17996 logs.go:276] 2 containers: [76c5e8a3e103 a4f14f99ca97]
	I0819 04:33:17.814279   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 04:33:17.824528   17996 logs.go:276] 0 containers: []
	W0819 04:33:17.824538   17996 logs.go:278] No container was found matching "kindnet"
	I0819 04:33:17.824589   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 04:33:17.834844   17996 logs.go:276] 2 containers: [849a5f11363e 45c03f07359d]
	I0819 04:33:17.834864   17996 logs.go:123] Gathering logs for kube-scheduler [43eabe1b4048] ...
	I0819 04:33:17.834869   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 43eabe1b4048"
	I0819 04:33:17.849746   17996 logs.go:123] Gathering logs for storage-provisioner [849a5f11363e] ...
	I0819 04:33:17.849758   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 849a5f11363e"
	I0819 04:33:17.861742   17996 logs.go:123] Gathering logs for storage-provisioner [45c03f07359d] ...
	I0819 04:33:17.861753   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45c03f07359d"
	I0819 04:33:17.873728   17996 logs.go:123] Gathering logs for Docker ...
	I0819 04:33:17.873740   17996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 04:33:17.896216   17996 logs.go:123] Gathering logs for describe nodes ...
	I0819 04:33:17.896235   17996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 04:33:17.932760   17996 logs.go:123] Gathering logs for etcd [7bbc43a24759] ...
	I0819 04:33:17.932771   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7bbc43a24759"
	I0819 04:33:17.947113   17996 logs.go:123] Gathering logs for kube-apiserver [eca2d24d2934] ...
	I0819 04:33:17.947125   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eca2d24d2934"
	I0819 04:33:17.961282   17996 logs.go:123] Gathering logs for kube-proxy [b8c8b382d646] ...
	I0819 04:33:17.961292   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8c8b382d646"
	I0819 04:33:17.977027   17996 logs.go:123] Gathering logs for kubelet ...
	I0819 04:33:17.977041   17996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 04:33:18.017825   17996 logs.go:123] Gathering logs for dmesg ...
	I0819 04:33:18.017836   17996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 04:33:18.022290   17996 logs.go:123] Gathering logs for coredns [0aefd6691079] ...
	I0819 04:33:18.022299   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0aefd6691079"
	I0819 04:33:18.033120   17996 logs.go:123] Gathering logs for kube-scheduler [8f857b9db64e] ...
	I0819 04:33:18.033132   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f857b9db64e"
	I0819 04:33:18.048178   17996 logs.go:123] Gathering logs for kube-controller-manager [76c5e8a3e103] ...
	I0819 04:33:18.048188   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 76c5e8a3e103"
	I0819 04:33:18.065941   17996 logs.go:123] Gathering logs for container status ...
	I0819 04:33:18.065952   17996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 04:33:18.078497   17996 logs.go:123] Gathering logs for kube-apiserver [624b0550691c] ...
	I0819 04:33:18.078509   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 624b0550691c"
	I0819 04:33:18.098456   17996 logs.go:123] Gathering logs for etcd [e4bff7533378] ...
	I0819 04:33:18.098466   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4bff7533378"
	I0819 04:33:18.116355   17996 logs.go:123] Gathering logs for kube-controller-manager [a4f14f99ca97] ...
	I0819 04:33:18.116368   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a4f14f99ca97"
	I0819 04:33:20.005405   18442 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 04:33:20.005462   18442 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 04:33:20.629968   17996 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 04:33:25.632372   17996 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 04:33:25.632448   17996 kubeadm.go:597] duration metric: took 4m4.468888041s to restartPrimaryControlPlane
	W0819 04:33:25.632511   17996 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0819 04:33:25.632542   17996 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0819 04:33:26.602323   17996 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 04:33:26.607271   17996 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0819 04:33:26.609970   17996 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0819 04:33:26.612788   17996 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0819 04:33:26.612795   17996 kubeadm.go:157] found existing configuration files:
	
	I0819 04:33:26.612823   17996 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:53188 /etc/kubernetes/admin.conf
	I0819 04:33:26.615541   17996 kubeadm.go:163] "https://control-plane.minikube.internal:53188" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:53188 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0819 04:33:26.615564   17996 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0819 04:33:26.617956   17996 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:53188 /etc/kubernetes/kubelet.conf
	I0819 04:33:26.620918   17996 kubeadm.go:163] "https://control-plane.minikube.internal:53188" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:53188 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0819 04:33:26.620942   17996 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0819 04:33:26.624151   17996 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:53188 /etc/kubernetes/controller-manager.conf
	I0819 04:33:26.626585   17996 kubeadm.go:163] "https://control-plane.minikube.internal:53188" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:53188 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0819 04:33:26.626606   17996 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0819 04:33:26.629395   17996 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:53188 /etc/kubernetes/scheduler.conf
	I0819 04:33:26.632655   17996 kubeadm.go:163] "https://control-plane.minikube.internal:53188" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:53188 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0819 04:33:26.632685   17996 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0819 04:33:26.635773   17996 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0819 04:33:26.654580   17996 kubeadm.go:310] [init] Using Kubernetes version: v1.24.1
	I0819 04:33:26.654674   17996 kubeadm.go:310] [preflight] Running pre-flight checks
	I0819 04:33:26.701581   17996 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0819 04:33:26.701664   17996 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0819 04:33:26.701720   17996 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0819 04:33:26.751598   17996 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0819 04:33:26.755698   17996 out.go:235]   - Generating certificates and keys ...
	I0819 04:33:26.755732   17996 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0819 04:33:26.755768   17996 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0819 04:33:26.755825   17996 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0819 04:33:26.755863   17996 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0819 04:33:26.755893   17996 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0819 04:33:26.755922   17996 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0819 04:33:26.755952   17996 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0819 04:33:26.756033   17996 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0819 04:33:26.756147   17996 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0819 04:33:26.756190   17996 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0819 04:33:26.756211   17996 kubeadm.go:310] [certs] Using the existing "sa" key
	I0819 04:33:26.756238   17996 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0819 04:33:26.873183   17996 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0819 04:33:26.900590   17996 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0819 04:33:27.008210   17996 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0819 04:33:27.271475   17996 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0819 04:33:27.302374   17996 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0819 04:33:27.303644   17996 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0819 04:33:27.303681   17996 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0819 04:33:27.392181   17996 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0819 04:33:25.006771   18442 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 04:33:25.006862   18442 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 04:33:27.395385   17996 out.go:235]   - Booting up control plane ...
	I0819 04:33:27.395453   17996 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0819 04:33:27.395498   17996 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0819 04:33:27.395592   17996 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0819 04:33:27.395678   17996 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0819 04:33:27.395906   17996 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0819 04:33:31.899066   17996 kubeadm.go:310] [apiclient] All control plane components are healthy after 4.504303 seconds
	I0819 04:33:31.899128   17996 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0819 04:33:31.902659   17996 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0819 04:33:32.425951   17996 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0819 04:33:32.426347   17996 kubeadm.go:310] [mark-control-plane] Marking the node running-upgrade-038000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0819 04:33:32.929932   17996 kubeadm.go:310] [bootstrap-token] Using token: u07p32.ydjqlodqa5aupx7g
	I0819 04:33:32.935342   17996 out.go:235]   - Configuring RBAC rules ...
	I0819 04:33:32.935398   17996 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0819 04:33:32.935468   17996 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0819 04:33:32.937154   17996 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0819 04:33:32.938896   17996 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0819 04:33:32.939700   17996 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0819 04:33:32.940532   17996 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0819 04:33:32.943600   17996 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0819 04:33:33.101742   17996 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0819 04:33:33.334158   17996 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0819 04:33:33.334657   17996 kubeadm.go:310] 
	I0819 04:33:33.334703   17996 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0819 04:33:33.334708   17996 kubeadm.go:310] 
	I0819 04:33:33.334775   17996 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0819 04:33:33.334780   17996 kubeadm.go:310] 
	I0819 04:33:33.334803   17996 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0819 04:33:33.334833   17996 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0819 04:33:33.334897   17996 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0819 04:33:33.334904   17996 kubeadm.go:310] 
	I0819 04:33:33.334939   17996 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0819 04:33:33.334944   17996 kubeadm.go:310] 
	I0819 04:33:33.335041   17996 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0819 04:33:33.335046   17996 kubeadm.go:310] 
	I0819 04:33:33.335077   17996 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0819 04:33:33.335126   17996 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0819 04:33:33.335189   17996 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0819 04:33:33.335192   17996 kubeadm.go:310] 
	I0819 04:33:33.335227   17996 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0819 04:33:33.335260   17996 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0819 04:33:33.335291   17996 kubeadm.go:310] 
	I0819 04:33:33.335332   17996 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token u07p32.ydjqlodqa5aupx7g \
	I0819 04:33:33.335390   17996 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:fdec06fb19d9977c9b3b338deaa57f7eb3ba1844358bb196808407a1fb1d5577 \
	I0819 04:33:33.335406   17996 kubeadm.go:310] 	--control-plane 
	I0819 04:33:33.335410   17996 kubeadm.go:310] 
	I0819 04:33:33.335458   17996 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0819 04:33:33.335463   17996 kubeadm.go:310] 
	I0819 04:33:33.335497   17996 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token u07p32.ydjqlodqa5aupx7g \
	I0819 04:33:33.335561   17996 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:fdec06fb19d9977c9b3b338deaa57f7eb3ba1844358bb196808407a1fb1d5577 
	I0819 04:33:33.335616   17996 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0819 04:33:33.335630   17996 cni.go:84] Creating CNI manager for ""
	I0819 04:33:33.335640   17996 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0819 04:33:33.339905   17996 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0819 04:33:33.343957   17996 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0819 04:33:33.346982   17996 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0819 04:33:33.352240   17996 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0819 04:33:33.352288   17996 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 04:33:33.352297   17996 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes running-upgrade-038000 minikube.k8s.io/updated_at=2024_08_19T04_33_33_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=c539cede7c104fd836c3af55c4ca24a6409a3ce6 minikube.k8s.io/name=running-upgrade-038000 minikube.k8s.io/primary=true
	I0819 04:33:33.386350   17996 kubeadm.go:1113] duration metric: took 34.083875ms to wait for elevateKubeSystemPrivileges
	I0819 04:33:33.392333   17996 ops.go:34] apiserver oom_adj: -16
	I0819 04:33:33.392345   17996 kubeadm.go:394] duration metric: took 4m12.243566541s to StartCluster
	I0819 04:33:33.392357   17996 settings.go:142] acquiring lock: {Name:mk0efade08e7fded56aa74c9b61139ee991f6648 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 04:33:33.392506   17996 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19479-15750/kubeconfig
	I0819 04:33:33.392896   17996 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19479-15750/kubeconfig: {Name:mkc1a7b531aa1d2d8dba135f7548c07a5ca371ca Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 04:33:33.393101   17996 start.go:235] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0819 04:33:33.393161   17996 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0819 04:33:33.393195   17996 config.go:182] Loaded profile config "running-upgrade-038000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0819 04:33:33.393198   17996 addons.go:69] Setting storage-provisioner=true in profile "running-upgrade-038000"
	I0819 04:33:33.393209   17996 addons.go:234] Setting addon storage-provisioner=true in "running-upgrade-038000"
	W0819 04:33:33.393212   17996 addons.go:243] addon storage-provisioner should already be in state true
	I0819 04:33:33.393223   17996 host.go:66] Checking if "running-upgrade-038000" exists ...
	I0819 04:33:33.393220   17996 addons.go:69] Setting default-storageclass=true in profile "running-upgrade-038000"
	I0819 04:33:33.393291   17996 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "running-upgrade-038000"
	I0819 04:33:33.395873   17996 out.go:177] * Verifying Kubernetes components...
	I0819 04:33:33.396711   17996 kapi.go:59] client config for running-upgrade-038000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19479-15750/.minikube/profiles/running-upgrade-038000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19479-15750/.minikube/profiles/running-upgrade-038000/client.key", CAFile:"/Users/jenkins/minikube-integration/19479-15750/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x103fd9610), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0819 04:33:33.399175   17996 addons.go:234] Setting addon default-storageclass=true in "running-upgrade-038000"
	W0819 04:33:33.399180   17996 addons.go:243] addon default-storageclass should already be in state true
	I0819 04:33:33.399188   17996 host.go:66] Checking if "running-upgrade-038000" exists ...
	I0819 04:33:33.399701   17996 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0819 04:33:33.399707   17996 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0819 04:33:33.399712   17996 sshutil.go:53] new ssh client: &{IP:localhost Port:53156 SSHKeyPath:/Users/jenkins/minikube-integration/19479-15750/.minikube/machines/running-upgrade-038000/id_rsa Username:docker}
	I0819 04:33:33.402807   17996 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0819 04:33:30.007956   18442 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 04:33:30.007982   18442 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 04:33:33.406860   17996 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 04:33:33.409915   17996 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0819 04:33:33.409922   17996 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0819 04:33:33.409928   17996 sshutil.go:53] new ssh client: &{IP:localhost Port:53156 SSHKeyPath:/Users/jenkins/minikube-integration/19479-15750/.minikube/machines/running-upgrade-038000/id_rsa Username:docker}
	I0819 04:33:33.487508   17996 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0819 04:33:33.492638   17996 api_server.go:52] waiting for apiserver process to appear ...
	I0819 04:33:33.492687   17996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 04:33:33.496791   17996 api_server.go:72] duration metric: took 103.675833ms to wait for apiserver process to appear ...
	I0819 04:33:33.496800   17996 api_server.go:88] waiting for apiserver healthz status ...
	I0819 04:33:33.496807   17996 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 04:33:33.509061   17996 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0819 04:33:33.536022   17996 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0819 04:33:33.850189   17996 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0819 04:33:33.850201   17996 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0819 04:33:35.009217   18442 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 04:33:35.009259   18442 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 04:33:38.498803   17996 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 04:33:38.498838   17996 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 04:33:40.010846   18442 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 04:33:40.010889   18442 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 04:33:43.499049   17996 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 04:33:43.499069   17996 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 04:33:45.012969   18442 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 04:33:45.013012   18442 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 04:33:48.499273   17996 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 04:33:48.499292   17996 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 04:33:50.015174   18442 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 04:33:50.015215   18442 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 04:33:53.499564   17996 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 04:33:53.499627   17996 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 04:33:55.017392   18442 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 04:33:55.017439   18442 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 04:33:58.500031   17996 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 04:33:58.500056   17996 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 04:34:00.019639   18442 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 04:34:00.019785   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 04:34:00.032493   18442 logs.go:276] 2 containers: [c8fa750d9da6 3985c1d649a7]
	I0819 04:34:00.032594   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 04:34:00.043727   18442 logs.go:276] 2 containers: [02704fafe517 f269961c577e]
	I0819 04:34:00.043800   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 04:34:00.054208   18442 logs.go:276] 1 containers: [19b7edbe3a9f]
	I0819 04:34:00.054286   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 04:34:00.065178   18442 logs.go:276] 2 containers: [1b07c6af2b17 069748194d02]
	I0819 04:34:00.065258   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 04:34:00.075676   18442 logs.go:276] 1 containers: [140eb9e8ffdb]
	I0819 04:34:00.075753   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 04:34:00.087173   18442 logs.go:276] 2 containers: [da57f69785e6 390cd57e246c]
	I0819 04:34:00.087246   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 04:34:00.097126   18442 logs.go:276] 0 containers: []
	W0819 04:34:00.097137   18442 logs.go:278] No container was found matching "kindnet"
	I0819 04:34:00.097196   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 04:34:00.107623   18442 logs.go:276] 2 containers: [a7ed00b2340f b8b9fabe936d]
	I0819 04:34:00.107656   18442 logs.go:123] Gathering logs for etcd [02704fafe517] ...
	I0819 04:34:00.107663   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 02704fafe517"
	I0819 04:34:00.121454   18442 logs.go:123] Gathering logs for storage-provisioner [a7ed00b2340f] ...
	I0819 04:34:00.121465   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7ed00b2340f"
	I0819 04:34:00.132793   18442 logs.go:123] Gathering logs for coredns [19b7edbe3a9f] ...
	I0819 04:34:00.132805   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 19b7edbe3a9f"
	I0819 04:34:00.144771   18442 logs.go:123] Gathering logs for kube-scheduler [1b07c6af2b17] ...
	I0819 04:34:00.144785   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b07c6af2b17"
	I0819 04:34:00.156564   18442 logs.go:123] Gathering logs for Docker ...
	I0819 04:34:00.156577   18442 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 04:34:00.182677   18442 logs.go:123] Gathering logs for dmesg ...
	I0819 04:34:00.182686   18442 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 04:34:00.186939   18442 logs.go:123] Gathering logs for describe nodes ...
	I0819 04:34:00.186949   18442 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 04:34:00.283857   18442 logs.go:123] Gathering logs for etcd [f269961c577e] ...
	I0819 04:34:00.283870   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f269961c577e"
	I0819 04:34:00.299278   18442 logs.go:123] Gathering logs for kube-controller-manager [da57f69785e6] ...
	I0819 04:34:00.299289   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da57f69785e6"
	I0819 04:34:00.316555   18442 logs.go:123] Gathering logs for storage-provisioner [b8b9fabe936d] ...
	I0819 04:34:00.316565   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8b9fabe936d"
	I0819 04:34:00.328426   18442 logs.go:123] Gathering logs for container status ...
	I0819 04:34:00.328438   18442 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 04:34:00.340377   18442 logs.go:123] Gathering logs for kubelet ...
	I0819 04:34:00.340391   18442 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 04:34:00.377803   18442 logs.go:123] Gathering logs for kube-scheduler [069748194d02] ...
	I0819 04:34:00.377815   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 069748194d02"
	I0819 04:34:00.394919   18442 logs.go:123] Gathering logs for kube-proxy [140eb9e8ffdb] ...
	I0819 04:34:00.394934   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 140eb9e8ffdb"
	I0819 04:34:00.406805   18442 logs.go:123] Gathering logs for kube-apiserver [c8fa750d9da6] ...
	I0819 04:34:00.406819   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c8fa750d9da6"
	I0819 04:34:00.421364   18442 logs.go:123] Gathering logs for kube-apiserver [3985c1d649a7] ...
	I0819 04:34:00.421378   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3985c1d649a7"
	I0819 04:34:00.466537   18442 logs.go:123] Gathering logs for kube-controller-manager [390cd57e246c] ...
	I0819 04:34:00.466550   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 390cd57e246c"
	I0819 04:34:02.985620   18442 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 04:34:03.500626   17996 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 04:34:03.500672   17996 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W0819 04:34:03.851951   17996 out.go:270] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I0819 04:34:03.856669   17996 out.go:177] * Enabled addons: storage-provisioner
	I0819 04:34:03.863524   17996 addons.go:510] duration metric: took 30.471051833s for enable addons: enabled=[storage-provisioner]
	I0819 04:34:07.987799   18442 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 04:34:07.988036   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 04:34:08.009161   18442 logs.go:276] 2 containers: [c8fa750d9da6 3985c1d649a7]
	I0819 04:34:08.009274   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 04:34:08.023266   18442 logs.go:276] 2 containers: [02704fafe517 f269961c577e]
	I0819 04:34:08.023341   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 04:34:08.036766   18442 logs.go:276] 1 containers: [19b7edbe3a9f]
	I0819 04:34:08.036831   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 04:34:08.047240   18442 logs.go:276] 2 containers: [1b07c6af2b17 069748194d02]
	I0819 04:34:08.047321   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 04:34:08.058161   18442 logs.go:276] 1 containers: [140eb9e8ffdb]
	I0819 04:34:08.058230   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 04:34:08.068808   18442 logs.go:276] 2 containers: [da57f69785e6 390cd57e246c]
	I0819 04:34:08.068880   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 04:34:08.088252   18442 logs.go:276] 0 containers: []
	W0819 04:34:08.088265   18442 logs.go:278] No container was found matching "kindnet"
	I0819 04:34:08.088332   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 04:34:08.104340   18442 logs.go:276] 2 containers: [a7ed00b2340f b8b9fabe936d]
	I0819 04:34:08.104363   18442 logs.go:123] Gathering logs for etcd [02704fafe517] ...
	I0819 04:34:08.104369   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 02704fafe517"
	I0819 04:34:08.117787   18442 logs.go:123] Gathering logs for container status ...
	I0819 04:34:08.117797   18442 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 04:34:08.130694   18442 logs.go:123] Gathering logs for kube-apiserver [3985c1d649a7] ...
	I0819 04:34:08.130706   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3985c1d649a7"
	I0819 04:34:08.169725   18442 logs.go:123] Gathering logs for kube-scheduler [1b07c6af2b17] ...
	I0819 04:34:08.169737   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b07c6af2b17"
	I0819 04:34:08.181726   18442 logs.go:123] Gathering logs for kube-controller-manager [da57f69785e6] ...
	I0819 04:34:08.181738   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da57f69785e6"
	I0819 04:34:08.199110   18442 logs.go:123] Gathering logs for storage-provisioner [b8b9fabe936d] ...
	I0819 04:34:08.199122   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8b9fabe936d"
	I0819 04:34:08.210677   18442 logs.go:123] Gathering logs for Docker ...
	I0819 04:34:08.210687   18442 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 04:34:08.234782   18442 logs.go:123] Gathering logs for kubelet ...
	I0819 04:34:08.234801   18442 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 04:34:08.274737   18442 logs.go:123] Gathering logs for coredns [19b7edbe3a9f] ...
	I0819 04:34:08.274751   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 19b7edbe3a9f"
	I0819 04:34:08.290974   18442 logs.go:123] Gathering logs for kube-scheduler [069748194d02] ...
	I0819 04:34:08.290986   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 069748194d02"
	I0819 04:34:08.310013   18442 logs.go:123] Gathering logs for kube-controller-manager [390cd57e246c] ...
	I0819 04:34:08.310026   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 390cd57e246c"
	I0819 04:34:08.325332   18442 logs.go:123] Gathering logs for storage-provisioner [a7ed00b2340f] ...
	I0819 04:34:08.325345   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7ed00b2340f"
	I0819 04:34:08.336755   18442 logs.go:123] Gathering logs for dmesg ...
	I0819 04:34:08.336767   18442 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 04:34:08.341532   18442 logs.go:123] Gathering logs for describe nodes ...
	I0819 04:34:08.341539   18442 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 04:34:08.377164   18442 logs.go:123] Gathering logs for kube-apiserver [c8fa750d9da6] ...
	I0819 04:34:08.377174   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c8fa750d9da6"
	I0819 04:34:08.391709   18442 logs.go:123] Gathering logs for etcd [f269961c577e] ...
	I0819 04:34:08.391721   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f269961c577e"
	I0819 04:34:08.406395   18442 logs.go:123] Gathering logs for kube-proxy [140eb9e8ffdb] ...
	I0819 04:34:08.406407   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 140eb9e8ffdb"
	I0819 04:34:08.501449   17996 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 04:34:08.501489   17996 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 04:34:10.920235   18442 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 04:34:13.502472   17996 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 04:34:13.502521   17996 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 04:34:15.922542   18442 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 04:34:15.922686   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 04:34:15.935441   18442 logs.go:276] 2 containers: [c8fa750d9da6 3985c1d649a7]
	I0819 04:34:15.935523   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 04:34:15.946701   18442 logs.go:276] 2 containers: [02704fafe517 f269961c577e]
	I0819 04:34:15.946766   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 04:34:15.957670   18442 logs.go:276] 1 containers: [19b7edbe3a9f]
	I0819 04:34:15.957737   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 04:34:15.968448   18442 logs.go:276] 2 containers: [1b07c6af2b17 069748194d02]
	I0819 04:34:15.968515   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 04:34:15.979365   18442 logs.go:276] 1 containers: [140eb9e8ffdb]
	I0819 04:34:15.979432   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 04:34:15.991182   18442 logs.go:276] 2 containers: [da57f69785e6 390cd57e246c]
	I0819 04:34:15.991254   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 04:34:16.002855   18442 logs.go:276] 0 containers: []
	W0819 04:34:16.002866   18442 logs.go:278] No container was found matching "kindnet"
	I0819 04:34:16.002922   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 04:34:16.013221   18442 logs.go:276] 2 containers: [a7ed00b2340f b8b9fabe936d]
	I0819 04:34:16.013239   18442 logs.go:123] Gathering logs for kubelet ...
	I0819 04:34:16.013245   18442 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 04:34:16.051098   18442 logs.go:123] Gathering logs for etcd [f269961c577e] ...
	I0819 04:34:16.051109   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f269961c577e"
	I0819 04:34:16.065992   18442 logs.go:123] Gathering logs for kube-scheduler [1b07c6af2b17] ...
	I0819 04:34:16.066008   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b07c6af2b17"
	I0819 04:34:16.078207   18442 logs.go:123] Gathering logs for kube-scheduler [069748194d02] ...
	I0819 04:34:16.078219   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 069748194d02"
	I0819 04:34:16.093927   18442 logs.go:123] Gathering logs for storage-provisioner [a7ed00b2340f] ...
	I0819 04:34:16.093939   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7ed00b2340f"
	I0819 04:34:16.105563   18442 logs.go:123] Gathering logs for storage-provisioner [b8b9fabe936d] ...
	I0819 04:34:16.105576   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8b9fabe936d"
	I0819 04:34:16.116694   18442 logs.go:123] Gathering logs for container status ...
	I0819 04:34:16.116707   18442 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 04:34:16.128758   18442 logs.go:123] Gathering logs for describe nodes ...
	I0819 04:34:16.128769   18442 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 04:34:16.164691   18442 logs.go:123] Gathering logs for kube-apiserver [c8fa750d9da6] ...
	I0819 04:34:16.164705   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c8fa750d9da6"
	I0819 04:34:16.182680   18442 logs.go:123] Gathering logs for coredns [19b7edbe3a9f] ...
	I0819 04:34:16.182692   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 19b7edbe3a9f"
	I0819 04:34:16.196071   18442 logs.go:123] Gathering logs for kube-proxy [140eb9e8ffdb] ...
	I0819 04:34:16.196085   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 140eb9e8ffdb"
	I0819 04:34:16.211563   18442 logs.go:123] Gathering logs for kube-controller-manager [390cd57e246c] ...
	I0819 04:34:16.211576   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 390cd57e246c"
	I0819 04:34:16.226481   18442 logs.go:123] Gathering logs for kube-apiserver [3985c1d649a7] ...
	I0819 04:34:16.226497   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3985c1d649a7"
	I0819 04:34:16.264555   18442 logs.go:123] Gathering logs for etcd [02704fafe517] ...
	I0819 04:34:16.264568   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 02704fafe517"
	I0819 04:34:16.278373   18442 logs.go:123] Gathering logs for kube-controller-manager [da57f69785e6] ...
	I0819 04:34:16.278387   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da57f69785e6"
	I0819 04:34:16.296220   18442 logs.go:123] Gathering logs for dmesg ...
	I0819 04:34:16.296233   18442 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 04:34:16.300749   18442 logs.go:123] Gathering logs for Docker ...
	I0819 04:34:16.300759   18442 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 04:34:18.503807   17996 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 04:34:18.503859   17996 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 04:34:18.826615   18442 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 04:34:23.505556   17996 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 04:34:23.505578   17996 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 04:34:23.828959   18442 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 04:34:23.829068   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 04:34:23.847257   18442 logs.go:276] 2 containers: [c8fa750d9da6 3985c1d649a7]
	I0819 04:34:23.847320   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 04:34:23.859740   18442 logs.go:276] 2 containers: [02704fafe517 f269961c577e]
	I0819 04:34:23.859799   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 04:34:23.869591   18442 logs.go:276] 1 containers: [19b7edbe3a9f]
	I0819 04:34:23.869666   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 04:34:23.880306   18442 logs.go:276] 2 containers: [1b07c6af2b17 069748194d02]
	I0819 04:34:23.880368   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 04:34:23.895422   18442 logs.go:276] 1 containers: [140eb9e8ffdb]
	I0819 04:34:23.895497   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 04:34:23.905763   18442 logs.go:276] 2 containers: [da57f69785e6 390cd57e246c]
	I0819 04:34:23.905836   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 04:34:23.916010   18442 logs.go:276] 0 containers: []
	W0819 04:34:23.916023   18442 logs.go:278] No container was found matching "kindnet"
	I0819 04:34:23.916083   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 04:34:23.926031   18442 logs.go:276] 2 containers: [a7ed00b2340f b8b9fabe936d]
	I0819 04:34:23.926051   18442 logs.go:123] Gathering logs for Docker ...
	I0819 04:34:23.926056   18442 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 04:34:23.950132   18442 logs.go:123] Gathering logs for dmesg ...
	I0819 04:34:23.950151   18442 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 04:34:23.954466   18442 logs.go:123] Gathering logs for kube-apiserver [c8fa750d9da6] ...
	I0819 04:34:23.954472   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c8fa750d9da6"
	I0819 04:34:23.968372   18442 logs.go:123] Gathering logs for kube-controller-manager [da57f69785e6] ...
	I0819 04:34:23.968382   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da57f69785e6"
	I0819 04:34:23.987468   18442 logs.go:123] Gathering logs for coredns [19b7edbe3a9f] ...
	I0819 04:34:23.987478   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 19b7edbe3a9f"
	I0819 04:34:23.998521   18442 logs.go:123] Gathering logs for kube-scheduler [1b07c6af2b17] ...
	I0819 04:34:23.998532   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b07c6af2b17"
	I0819 04:34:24.010709   18442 logs.go:123] Gathering logs for container status ...
	I0819 04:34:24.010719   18442 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 04:34:24.022325   18442 logs.go:123] Gathering logs for kubelet ...
	I0819 04:34:24.022337   18442 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 04:34:24.059790   18442 logs.go:123] Gathering logs for kube-apiserver [3985c1d649a7] ...
	I0819 04:34:24.059803   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3985c1d649a7"
	I0819 04:34:24.103654   18442 logs.go:123] Gathering logs for etcd [f269961c577e] ...
	I0819 04:34:24.103665   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f269961c577e"
	I0819 04:34:24.118328   18442 logs.go:123] Gathering logs for kube-controller-manager [390cd57e246c] ...
	I0819 04:34:24.118339   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 390cd57e246c"
	I0819 04:34:24.134496   18442 logs.go:123] Gathering logs for storage-provisioner [a7ed00b2340f] ...
	I0819 04:34:24.134507   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7ed00b2340f"
	I0819 04:34:24.151046   18442 logs.go:123] Gathering logs for kube-proxy [140eb9e8ffdb] ...
	I0819 04:34:24.151057   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 140eb9e8ffdb"
	I0819 04:34:24.162508   18442 logs.go:123] Gathering logs for storage-provisioner [b8b9fabe936d] ...
	I0819 04:34:24.162519   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8b9fabe936d"
	I0819 04:34:24.174110   18442 logs.go:123] Gathering logs for describe nodes ...
	I0819 04:34:24.174122   18442 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 04:34:24.209235   18442 logs.go:123] Gathering logs for etcd [02704fafe517] ...
	I0819 04:34:24.209247   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 02704fafe517"
	I0819 04:34:24.224037   18442 logs.go:123] Gathering logs for kube-scheduler [069748194d02] ...
	I0819 04:34:24.224047   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 069748194d02"
	I0819 04:34:26.741364   18442 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 04:34:28.507672   17996 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 04:34:28.507711   17996 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 04:34:31.743772   18442 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 04:34:31.743902   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 04:34:31.762113   18442 logs.go:276] 2 containers: [c8fa750d9da6 3985c1d649a7]
	I0819 04:34:31.762197   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 04:34:31.773523   18442 logs.go:276] 2 containers: [02704fafe517 f269961c577e]
	I0819 04:34:31.773595   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 04:34:31.788608   18442 logs.go:276] 1 containers: [19b7edbe3a9f]
	I0819 04:34:31.788677   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 04:34:31.799554   18442 logs.go:276] 2 containers: [1b07c6af2b17 069748194d02]
	I0819 04:34:31.799628   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 04:34:31.811799   18442 logs.go:276] 1 containers: [140eb9e8ffdb]
	I0819 04:34:31.811873   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 04:34:31.822535   18442 logs.go:276] 2 containers: [da57f69785e6 390cd57e246c]
	I0819 04:34:31.822607   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 04:34:31.832783   18442 logs.go:276] 0 containers: []
	W0819 04:34:31.832795   18442 logs.go:278] No container was found matching "kindnet"
	I0819 04:34:31.832852   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 04:34:31.843184   18442 logs.go:276] 2 containers: [a7ed00b2340f b8b9fabe936d]
	I0819 04:34:31.843202   18442 logs.go:123] Gathering logs for container status ...
	I0819 04:34:31.843207   18442 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 04:34:31.855621   18442 logs.go:123] Gathering logs for kubelet ...
	I0819 04:34:31.855632   18442 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 04:34:31.895345   18442 logs.go:123] Gathering logs for kube-apiserver [3985c1d649a7] ...
	I0819 04:34:31.895355   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3985c1d649a7"
	I0819 04:34:31.934367   18442 logs.go:123] Gathering logs for etcd [f269961c577e] ...
	I0819 04:34:31.934378   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f269961c577e"
	I0819 04:34:31.948209   18442 logs.go:123] Gathering logs for coredns [19b7edbe3a9f] ...
	I0819 04:34:31.948219   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 19b7edbe3a9f"
	I0819 04:34:31.959505   18442 logs.go:123] Gathering logs for kube-proxy [140eb9e8ffdb] ...
	I0819 04:34:31.959519   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 140eb9e8ffdb"
	I0819 04:34:31.971477   18442 logs.go:123] Gathering logs for kube-controller-manager [da57f69785e6] ...
	I0819 04:34:31.971487   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da57f69785e6"
	I0819 04:34:31.992707   18442 logs.go:123] Gathering logs for Docker ...
	I0819 04:34:31.992717   18442 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 04:34:32.018015   18442 logs.go:123] Gathering logs for dmesg ...
	I0819 04:34:32.018027   18442 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 04:34:32.022967   18442 logs.go:123] Gathering logs for kube-scheduler [069748194d02] ...
	I0819 04:34:32.022976   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 069748194d02"
	I0819 04:34:32.038548   18442 logs.go:123] Gathering logs for kube-controller-manager [390cd57e246c] ...
	I0819 04:34:32.038563   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 390cd57e246c"
	I0819 04:34:32.052995   18442 logs.go:123] Gathering logs for kube-apiserver [c8fa750d9da6] ...
	I0819 04:34:32.053006   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c8fa750d9da6"
	I0819 04:34:32.067027   18442 logs.go:123] Gathering logs for storage-provisioner [a7ed00b2340f] ...
	I0819 04:34:32.067037   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7ed00b2340f"
	I0819 04:34:32.078399   18442 logs.go:123] Gathering logs for storage-provisioner [b8b9fabe936d] ...
	I0819 04:34:32.078412   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8b9fabe936d"
	I0819 04:34:32.089557   18442 logs.go:123] Gathering logs for describe nodes ...
	I0819 04:34:32.089568   18442 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 04:34:32.124297   18442 logs.go:123] Gathering logs for etcd [02704fafe517] ...
	I0819 04:34:32.124310   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 02704fafe517"
	I0819 04:34:32.143114   18442 logs.go:123] Gathering logs for kube-scheduler [1b07c6af2b17] ...
	I0819 04:34:32.143127   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b07c6af2b17"
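
The block above is one complete diagnostic sweep by process 18442; the interleaved 17996 lines are a second minikube profile doing the same thing. Each sweep is triggered by the healthz probe timing out ("context deadline exceeded"). The probe itself can be reproduced by hand with curl (a minimal sketch; 10.0.2.15:8443 is the endpoint from the log, and an unauthenticated probe may get 401/403 instead of "ok"):

  # Probe the apiserver health endpoint with a hard deadline, mirroring the
  # Client.Timeout the log reports. -k skips TLS verification.
  curl -sk --max-time 5 https://10.0.2.15:8443/healthz \
    || echo "healthz probe failed or timed out"
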
	I0819 04:34:33.509847   17996 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 04:34:33.509961   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 04:34:33.523746   17996 logs.go:276] 1 containers: [a87ae25c67c2]
	I0819 04:34:33.523822   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 04:34:33.535466   17996 logs.go:276] 1 containers: [f455d8b0d489]
	I0819 04:34:33.535535   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 04:34:33.548809   17996 logs.go:276] 2 containers: [a7da7cc69ccc b382a541c256]
	I0819 04:34:33.548885   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 04:34:33.559179   17996 logs.go:276] 1 containers: [9a7de925ae0b]
	I0819 04:34:33.559248   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 04:34:33.569276   17996 logs.go:276] 1 containers: [8a9318577d16]
	I0819 04:34:33.569345   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 04:34:33.579784   17996 logs.go:276] 1 containers: [9aa4015a0f4e]
	I0819 04:34:33.579852   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 04:34:33.594440   17996 logs.go:276] 0 containers: []
	W0819 04:34:33.594454   17996 logs.go:278] No container was found matching "kindnet"
	I0819 04:34:33.594517   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 04:34:33.605215   17996 logs.go:276] 1 containers: [24ed6391a78f]
	I0819 04:34:33.605230   17996 logs.go:123] Gathering logs for describe nodes ...
	I0819 04:34:33.605236   17996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 04:34:33.641169   17996 logs.go:123] Gathering logs for kube-apiserver [a87ae25c67c2] ...
	I0819 04:34:33.641180   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a87ae25c67c2"
	I0819 04:34:33.655384   17996 logs.go:123] Gathering logs for etcd [f455d8b0d489] ...
	I0819 04:34:33.655396   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f455d8b0d489"
	I0819 04:34:33.669385   17996 logs.go:123] Gathering logs for kube-proxy [8a9318577d16] ...
	I0819 04:34:33.669396   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8a9318577d16"
	I0819 04:34:33.681619   17996 logs.go:123] Gathering logs for container status ...
	I0819 04:34:33.681630   17996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 04:34:33.692713   17996 logs.go:123] Gathering logs for Docker ...
	I0819 04:34:33.692725   17996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 04:34:33.716271   17996 logs.go:123] Gathering logs for kubelet ...
	I0819 04:34:33.716281   17996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 04:34:33.749411   17996 logs.go:123] Gathering logs for dmesg ...
	I0819 04:34:33.749419   17996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 04:34:33.753804   17996 logs.go:123] Gathering logs for coredns [a7da7cc69ccc] ...
	I0819 04:34:33.753810   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7da7cc69ccc"
	I0819 04:34:33.765112   17996 logs.go:123] Gathering logs for coredns [b382a541c256] ...
	I0819 04:34:33.765122   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b382a541c256"
	I0819 04:34:33.777384   17996 logs.go:123] Gathering logs for kube-scheduler [9a7de925ae0b] ...
	I0819 04:34:33.777397   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9a7de925ae0b"
	I0819 04:34:33.794805   17996 logs.go:123] Gathering logs for kube-controller-manager [9aa4015a0f4e] ...
	I0819 04:34:33.794821   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9aa4015a0f4e"
	I0819 04:34:33.814534   17996 logs.go:123] Gathering logs for storage-provisioner [24ed6391a78f] ...
	I0819 04:34:33.814547   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 24ed6391a78f"
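
Each sweep begins by enumerating the kube-system containers with a name filter and then tailing each hit, exactly as the Run lines show. The same enumeration can be done manually inside the VM (a sketch assuming Docker is the container runtime, as it is in this run):

  # List every container (running or exited) for one component, then tail it.
  for id in $(docker ps -a --filter name=k8s_kube-apiserver --format '{{.ID}}'); do
    echo "=== kube-apiserver container $id ==="
    docker logs --tail 400 "$id"
  done
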
	I0819 04:34:34.655407   18442 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 04:34:36.328572   17996 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 04:34:39.655684   18442 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 04:34:39.655877   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 04:34:39.673989   18442 logs.go:276] 2 containers: [c8fa750d9da6 3985c1d649a7]
	I0819 04:34:39.674085   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 04:34:39.688664   18442 logs.go:276] 2 containers: [02704fafe517 f269961c577e]
	I0819 04:34:39.688737   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 04:34:39.700665   18442 logs.go:276] 1 containers: [19b7edbe3a9f]
	I0819 04:34:39.700740   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 04:34:39.711415   18442 logs.go:276] 2 containers: [1b07c6af2b17 069748194d02]
	I0819 04:34:39.711486   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 04:34:39.721216   18442 logs.go:276] 1 containers: [140eb9e8ffdb]
	I0819 04:34:39.721283   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 04:34:39.732286   18442 logs.go:276] 2 containers: [da57f69785e6 390cd57e246c]
	I0819 04:34:39.732352   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 04:34:39.742096   18442 logs.go:276] 0 containers: []
	W0819 04:34:39.742109   18442 logs.go:278] No container was found matching "kindnet"
	I0819 04:34:39.742160   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 04:34:39.752805   18442 logs.go:276] 2 containers: [a7ed00b2340f b8b9fabe936d]
	I0819 04:34:39.752825   18442 logs.go:123] Gathering logs for etcd [f269961c577e] ...
	I0819 04:34:39.752830   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f269961c577e"
	I0819 04:34:39.767375   18442 logs.go:123] Gathering logs for kube-controller-manager [da57f69785e6] ...
	I0819 04:34:39.767384   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da57f69785e6"
	I0819 04:34:39.784829   18442 logs.go:123] Gathering logs for describe nodes ...
	I0819 04:34:39.784838   18442 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 04:34:39.819547   18442 logs.go:123] Gathering logs for etcd [02704fafe517] ...
	I0819 04:34:39.819557   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 02704fafe517"
	I0819 04:34:39.833181   18442 logs.go:123] Gathering logs for kube-scheduler [1b07c6af2b17] ...
	I0819 04:34:39.833197   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b07c6af2b17"
	I0819 04:34:39.844948   18442 logs.go:123] Gathering logs for kube-scheduler [069748194d02] ...
	I0819 04:34:39.844958   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 069748194d02"
	I0819 04:34:39.860445   18442 logs.go:123] Gathering logs for Docker ...
	I0819 04:34:39.860455   18442 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 04:34:39.885724   18442 logs.go:123] Gathering logs for kube-apiserver [c8fa750d9da6] ...
	I0819 04:34:39.885735   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c8fa750d9da6"
	I0819 04:34:39.899752   18442 logs.go:123] Gathering logs for kube-apiserver [3985c1d649a7] ...
	I0819 04:34:39.899762   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3985c1d649a7"
	I0819 04:34:39.938917   18442 logs.go:123] Gathering logs for coredns [19b7edbe3a9f] ...
	I0819 04:34:39.938932   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 19b7edbe3a9f"
	I0819 04:34:39.949776   18442 logs.go:123] Gathering logs for kube-proxy [140eb9e8ffdb] ...
	I0819 04:34:39.949788   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 140eb9e8ffdb"
	I0819 04:34:39.961470   18442 logs.go:123] Gathering logs for storage-provisioner [b8b9fabe936d] ...
	I0819 04:34:39.961480   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8b9fabe936d"
	I0819 04:34:39.972577   18442 logs.go:123] Gathering logs for kubelet ...
	I0819 04:34:39.972589   18442 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 04:34:40.011616   18442 logs.go:123] Gathering logs for dmesg ...
	I0819 04:34:40.011625   18442 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 04:34:40.016844   18442 logs.go:123] Gathering logs for kube-controller-manager [390cd57e246c] ...
	I0819 04:34:40.016852   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 390cd57e246c"
	I0819 04:34:40.034314   18442 logs.go:123] Gathering logs for storage-provisioner [a7ed00b2340f] ...
	I0819 04:34:40.034328   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7ed00b2340f"
	I0819 04:34:40.045593   18442 logs.go:123] Gathering logs for container status ...
	I0819 04:34:40.045604   18442 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
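
The host-level sources in each sweep come from journald and the kernel ring buffer; the commands below are copied from the Run lines above and can be replayed as-is inside the VM (the dmesg flags require util-linux):

  sudo journalctl -u kubelet -n 400               # last 400 kubelet lines
  sudo journalctl -u docker -u cri-docker -n 400  # Docker and cri-dockerd units
  sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
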
	I0819 04:34:42.558485   18442 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 04:34:41.330771   17996 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 04:34:41.330942   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 04:34:41.348599   17996 logs.go:276] 1 containers: [a87ae25c67c2]
	I0819 04:34:41.348693   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 04:34:41.362250   17996 logs.go:276] 1 containers: [f455d8b0d489]
	I0819 04:34:41.362327   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 04:34:41.373265   17996 logs.go:276] 2 containers: [a7da7cc69ccc b382a541c256]
	I0819 04:34:41.373335   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 04:34:41.385269   17996 logs.go:276] 1 containers: [9a7de925ae0b]
	I0819 04:34:41.385334   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 04:34:41.395552   17996 logs.go:276] 1 containers: [8a9318577d16]
	I0819 04:34:41.395632   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 04:34:41.410646   17996 logs.go:276] 1 containers: [9aa4015a0f4e]
	I0819 04:34:41.410709   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 04:34:41.421085   17996 logs.go:276] 0 containers: []
	W0819 04:34:41.421097   17996 logs.go:278] No container was found matching "kindnet"
	I0819 04:34:41.421151   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 04:34:41.431696   17996 logs.go:276] 1 containers: [24ed6391a78f]
	I0819 04:34:41.431714   17996 logs.go:123] Gathering logs for coredns [a7da7cc69ccc] ...
	I0819 04:34:41.431720   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7da7cc69ccc"
	I0819 04:34:41.445922   17996 logs.go:123] Gathering logs for kube-scheduler [9a7de925ae0b] ...
	I0819 04:34:41.445935   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9a7de925ae0b"
	I0819 04:34:41.459954   17996 logs.go:123] Gathering logs for kubelet ...
	I0819 04:34:41.459963   17996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 04:34:41.495702   17996 logs.go:123] Gathering logs for dmesg ...
	I0819 04:34:41.495713   17996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 04:34:41.500433   17996 logs.go:123] Gathering logs for describe nodes ...
	I0819 04:34:41.500441   17996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 04:34:41.536792   17996 logs.go:123] Gathering logs for kube-proxy [8a9318577d16] ...
	I0819 04:34:41.536804   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8a9318577d16"
	I0819 04:34:41.549435   17996 logs.go:123] Gathering logs for kube-controller-manager [9aa4015a0f4e] ...
	I0819 04:34:41.549447   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9aa4015a0f4e"
	I0819 04:34:41.572378   17996 logs.go:123] Gathering logs for storage-provisioner [24ed6391a78f] ...
	I0819 04:34:41.572389   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 24ed6391a78f"
	I0819 04:34:41.590600   17996 logs.go:123] Gathering logs for Docker ...
	I0819 04:34:41.590614   17996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 04:34:41.615633   17996 logs.go:123] Gathering logs for container status ...
	I0819 04:34:41.615645   17996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 04:34:41.627497   17996 logs.go:123] Gathering logs for kube-apiserver [a87ae25c67c2] ...
	I0819 04:34:41.627507   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a87ae25c67c2"
	I0819 04:34:41.641690   17996 logs.go:123] Gathering logs for etcd [f455d8b0d489] ...
	I0819 04:34:41.641703   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f455d8b0d489"
	I0819 04:34:41.655821   17996 logs.go:123] Gathering logs for coredns [b382a541c256] ...
	I0819 04:34:41.655834   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b382a541c256"
	I0819 04:34:44.175608   17996 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 04:34:47.560749   18442 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 04:34:47.560957   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 04:34:47.577548   18442 logs.go:276] 2 containers: [c8fa750d9da6 3985c1d649a7]
	I0819 04:34:47.577637   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 04:34:47.590867   18442 logs.go:276] 2 containers: [02704fafe517 f269961c577e]
	I0819 04:34:47.590946   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 04:34:47.606573   18442 logs.go:276] 1 containers: [19b7edbe3a9f]
	I0819 04:34:47.606646   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 04:34:47.617803   18442 logs.go:276] 2 containers: [1b07c6af2b17 069748194d02]
	I0819 04:34:47.617879   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 04:34:47.629212   18442 logs.go:276] 1 containers: [140eb9e8ffdb]
	I0819 04:34:47.629275   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 04:34:47.639908   18442 logs.go:276] 2 containers: [da57f69785e6 390cd57e246c]
	I0819 04:34:47.639967   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 04:34:47.650490   18442 logs.go:276] 0 containers: []
	W0819 04:34:47.650500   18442 logs.go:278] No container was found matching "kindnet"
	I0819 04:34:47.650551   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 04:34:47.660970   18442 logs.go:276] 2 containers: [a7ed00b2340f b8b9fabe936d]
	I0819 04:34:47.660995   18442 logs.go:123] Gathering logs for describe nodes ...
	I0819 04:34:47.661001   18442 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 04:34:47.695143   18442 logs.go:123] Gathering logs for kube-proxy [140eb9e8ffdb] ...
	I0819 04:34:47.695156   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 140eb9e8ffdb"
	I0819 04:34:47.706782   18442 logs.go:123] Gathering logs for Docker ...
	I0819 04:34:47.706795   18442 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 04:34:47.730158   18442 logs.go:123] Gathering logs for dmesg ...
	I0819 04:34:47.730167   18442 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 04:34:47.734137   18442 logs.go:123] Gathering logs for etcd [02704fafe517] ...
	I0819 04:34:47.734145   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 02704fafe517"
	I0819 04:34:47.748834   18442 logs.go:123] Gathering logs for kube-scheduler [069748194d02] ...
	I0819 04:34:47.748847   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 069748194d02"
	I0819 04:34:47.764020   18442 logs.go:123] Gathering logs for storage-provisioner [b8b9fabe936d] ...
	I0819 04:34:47.764030   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8b9fabe936d"
	I0819 04:34:47.775981   18442 logs.go:123] Gathering logs for kube-apiserver [c8fa750d9da6] ...
	I0819 04:34:47.775992   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c8fa750d9da6"
	I0819 04:34:47.790136   18442 logs.go:123] Gathering logs for etcd [f269961c577e] ...
	I0819 04:34:47.790147   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f269961c577e"
	I0819 04:34:47.809138   18442 logs.go:123] Gathering logs for coredns [19b7edbe3a9f] ...
	I0819 04:34:47.809150   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 19b7edbe3a9f"
	I0819 04:34:47.820140   18442 logs.go:123] Gathering logs for kube-controller-manager [390cd57e246c] ...
	I0819 04:34:47.820151   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 390cd57e246c"
	I0819 04:34:47.833994   18442 logs.go:123] Gathering logs for storage-provisioner [a7ed00b2340f] ...
	I0819 04:34:47.834005   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7ed00b2340f"
	I0819 04:34:47.847028   18442 logs.go:123] Gathering logs for container status ...
	I0819 04:34:47.847040   18442 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 04:34:47.859394   18442 logs.go:123] Gathering logs for kubelet ...
	I0819 04:34:47.859405   18442 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 04:34:47.899042   18442 logs.go:123] Gathering logs for kube-apiserver [3985c1d649a7] ...
	I0819 04:34:47.899051   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3985c1d649a7"
	I0819 04:34:47.937815   18442 logs.go:123] Gathering logs for kube-scheduler [1b07c6af2b17] ...
	I0819 04:34:47.937829   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b07c6af2b17"
	I0819 04:34:47.949828   18442 logs.go:123] Gathering logs for kube-controller-manager [da57f69785e6] ...
	I0819 04:34:47.949839   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da57f69785e6"
	I0819 04:34:49.178099   17996 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 04:34:49.178413   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 04:34:49.216894   17996 logs.go:276] 1 containers: [a87ae25c67c2]
	I0819 04:34:49.216990   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 04:34:49.232003   17996 logs.go:276] 1 containers: [f455d8b0d489]
	I0819 04:34:49.232093   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 04:34:49.244642   17996 logs.go:276] 2 containers: [a7da7cc69ccc b382a541c256]
	I0819 04:34:49.244729   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 04:34:49.256406   17996 logs.go:276] 1 containers: [9a7de925ae0b]
	I0819 04:34:49.256474   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 04:34:49.268688   17996 logs.go:276] 1 containers: [8a9318577d16]
	I0819 04:34:49.268760   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 04:34:49.281051   17996 logs.go:276] 1 containers: [9aa4015a0f4e]
	I0819 04:34:49.281122   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 04:34:49.292076   17996 logs.go:276] 0 containers: []
	W0819 04:34:49.292087   17996 logs.go:278] No container was found matching "kindnet"
	I0819 04:34:49.292141   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 04:34:49.303199   17996 logs.go:276] 1 containers: [24ed6391a78f]
	I0819 04:34:49.303215   17996 logs.go:123] Gathering logs for kubelet ...
	I0819 04:34:49.303221   17996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 04:34:49.336698   17996 logs.go:123] Gathering logs for dmesg ...
	I0819 04:34:49.336708   17996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 04:34:49.341159   17996 logs.go:123] Gathering logs for describe nodes ...
	I0819 04:34:49.341168   17996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 04:34:49.376614   17996 logs.go:123] Gathering logs for kube-apiserver [a87ae25c67c2] ...
	I0819 04:34:49.376631   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a87ae25c67c2"
	I0819 04:34:49.391721   17996 logs.go:123] Gathering logs for coredns [b382a541c256] ...
	I0819 04:34:49.391734   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b382a541c256"
	I0819 04:34:49.403643   17996 logs.go:123] Gathering logs for kube-proxy [8a9318577d16] ...
	I0819 04:34:49.403658   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8a9318577d16"
	I0819 04:34:49.418160   17996 logs.go:123] Gathering logs for Docker ...
	I0819 04:34:49.418174   17996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 04:34:49.443262   17996 logs.go:123] Gathering logs for etcd [f455d8b0d489] ...
	I0819 04:34:49.443277   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f455d8b0d489"
	I0819 04:34:49.457474   17996 logs.go:123] Gathering logs for coredns [a7da7cc69ccc] ...
	I0819 04:34:49.457487   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7da7cc69ccc"
	I0819 04:34:49.468881   17996 logs.go:123] Gathering logs for kube-scheduler [9a7de925ae0b] ...
	I0819 04:34:49.468895   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9a7de925ae0b"
	I0819 04:34:49.483593   17996 logs.go:123] Gathering logs for kube-controller-manager [9aa4015a0f4e] ...
	I0819 04:34:49.483605   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9aa4015a0f4e"
	I0819 04:34:49.502226   17996 logs.go:123] Gathering logs for storage-provisioner [24ed6391a78f] ...
	I0819 04:34:49.502235   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 24ed6391a78f"
	I0819 04:34:49.514675   17996 logs.go:123] Gathering logs for container status ...
	I0819 04:34:49.514688   17996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
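
The remaining two steps, "describe nodes" and "container status", use the kubelet-local kubeconfig and a crictl-or-docker fallback. Both can be replayed verbatim from the log (the $( ) form below is equivalent to the backtick substitution shown in the Run lines):

  sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes \
    --kubeconfig=/var/lib/minikube/kubeconfig
  sudo "$(which crictl || echo crictl)" ps -a || sudo docker ps -a
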
	I0819 04:34:50.469275   18442 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 04:34:52.028879   17996 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 04:34:55.471684   18442 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 04:34:55.471815   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 04:34:55.488517   18442 logs.go:276] 2 containers: [c8fa750d9da6 3985c1d649a7]
	I0819 04:34:55.488597   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 04:34:55.501300   18442 logs.go:276] 2 containers: [02704fafe517 f269961c577e]
	I0819 04:34:55.501380   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 04:34:55.512479   18442 logs.go:276] 1 containers: [19b7edbe3a9f]
	I0819 04:34:55.512553   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 04:34:55.522950   18442 logs.go:276] 2 containers: [1b07c6af2b17 069748194d02]
	I0819 04:34:55.523018   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 04:34:55.533283   18442 logs.go:276] 1 containers: [140eb9e8ffdb]
	I0819 04:34:55.533350   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 04:34:55.543771   18442 logs.go:276] 2 containers: [da57f69785e6 390cd57e246c]
	I0819 04:34:55.543841   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 04:34:55.554479   18442 logs.go:276] 0 containers: []
	W0819 04:34:55.554493   18442 logs.go:278] No container was found matching "kindnet"
	I0819 04:34:55.554553   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 04:34:55.569630   18442 logs.go:276] 2 containers: [a7ed00b2340f b8b9fabe936d]
	I0819 04:34:55.569648   18442 logs.go:123] Gathering logs for kubelet ...
	I0819 04:34:55.569654   18442 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 04:34:55.611513   18442 logs.go:123] Gathering logs for kube-apiserver [c8fa750d9da6] ...
	I0819 04:34:55.611523   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c8fa750d9da6"
	I0819 04:34:55.625326   18442 logs.go:123] Gathering logs for storage-provisioner [b8b9fabe936d] ...
	I0819 04:34:55.625336   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8b9fabe936d"
	I0819 04:34:55.639365   18442 logs.go:123] Gathering logs for describe nodes ...
	I0819 04:34:55.639375   18442 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 04:34:55.676268   18442 logs.go:123] Gathering logs for etcd [02704fafe517] ...
	I0819 04:34:55.676279   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 02704fafe517"
	I0819 04:34:55.690254   18442 logs.go:123] Gathering logs for kube-scheduler [1b07c6af2b17] ...
	I0819 04:34:55.690264   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b07c6af2b17"
	I0819 04:34:55.703894   18442 logs.go:123] Gathering logs for kube-controller-manager [da57f69785e6] ...
	I0819 04:34:55.703904   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da57f69785e6"
	I0819 04:34:55.721541   18442 logs.go:123] Gathering logs for kube-controller-manager [390cd57e246c] ...
	I0819 04:34:55.721551   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 390cd57e246c"
	I0819 04:34:55.736561   18442 logs.go:123] Gathering logs for kube-scheduler [069748194d02] ...
	I0819 04:34:55.736582   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 069748194d02"
	I0819 04:34:55.752060   18442 logs.go:123] Gathering logs for Docker ...
	I0819 04:34:55.752070   18442 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 04:34:55.777112   18442 logs.go:123] Gathering logs for container status ...
	I0819 04:34:55.777123   18442 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 04:34:55.789008   18442 logs.go:123] Gathering logs for dmesg ...
	I0819 04:34:55.789022   18442 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 04:34:55.793032   18442 logs.go:123] Gathering logs for kube-apiserver [3985c1d649a7] ...
	I0819 04:34:55.793041   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3985c1d649a7"
	I0819 04:34:55.831185   18442 logs.go:123] Gathering logs for etcd [f269961c577e] ...
	I0819 04:34:55.831198   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f269961c577e"
	I0819 04:34:55.845460   18442 logs.go:123] Gathering logs for coredns [19b7edbe3a9f] ...
	I0819 04:34:55.845473   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 19b7edbe3a9f"
	I0819 04:34:55.856210   18442 logs.go:123] Gathering logs for kube-proxy [140eb9e8ffdb] ...
	I0819 04:34:55.856220   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 140eb9e8ffdb"
	I0819 04:34:55.871481   18442 logs.go:123] Gathering logs for storage-provisioner [a7ed00b2340f] ...
	I0819 04:34:55.871492   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7ed00b2340f"
	I0819 04:34:58.384436   18442 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 04:34:57.031497   17996 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 04:34:57.031944   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 04:34:57.068725   17996 logs.go:276] 1 containers: [a87ae25c67c2]
	I0819 04:34:57.068872   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 04:34:57.090465   17996 logs.go:276] 1 containers: [f455d8b0d489]
	I0819 04:34:57.090581   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 04:34:57.105607   17996 logs.go:276] 2 containers: [a7da7cc69ccc b382a541c256]
	I0819 04:34:57.105688   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 04:34:57.117939   17996 logs.go:276] 1 containers: [9a7de925ae0b]
	I0819 04:34:57.118003   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 04:34:57.129238   17996 logs.go:276] 1 containers: [8a9318577d16]
	I0819 04:34:57.129311   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 04:34:57.139752   17996 logs.go:276] 1 containers: [9aa4015a0f4e]
	I0819 04:34:57.139820   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 04:34:57.150333   17996 logs.go:276] 0 containers: []
	W0819 04:34:57.150344   17996 logs.go:278] No container was found matching "kindnet"
	I0819 04:34:57.150405   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 04:34:57.161340   17996 logs.go:276] 1 containers: [24ed6391a78f]
	I0819 04:34:57.161357   17996 logs.go:123] Gathering logs for dmesg ...
	I0819 04:34:57.161363   17996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 04:34:57.165803   17996 logs.go:123] Gathering logs for etcd [f455d8b0d489] ...
	I0819 04:34:57.165811   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f455d8b0d489"
	I0819 04:34:57.185870   17996 logs.go:123] Gathering logs for coredns [a7da7cc69ccc] ...
	I0819 04:34:57.185883   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7da7cc69ccc"
	I0819 04:34:57.198437   17996 logs.go:123] Gathering logs for kube-scheduler [9a7de925ae0b] ...
	I0819 04:34:57.198448   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9a7de925ae0b"
	I0819 04:34:57.213526   17996 logs.go:123] Gathering logs for kubelet ...
	I0819 04:34:57.213542   17996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 04:34:57.246586   17996 logs.go:123] Gathering logs for describe nodes ...
	I0819 04:34:57.246594   17996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 04:34:57.288769   17996 logs.go:123] Gathering logs for kube-apiserver [a87ae25c67c2] ...
	I0819 04:34:57.288787   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a87ae25c67c2"
	I0819 04:34:57.302692   17996 logs.go:123] Gathering logs for coredns [b382a541c256] ...
	I0819 04:34:57.302707   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b382a541c256"
	I0819 04:34:57.314352   17996 logs.go:123] Gathering logs for kube-proxy [8a9318577d16] ...
	I0819 04:34:57.314363   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8a9318577d16"
	I0819 04:34:57.326787   17996 logs.go:123] Gathering logs for kube-controller-manager [9aa4015a0f4e] ...
	I0819 04:34:57.326799   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9aa4015a0f4e"
	I0819 04:34:57.344266   17996 logs.go:123] Gathering logs for storage-provisioner [24ed6391a78f] ...
	I0819 04:34:57.344278   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 24ed6391a78f"
	I0819 04:34:57.355623   17996 logs.go:123] Gathering logs for Docker ...
	I0819 04:34:57.355634   17996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 04:34:57.380607   17996 logs.go:123] Gathering logs for container status ...
	I0819 04:34:57.380616   17996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 04:34:59.894219   17996 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 04:35:03.386640   18442 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 04:35:03.386783   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 04:35:03.403575   18442 logs.go:276] 2 containers: [c8fa750d9da6 3985c1d649a7]
	I0819 04:35:03.403653   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 04:35:03.419259   18442 logs.go:276] 2 containers: [02704fafe517 f269961c577e]
	I0819 04:35:03.419326   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 04:35:03.430348   18442 logs.go:276] 1 containers: [19b7edbe3a9f]
	I0819 04:35:03.430424   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 04:35:03.442142   18442 logs.go:276] 2 containers: [1b07c6af2b17 069748194d02]
	I0819 04:35:03.442224   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 04:35:03.453047   18442 logs.go:276] 1 containers: [140eb9e8ffdb]
	I0819 04:35:03.453121   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 04:35:03.463549   18442 logs.go:276] 2 containers: [da57f69785e6 390cd57e246c]
	I0819 04:35:03.463621   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 04:35:03.474093   18442 logs.go:276] 0 containers: []
	W0819 04:35:03.474105   18442 logs.go:278] No container was found matching "kindnet"
	I0819 04:35:03.474159   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 04:35:03.484690   18442 logs.go:276] 2 containers: [a7ed00b2340f b8b9fabe936d]
	I0819 04:35:03.484709   18442 logs.go:123] Gathering logs for describe nodes ...
	I0819 04:35:03.484715   18442 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 04:35:03.520753   18442 logs.go:123] Gathering logs for etcd [f269961c577e] ...
	I0819 04:35:03.520766   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f269961c577e"
	I0819 04:35:03.540239   18442 logs.go:123] Gathering logs for kube-scheduler [069748194d02] ...
	I0819 04:35:03.540251   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 069748194d02"
	I0819 04:35:03.555293   18442 logs.go:123] Gathering logs for Docker ...
	I0819 04:35:03.555303   18442 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 04:35:03.580655   18442 logs.go:123] Gathering logs for kube-apiserver [c8fa750d9da6] ...
	I0819 04:35:03.580666   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c8fa750d9da6"
	I0819 04:35:03.595111   18442 logs.go:123] Gathering logs for etcd [02704fafe517] ...
	I0819 04:35:03.595124   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 02704fafe517"
	I0819 04:35:03.609681   18442 logs.go:123] Gathering logs for coredns [19b7edbe3a9f] ...
	I0819 04:35:03.609691   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 19b7edbe3a9f"
	I0819 04:35:03.620873   18442 logs.go:123] Gathering logs for kube-proxy [140eb9e8ffdb] ...
	I0819 04:35:03.620884   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 140eb9e8ffdb"
	I0819 04:35:03.632546   18442 logs.go:123] Gathering logs for storage-provisioner [a7ed00b2340f] ...
	I0819 04:35:03.632557   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7ed00b2340f"
	I0819 04:35:03.644283   18442 logs.go:123] Gathering logs for dmesg ...
	I0819 04:35:03.644296   18442 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 04:35:03.648613   18442 logs.go:123] Gathering logs for kube-apiserver [3985c1d649a7] ...
	I0819 04:35:03.648622   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3985c1d649a7"
	I0819 04:35:03.687904   18442 logs.go:123] Gathering logs for kube-controller-manager [da57f69785e6] ...
	I0819 04:35:03.687915   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da57f69785e6"
	I0819 04:35:03.705455   18442 logs.go:123] Gathering logs for storage-provisioner [b8b9fabe936d] ...
	I0819 04:35:03.705465   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8b9fabe936d"
	I0819 04:35:03.716783   18442 logs.go:123] Gathering logs for container status ...
	I0819 04:35:03.716793   18442 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 04:35:03.728984   18442 logs.go:123] Gathering logs for kubelet ...
	I0819 04:35:03.728997   18442 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 04:35:04.894664   17996 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 04:35:04.894907   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 04:35:04.918671   17996 logs.go:276] 1 containers: [a87ae25c67c2]
	I0819 04:35:04.918774   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 04:35:04.936050   17996 logs.go:276] 1 containers: [f455d8b0d489]
	I0819 04:35:04.936126   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 04:35:04.952848   17996 logs.go:276] 2 containers: [a7da7cc69ccc b382a541c256]
	I0819 04:35:04.952942   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 04:35:04.963660   17996 logs.go:276] 1 containers: [9a7de925ae0b]
	I0819 04:35:04.963729   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 04:35:04.974089   17996 logs.go:276] 1 containers: [8a9318577d16]
	I0819 04:35:04.974158   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 04:35:04.984546   17996 logs.go:276] 1 containers: [9aa4015a0f4e]
	I0819 04:35:04.984616   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 04:35:04.998715   17996 logs.go:276] 0 containers: []
	W0819 04:35:04.998726   17996 logs.go:278] No container was found matching "kindnet"
	I0819 04:35:04.998780   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 04:35:05.009317   17996 logs.go:276] 1 containers: [24ed6391a78f]
	I0819 04:35:05.009332   17996 logs.go:123] Gathering logs for coredns [a7da7cc69ccc] ...
	I0819 04:35:05.009340   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7da7cc69ccc"
	I0819 04:35:05.021045   17996 logs.go:123] Gathering logs for coredns [b382a541c256] ...
	I0819 04:35:05.021056   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b382a541c256"
	I0819 04:35:05.033067   17996 logs.go:123] Gathering logs for kube-proxy [8a9318577d16] ...
	I0819 04:35:05.033080   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8a9318577d16"
	I0819 04:35:05.045364   17996 logs.go:123] Gathering logs for storage-provisioner [24ed6391a78f] ...
	I0819 04:35:05.045377   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 24ed6391a78f"
	I0819 04:35:05.062312   17996 logs.go:123] Gathering logs for Docker ...
	I0819 04:35:05.062322   17996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 04:35:05.087518   17996 logs.go:123] Gathering logs for container status ...
	I0819 04:35:05.087530   17996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 04:35:05.099281   17996 logs.go:123] Gathering logs for describe nodes ...
	I0819 04:35:05.099294   17996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 04:35:05.143019   17996 logs.go:123] Gathering logs for dmesg ...
	I0819 04:35:05.143033   17996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 04:35:05.148111   17996 logs.go:123] Gathering logs for kube-apiserver [a87ae25c67c2] ...
	I0819 04:35:05.148119   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a87ae25c67c2"
	I0819 04:35:05.162834   17996 logs.go:123] Gathering logs for etcd [f455d8b0d489] ...
	I0819 04:35:05.162848   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f455d8b0d489"
	I0819 04:35:05.178677   17996 logs.go:123] Gathering logs for kube-scheduler [9a7de925ae0b] ...
	I0819 04:35:05.178688   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9a7de925ae0b"
	I0819 04:35:05.197716   17996 logs.go:123] Gathering logs for kube-controller-manager [9aa4015a0f4e] ...
	I0819 04:35:05.197727   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9aa4015a0f4e"
	I0819 04:35:05.215211   17996 logs.go:123] Gathering logs for kubelet ...
	I0819 04:35:05.215224   17996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 04:35:03.769323   18442 logs.go:123] Gathering logs for kube-scheduler [1b07c6af2b17] ...
	I0819 04:35:03.769336   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b07c6af2b17"
	I0819 04:35:03.781313   18442 logs.go:123] Gathering logs for kube-controller-manager [390cd57e246c] ...
	I0819 04:35:03.781322   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 390cd57e246c"
	I0819 04:35:06.297846   18442 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 04:35:07.750856   17996 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 04:35:11.298240   18442 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 04:35:11.298407   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 04:35:11.315941   18442 logs.go:276] 2 containers: [c8fa750d9da6 3985c1d649a7]
	I0819 04:35:11.316037   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 04:35:11.329774   18442 logs.go:276] 2 containers: [02704fafe517 f269961c577e]
	I0819 04:35:11.329845   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 04:35:11.340745   18442 logs.go:276] 1 containers: [19b7edbe3a9f]
	I0819 04:35:11.340809   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 04:35:11.351124   18442 logs.go:276] 2 containers: [1b07c6af2b17 069748194d02]
	I0819 04:35:11.351190   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 04:35:11.361978   18442 logs.go:276] 1 containers: [140eb9e8ffdb]
	I0819 04:35:11.362050   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 04:35:11.372315   18442 logs.go:276] 2 containers: [da57f69785e6 390cd57e246c]
	I0819 04:35:11.372378   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 04:35:11.383279   18442 logs.go:276] 0 containers: []
	W0819 04:35:11.383292   18442 logs.go:278] No container was found matching "kindnet"
	I0819 04:35:11.383352   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 04:35:11.394062   18442 logs.go:276] 2 containers: [a7ed00b2340f b8b9fabe936d]
	I0819 04:35:11.394079   18442 logs.go:123] Gathering logs for kubelet ...
	I0819 04:35:11.394083   18442 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 04:35:11.431692   18442 logs.go:123] Gathering logs for dmesg ...
	I0819 04:35:11.431703   18442 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 04:35:11.436429   18442 logs.go:123] Gathering logs for kube-proxy [140eb9e8ffdb] ...
	I0819 04:35:11.436436   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 140eb9e8ffdb"
	I0819 04:35:11.449174   18442 logs.go:123] Gathering logs for Docker ...
	I0819 04:35:11.449184   18442 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 04:35:11.473333   18442 logs.go:123] Gathering logs for kube-scheduler [069748194d02] ...
	I0819 04:35:11.473342   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 069748194d02"
	I0819 04:35:11.491002   18442 logs.go:123] Gathering logs for kube-controller-manager [da57f69785e6] ...
	I0819 04:35:11.491016   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da57f69785e6"
	I0819 04:35:11.508688   18442 logs.go:123] Gathering logs for kube-controller-manager [390cd57e246c] ...
	I0819 04:35:11.508698   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 390cd57e246c"
	I0819 04:35:11.523049   18442 logs.go:123] Gathering logs for describe nodes ...
	I0819 04:35:11.523061   18442 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 04:35:11.559359   18442 logs.go:123] Gathering logs for kube-apiserver [c8fa750d9da6] ...
	I0819 04:35:11.559370   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c8fa750d9da6"
	I0819 04:35:11.581347   18442 logs.go:123] Gathering logs for kube-apiserver [3985c1d649a7] ...
	I0819 04:35:11.581357   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3985c1d649a7"
	I0819 04:35:11.619761   18442 logs.go:123] Gathering logs for etcd [f269961c577e] ...
	I0819 04:35:11.619776   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f269961c577e"
	I0819 04:35:11.634493   18442 logs.go:123] Gathering logs for kube-scheduler [1b07c6af2b17] ...
	I0819 04:35:11.634503   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b07c6af2b17"
	I0819 04:35:11.647260   18442 logs.go:123] Gathering logs for storage-provisioner [b8b9fabe936d] ...
	I0819 04:35:11.647271   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8b9fabe936d"
	I0819 04:35:11.658209   18442 logs.go:123] Gathering logs for container status ...
	I0819 04:35:11.658223   18442 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 04:35:11.670047   18442 logs.go:123] Gathering logs for etcd [02704fafe517] ...
	I0819 04:35:11.670057   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 02704fafe517"
	I0819 04:35:11.683942   18442 logs.go:123] Gathering logs for coredns [19b7edbe3a9f] ...
	I0819 04:35:11.683952   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 19b7edbe3a9f"
	I0819 04:35:11.695508   18442 logs.go:123] Gathering logs for storage-provisioner [a7ed00b2340f] ...
	I0819 04:35:11.695520   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7ed00b2340f"
	I0819 04:35:12.751948   17996 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 04:35:12.752064   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 04:35:12.765697   17996 logs.go:276] 1 containers: [a87ae25c67c2]
	I0819 04:35:12.765763   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 04:35:12.777846   17996 logs.go:276] 1 containers: [f455d8b0d489]
	I0819 04:35:12.777920   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 04:35:12.788299   17996 logs.go:276] 2 containers: [a7da7cc69ccc b382a541c256]
	I0819 04:35:12.788378   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 04:35:12.799496   17996 logs.go:276] 1 containers: [9a7de925ae0b]
	I0819 04:35:12.799571   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 04:35:12.809932   17996 logs.go:276] 1 containers: [8a9318577d16]
	I0819 04:35:12.810005   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 04:35:12.822472   17996 logs.go:276] 1 containers: [9aa4015a0f4e]
	I0819 04:35:12.822541   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 04:35:12.832638   17996 logs.go:276] 0 containers: []
	W0819 04:35:12.832649   17996 logs.go:278] No container was found matching "kindnet"
	I0819 04:35:12.832706   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 04:35:12.843352   17996 logs.go:276] 1 containers: [24ed6391a78f]
	I0819 04:35:12.843367   17996 logs.go:123] Gathering logs for storage-provisioner [24ed6391a78f] ...
	I0819 04:35:12.843372   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 24ed6391a78f"
	I0819 04:35:12.855625   17996 logs.go:123] Gathering logs for Docker ...
	I0819 04:35:12.855636   17996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 04:35:12.880613   17996 logs.go:123] Gathering logs for container status ...
	I0819 04:35:12.880621   17996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 04:35:12.892064   17996 logs.go:123] Gathering logs for kubelet ...
	I0819 04:35:12.892076   17996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 04:35:12.927540   17996 logs.go:123] Gathering logs for dmesg ...
	I0819 04:35:12.927548   17996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 04:35:12.932652   17996 logs.go:123] Gathering logs for kube-apiserver [a87ae25c67c2] ...
	I0819 04:35:12.932659   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a87ae25c67c2"
	I0819 04:35:12.947525   17996 logs.go:123] Gathering logs for etcd [f455d8b0d489] ...
	I0819 04:35:12.947539   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f455d8b0d489"
	I0819 04:35:12.961473   17996 logs.go:123] Gathering logs for coredns [b382a541c256] ...
	I0819 04:35:12.961486   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b382a541c256"
	I0819 04:35:12.973387   17996 logs.go:123] Gathering logs for describe nodes ...
	I0819 04:35:12.973401   17996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 04:35:13.008103   17996 logs.go:123] Gathering logs for coredns [a7da7cc69ccc] ...
	I0819 04:35:13.008118   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7da7cc69ccc"
	I0819 04:35:13.019843   17996 logs.go:123] Gathering logs for kube-scheduler [9a7de925ae0b] ...
	I0819 04:35:13.019855   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9a7de925ae0b"
	I0819 04:35:13.034541   17996 logs.go:123] Gathering logs for kube-proxy [8a9318577d16] ...
	I0819 04:35:13.034554   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8a9318577d16"
	I0819 04:35:13.046182   17996 logs.go:123] Gathering logs for kube-controller-manager [9aa4015a0f4e] ...
	I0819 04:35:13.046192   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9aa4015a0f4e"
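
Each retry cycle in this trace has the same shape: probe the apiserver's /healthz endpoint, enumerate the control-plane containers with docker ps name filters, then tail each container's logs. The cycle can be reproduced by hand with the same commands the log records; a minimal bash sketch, assuming shell access to the guest (for example via minikube ssh) and the apiserver address 10.0.2.15:8443 seen above:

    #!/bin/bash
    # Reproduce one diagnostic cycle from this trace by hand.
    # Assumption: run inside the minikube guest (e.g. via `minikube ssh`).
    HOST=https://10.0.2.15:8443

    # 1. The health probe that api_server.go keeps retrying.
    curl -ksS --max-time 5 "$HOST/healthz"; echo

    # 2. Enumerate control-plane containers the same way logs.go does,
    #    then tail each one (mirrors `docker logs --tail 400 <id>`).
    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
             kube-controller-manager storage-provisioner; do
      for id in $(docker ps -a --filter="name=k8s_${c}" --format='{{.ID}}'); do
        echo "== ${c} [${id}] =="
        docker logs --tail 400 "$id"
      done
    done
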
	I0819 04:35:14.210357   18442 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 04:35:15.565769   17996 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 04:35:19.212651   18442 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 04:35:19.212929   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 04:35:19.238359   18442 logs.go:276] 2 containers: [c8fa750d9da6 3985c1d649a7]
	I0819 04:35:19.238471   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 04:35:19.260197   18442 logs.go:276] 2 containers: [02704fafe517 f269961c577e]
	I0819 04:35:19.260300   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 04:35:19.272311   18442 logs.go:276] 1 containers: [19b7edbe3a9f]
	I0819 04:35:19.272380   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 04:35:19.283833   18442 logs.go:276] 2 containers: [1b07c6af2b17 069748194d02]
	I0819 04:35:19.283903   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 04:35:19.297832   18442 logs.go:276] 1 containers: [140eb9e8ffdb]
	I0819 04:35:19.297896   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 04:35:19.308193   18442 logs.go:276] 2 containers: [da57f69785e6 390cd57e246c]
	I0819 04:35:19.308266   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 04:35:19.318281   18442 logs.go:276] 0 containers: []
	W0819 04:35:19.318293   18442 logs.go:278] No container was found matching "kindnet"
	I0819 04:35:19.318353   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 04:35:19.333629   18442 logs.go:276] 2 containers: [a7ed00b2340f b8b9fabe936d]
	I0819 04:35:19.333651   18442 logs.go:123] Gathering logs for kubelet ...
	I0819 04:35:19.333657   18442 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 04:35:19.371102   18442 logs.go:123] Gathering logs for describe nodes ...
	I0819 04:35:19.371117   18442 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 04:35:19.406698   18442 logs.go:123] Gathering logs for kube-apiserver [3985c1d649a7] ...
	I0819 04:35:19.406712   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3985c1d649a7"
	I0819 04:35:19.444716   18442 logs.go:123] Gathering logs for coredns [19b7edbe3a9f] ...
	I0819 04:35:19.444734   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 19b7edbe3a9f"
	I0819 04:35:19.456664   18442 logs.go:123] Gathering logs for kube-scheduler [1b07c6af2b17] ...
	I0819 04:35:19.456676   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b07c6af2b17"
	I0819 04:35:19.469183   18442 logs.go:123] Gathering logs for kube-scheduler [069748194d02] ...
	I0819 04:35:19.469193   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 069748194d02"
	I0819 04:35:19.486088   18442 logs.go:123] Gathering logs for dmesg ...
	I0819 04:35:19.486098   18442 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 04:35:19.490681   18442 logs.go:123] Gathering logs for etcd [f269961c577e] ...
	I0819 04:35:19.490690   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f269961c577e"
	I0819 04:35:19.505406   18442 logs.go:123] Gathering logs for kube-proxy [140eb9e8ffdb] ...
	I0819 04:35:19.505415   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 140eb9e8ffdb"
	I0819 04:35:19.516955   18442 logs.go:123] Gathering logs for kube-controller-manager [da57f69785e6] ...
	I0819 04:35:19.516967   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da57f69785e6"
	I0819 04:35:19.533991   18442 logs.go:123] Gathering logs for kube-controller-manager [390cd57e246c] ...
	I0819 04:35:19.534004   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 390cd57e246c"
	I0819 04:35:19.548532   18442 logs.go:123] Gathering logs for storage-provisioner [a7ed00b2340f] ...
	I0819 04:35:19.548545   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7ed00b2340f"
	I0819 04:35:19.560063   18442 logs.go:123] Gathering logs for storage-provisioner [b8b9fabe936d] ...
	I0819 04:35:19.560076   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8b9fabe936d"
	I0819 04:35:19.571467   18442 logs.go:123] Gathering logs for kube-apiserver [c8fa750d9da6] ...
	I0819 04:35:19.571481   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c8fa750d9da6"
	I0819 04:35:19.587714   18442 logs.go:123] Gathering logs for etcd [02704fafe517] ...
	I0819 04:35:19.587724   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 02704fafe517"
	I0819 04:35:19.601821   18442 logs.go:123] Gathering logs for Docker ...
	I0819 04:35:19.601833   18442 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 04:35:19.625957   18442 logs.go:123] Gathering logs for container status ...
	I0819 04:35:19.625969   18442 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 04:35:22.141359   18442 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 04:35:20.566216   17996 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 04:35:20.566448   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 04:35:20.595653   17996 logs.go:276] 1 containers: [a87ae25c67c2]
	I0819 04:35:20.595764   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 04:35:20.613319   17996 logs.go:276] 1 containers: [f455d8b0d489]
	I0819 04:35:20.613401   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 04:35:20.628309   17996 logs.go:276] 2 containers: [a7da7cc69ccc b382a541c256]
	I0819 04:35:20.628386   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 04:35:20.640230   17996 logs.go:276] 1 containers: [9a7de925ae0b]
	I0819 04:35:20.640302   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 04:35:20.654711   17996 logs.go:276] 1 containers: [8a9318577d16]
	I0819 04:35:20.654782   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 04:35:20.670051   17996 logs.go:276] 1 containers: [9aa4015a0f4e]
	I0819 04:35:20.670119   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 04:35:20.680403   17996 logs.go:276] 0 containers: []
	W0819 04:35:20.680415   17996 logs.go:278] No container was found matching "kindnet"
	I0819 04:35:20.680474   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 04:35:20.691458   17996 logs.go:276] 1 containers: [24ed6391a78f]
	I0819 04:35:20.691475   17996 logs.go:123] Gathering logs for kubelet ...
	I0819 04:35:20.691481   17996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 04:35:20.726415   17996 logs.go:123] Gathering logs for describe nodes ...
	I0819 04:35:20.726426   17996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 04:35:20.762103   17996 logs.go:123] Gathering logs for etcd [f455d8b0d489] ...
	I0819 04:35:20.762116   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f455d8b0d489"
	I0819 04:35:20.776583   17996 logs.go:123] Gathering logs for coredns [a7da7cc69ccc] ...
	I0819 04:35:20.776596   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7da7cc69ccc"
	I0819 04:35:20.788856   17996 logs.go:123] Gathering logs for coredns [b382a541c256] ...
	I0819 04:35:20.788866   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b382a541c256"
	I0819 04:35:20.801993   17996 logs.go:123] Gathering logs for kube-scheduler [9a7de925ae0b] ...
	I0819 04:35:20.802005   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9a7de925ae0b"
	I0819 04:35:20.816908   17996 logs.go:123] Gathering logs for kube-proxy [8a9318577d16] ...
	I0819 04:35:20.816919   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8a9318577d16"
	I0819 04:35:20.830312   17996 logs.go:123] Gathering logs for Docker ...
	I0819 04:35:20.830325   17996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 04:35:20.855408   17996 logs.go:123] Gathering logs for container status ...
	I0819 04:35:20.855415   17996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 04:35:20.867113   17996 logs.go:123] Gathering logs for dmesg ...
	I0819 04:35:20.867124   17996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 04:35:20.872094   17996 logs.go:123] Gathering logs for kube-apiserver [a87ae25c67c2] ...
	I0819 04:35:20.872100   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a87ae25c67c2"
	I0819 04:35:20.886421   17996 logs.go:123] Gathering logs for kube-controller-manager [9aa4015a0f4e] ...
	I0819 04:35:20.886431   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9aa4015a0f4e"
	I0819 04:35:20.906328   17996 logs.go:123] Gathering logs for storage-provisioner [24ed6391a78f] ...
	I0819 04:35:20.906343   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 24ed6391a78f"
	I0819 04:35:23.420194   17996 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 04:35:27.143607   18442 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 04:35:27.143997   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 04:35:27.181552   18442 logs.go:276] 2 containers: [c8fa750d9da6 3985c1d649a7]
	I0819 04:35:27.181662   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 04:35:27.201160   18442 logs.go:276] 2 containers: [02704fafe517 f269961c577e]
	I0819 04:35:27.201247   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 04:35:27.214349   18442 logs.go:276] 1 containers: [19b7edbe3a9f]
	I0819 04:35:27.214418   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 04:35:27.226533   18442 logs.go:276] 2 containers: [1b07c6af2b17 069748194d02]
	I0819 04:35:27.226608   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 04:35:27.239328   18442 logs.go:276] 1 containers: [140eb9e8ffdb]
	I0819 04:35:27.239393   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 04:35:27.255257   18442 logs.go:276] 2 containers: [da57f69785e6 390cd57e246c]
	I0819 04:35:27.255330   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 04:35:27.265149   18442 logs.go:276] 0 containers: []
	W0819 04:35:27.265160   18442 logs.go:278] No container was found matching "kindnet"
	I0819 04:35:27.265219   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 04:35:27.276243   18442 logs.go:276] 2 containers: [a7ed00b2340f b8b9fabe936d]
	I0819 04:35:27.276260   18442 logs.go:123] Gathering logs for Docker ...
	I0819 04:35:27.276265   18442 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 04:35:27.299393   18442 logs.go:123] Gathering logs for dmesg ...
	I0819 04:35:27.299402   18442 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 04:35:27.303911   18442 logs.go:123] Gathering logs for kube-scheduler [069748194d02] ...
	I0819 04:35:27.303918   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 069748194d02"
	I0819 04:35:27.319487   18442 logs.go:123] Gathering logs for kube-controller-manager [da57f69785e6] ...
	I0819 04:35:27.319497   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da57f69785e6"
	I0819 04:35:27.336707   18442 logs.go:123] Gathering logs for storage-provisioner [a7ed00b2340f] ...
	I0819 04:35:27.336716   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7ed00b2340f"
	I0819 04:35:27.349570   18442 logs.go:123] Gathering logs for storage-provisioner [b8b9fabe936d] ...
	I0819 04:35:27.349579   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8b9fabe936d"
	I0819 04:35:27.361048   18442 logs.go:123] Gathering logs for container status ...
	I0819 04:35:27.361058   18442 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 04:35:27.373323   18442 logs.go:123] Gathering logs for coredns [19b7edbe3a9f] ...
	I0819 04:35:27.373333   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 19b7edbe3a9f"
	I0819 04:35:27.385185   18442 logs.go:123] Gathering logs for kube-proxy [140eb9e8ffdb] ...
	I0819 04:35:27.385198   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 140eb9e8ffdb"
	I0819 04:35:27.397134   18442 logs.go:123] Gathering logs for describe nodes ...
	I0819 04:35:27.397146   18442 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 04:35:27.433199   18442 logs.go:123] Gathering logs for kube-apiserver [3985c1d649a7] ...
	I0819 04:35:27.433209   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3985c1d649a7"
	I0819 04:35:27.471924   18442 logs.go:123] Gathering logs for etcd [02704fafe517] ...
	I0819 04:35:27.471935   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 02704fafe517"
	I0819 04:35:27.486484   18442 logs.go:123] Gathering logs for kube-scheduler [1b07c6af2b17] ...
	I0819 04:35:27.486493   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b07c6af2b17"
	I0819 04:35:27.497819   18442 logs.go:123] Gathering logs for kube-controller-manager [390cd57e246c] ...
	I0819 04:35:27.497831   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 390cd57e246c"
	I0819 04:35:27.512676   18442 logs.go:123] Gathering logs for kubelet ...
	I0819 04:35:27.512689   18442 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 04:35:27.551755   18442 logs.go:123] Gathering logs for kube-apiserver [c8fa750d9da6] ...
	I0819 04:35:27.551766   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c8fa750d9da6"
	I0819 04:35:27.566211   18442 logs.go:123] Gathering logs for etcd [f269961c577e] ...
	I0819 04:35:27.566223   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f269961c577e"
	I0819 04:35:28.422375   17996 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 04:35:28.422545   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 04:35:28.440511   17996 logs.go:276] 1 containers: [a87ae25c67c2]
	I0819 04:35:28.440600   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 04:35:28.454004   17996 logs.go:276] 1 containers: [f455d8b0d489]
	I0819 04:35:28.454082   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 04:35:28.464980   17996 logs.go:276] 2 containers: [a7da7cc69ccc b382a541c256]
	I0819 04:35:28.465047   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 04:35:28.475380   17996 logs.go:276] 1 containers: [9a7de925ae0b]
	I0819 04:35:28.475451   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 04:35:28.485756   17996 logs.go:276] 1 containers: [8a9318577d16]
	I0819 04:35:28.485825   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 04:35:28.496340   17996 logs.go:276] 1 containers: [9aa4015a0f4e]
	I0819 04:35:28.496417   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 04:35:28.512180   17996 logs.go:276] 0 containers: []
	W0819 04:35:28.512193   17996 logs.go:278] No container was found matching "kindnet"
	I0819 04:35:28.512256   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 04:35:28.523007   17996 logs.go:276] 1 containers: [24ed6391a78f]
	I0819 04:35:28.523022   17996 logs.go:123] Gathering logs for kubelet ...
	I0819 04:35:28.523027   17996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 04:35:28.557966   17996 logs.go:123] Gathering logs for dmesg ...
	I0819 04:35:28.557979   17996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 04:35:28.563407   17996 logs.go:123] Gathering logs for describe nodes ...
	I0819 04:35:28.563416   17996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 04:35:28.598880   17996 logs.go:123] Gathering logs for kube-apiserver [a87ae25c67c2] ...
	I0819 04:35:28.598890   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a87ae25c67c2"
	I0819 04:35:28.613313   17996 logs.go:123] Gathering logs for kube-scheduler [9a7de925ae0b] ...
	I0819 04:35:28.613326   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9a7de925ae0b"
	I0819 04:35:28.628465   17996 logs.go:123] Gathering logs for kube-proxy [8a9318577d16] ...
	I0819 04:35:28.628478   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8a9318577d16"
	I0819 04:35:28.640414   17996 logs.go:123] Gathering logs for Docker ...
	I0819 04:35:28.640428   17996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 04:35:28.663847   17996 logs.go:123] Gathering logs for etcd [f455d8b0d489] ...
	I0819 04:35:28.663859   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f455d8b0d489"
	I0819 04:35:28.677673   17996 logs.go:123] Gathering logs for coredns [a7da7cc69ccc] ...
	I0819 04:35:28.677684   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7da7cc69ccc"
	I0819 04:35:28.689240   17996 logs.go:123] Gathering logs for coredns [b382a541c256] ...
	I0819 04:35:28.689251   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b382a541c256"
	I0819 04:35:28.700783   17996 logs.go:123] Gathering logs for kube-controller-manager [9aa4015a0f4e] ...
	I0819 04:35:28.700795   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9aa4015a0f4e"
	I0819 04:35:28.718430   17996 logs.go:123] Gathering logs for storage-provisioner [24ed6391a78f] ...
	I0819 04:35:28.718439   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 24ed6391a78f"
	I0819 04:35:28.729809   17996 logs.go:123] Gathering logs for container status ...
	I0819 04:35:28.729822   17996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 04:35:30.084610   18442 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 04:35:31.243122   17996 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 04:35:35.086846   18442 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 04:35:35.087030   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 04:35:35.106085   18442 logs.go:276] 2 containers: [c8fa750d9da6 3985c1d649a7]
	I0819 04:35:35.106183   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 04:35:35.125365   18442 logs.go:276] 2 containers: [02704fafe517 f269961c577e]
	I0819 04:35:35.125439   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 04:35:35.136536   18442 logs.go:276] 1 containers: [19b7edbe3a9f]
	I0819 04:35:35.136610   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 04:35:35.148428   18442 logs.go:276] 2 containers: [1b07c6af2b17 069748194d02]
	I0819 04:35:35.148503   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 04:35:35.161947   18442 logs.go:276] 1 containers: [140eb9e8ffdb]
	I0819 04:35:35.162014   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 04:35:35.172461   18442 logs.go:276] 2 containers: [da57f69785e6 390cd57e246c]
	I0819 04:35:35.172540   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 04:35:35.183237   18442 logs.go:276] 0 containers: []
	W0819 04:35:35.183248   18442 logs.go:278] No container was found matching "kindnet"
	I0819 04:35:35.183307   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 04:35:35.197723   18442 logs.go:276] 2 containers: [a7ed00b2340f b8b9fabe936d]
	I0819 04:35:35.197744   18442 logs.go:123] Gathering logs for etcd [02704fafe517] ...
	I0819 04:35:35.197750   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 02704fafe517"
	I0819 04:35:35.216259   18442 logs.go:123] Gathering logs for kube-controller-manager [390cd57e246c] ...
	I0819 04:35:35.216270   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 390cd57e246c"
	I0819 04:35:35.230912   18442 logs.go:123] Gathering logs for storage-provisioner [b8b9fabe936d] ...
	I0819 04:35:35.230922   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8b9fabe936d"
	I0819 04:35:35.242515   18442 logs.go:123] Gathering logs for Docker ...
	I0819 04:35:35.242526   18442 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 04:35:35.265868   18442 logs.go:123] Gathering logs for container status ...
	I0819 04:35:35.265882   18442 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 04:35:35.278371   18442 logs.go:123] Gathering logs for kube-proxy [140eb9e8ffdb] ...
	I0819 04:35:35.278384   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 140eb9e8ffdb"
	I0819 04:35:35.290519   18442 logs.go:123] Gathering logs for kube-controller-manager [da57f69785e6] ...
	I0819 04:35:35.290531   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da57f69785e6"
	I0819 04:35:35.307948   18442 logs.go:123] Gathering logs for kubelet ...
	I0819 04:35:35.307958   18442 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 04:35:35.345850   18442 logs.go:123] Gathering logs for dmesg ...
	I0819 04:35:35.345861   18442 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 04:35:35.350077   18442 logs.go:123] Gathering logs for kube-apiserver [3985c1d649a7] ...
	I0819 04:35:35.350083   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3985c1d649a7"
	I0819 04:35:35.388014   18442 logs.go:123] Gathering logs for etcd [f269961c577e] ...
	I0819 04:35:35.388024   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f269961c577e"
	I0819 04:35:35.403028   18442 logs.go:123] Gathering logs for coredns [19b7edbe3a9f] ...
	I0819 04:35:35.403042   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 19b7edbe3a9f"
	I0819 04:35:35.416711   18442 logs.go:123] Gathering logs for kube-scheduler [1b07c6af2b17] ...
	I0819 04:35:35.416726   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b07c6af2b17"
	I0819 04:35:35.428563   18442 logs.go:123] Gathering logs for kube-scheduler [069748194d02] ...
	I0819 04:35:35.428576   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 069748194d02"
	I0819 04:35:35.454801   18442 logs.go:123] Gathering logs for describe nodes ...
	I0819 04:35:35.454814   18442 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 04:35:35.490300   18442 logs.go:123] Gathering logs for kube-apiserver [c8fa750d9da6] ...
	I0819 04:35:35.490310   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c8fa750d9da6"
	I0819 04:35:35.505361   18442 logs.go:123] Gathering logs for storage-provisioner [a7ed00b2340f] ...
	I0819 04:35:35.505372   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7ed00b2340f"
	I0819 04:35:38.021883   18442 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 04:35:36.245341   17996 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 04:35:36.245592   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 04:35:36.263756   17996 logs.go:276] 1 containers: [a87ae25c67c2]
	I0819 04:35:36.263840   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 04:35:36.281832   17996 logs.go:276] 1 containers: [f455d8b0d489]
	I0819 04:35:36.281904   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 04:35:36.293200   17996 logs.go:276] 2 containers: [a7da7cc69ccc b382a541c256]
	I0819 04:35:36.293274   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 04:35:36.304770   17996 logs.go:276] 1 containers: [9a7de925ae0b]
	I0819 04:35:36.304838   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 04:35:36.315355   17996 logs.go:276] 1 containers: [8a9318577d16]
	I0819 04:35:36.315426   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 04:35:36.328114   17996 logs.go:276] 1 containers: [9aa4015a0f4e]
	I0819 04:35:36.328183   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 04:35:36.342599   17996 logs.go:276] 0 containers: []
	W0819 04:35:36.342610   17996 logs.go:278] No container was found matching "kindnet"
	I0819 04:35:36.342664   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 04:35:36.352667   17996 logs.go:276] 1 containers: [24ed6391a78f]
	I0819 04:35:36.352682   17996 logs.go:123] Gathering logs for kube-apiserver [a87ae25c67c2] ...
	I0819 04:35:36.352687   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a87ae25c67c2"
	I0819 04:35:36.366961   17996 logs.go:123] Gathering logs for coredns [a7da7cc69ccc] ...
	I0819 04:35:36.366973   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7da7cc69ccc"
	I0819 04:35:36.378410   17996 logs.go:123] Gathering logs for coredns [b382a541c256] ...
	I0819 04:35:36.378421   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b382a541c256"
	I0819 04:35:36.389514   17996 logs.go:123] Gathering logs for kube-proxy [8a9318577d16] ...
	I0819 04:35:36.389529   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8a9318577d16"
	I0819 04:35:36.401164   17996 logs.go:123] Gathering logs for storage-provisioner [24ed6391a78f] ...
	I0819 04:35:36.401178   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 24ed6391a78f"
	I0819 04:35:36.418748   17996 logs.go:123] Gathering logs for kubelet ...
	I0819 04:35:36.418761   17996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 04:35:36.453849   17996 logs.go:123] Gathering logs for dmesg ...
	I0819 04:35:36.453861   17996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 04:35:36.458299   17996 logs.go:123] Gathering logs for describe nodes ...
	I0819 04:35:36.458309   17996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 04:35:36.522506   17996 logs.go:123] Gathering logs for Docker ...
	I0819 04:35:36.522520   17996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 04:35:36.546958   17996 logs.go:123] Gathering logs for container status ...
	I0819 04:35:36.546968   17996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 04:35:36.558786   17996 logs.go:123] Gathering logs for etcd [f455d8b0d489] ...
	I0819 04:35:36.558800   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f455d8b0d489"
	I0819 04:35:36.573552   17996 logs.go:123] Gathering logs for kube-scheduler [9a7de925ae0b] ...
	I0819 04:35:36.573565   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9a7de925ae0b"
	I0819 04:35:36.588740   17996 logs.go:123] Gathering logs for kube-controller-manager [9aa4015a0f4e] ...
	I0819 04:35:36.588754   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9aa4015a0f4e"
	I0819 04:35:39.107542   17996 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 04:35:43.024207   18442 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 04:35:43.024501   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 04:35:43.049151   18442 logs.go:276] 2 containers: [c8fa750d9da6 3985c1d649a7]
	I0819 04:35:43.049282   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 04:35:43.066131   18442 logs.go:276] 2 containers: [02704fafe517 f269961c577e]
	I0819 04:35:43.066224   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 04:35:43.079722   18442 logs.go:276] 1 containers: [19b7edbe3a9f]
	I0819 04:35:43.079799   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 04:35:43.091123   18442 logs.go:276] 2 containers: [1b07c6af2b17 069748194d02]
	I0819 04:35:43.091190   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 04:35:43.101649   18442 logs.go:276] 1 containers: [140eb9e8ffdb]
	I0819 04:35:43.101718   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 04:35:43.111815   18442 logs.go:276] 2 containers: [da57f69785e6 390cd57e246c]
	I0819 04:35:43.111888   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 04:35:43.122631   18442 logs.go:276] 0 containers: []
	W0819 04:35:43.122643   18442 logs.go:278] No container was found matching "kindnet"
	I0819 04:35:43.122698   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 04:35:43.144952   18442 logs.go:276] 2 containers: [a7ed00b2340f b8b9fabe936d]
	I0819 04:35:43.144969   18442 logs.go:123] Gathering logs for kube-apiserver [c8fa750d9da6] ...
	I0819 04:35:43.144975   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c8fa750d9da6"
	I0819 04:35:43.160088   18442 logs.go:123] Gathering logs for etcd [02704fafe517] ...
	I0819 04:35:43.160100   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 02704fafe517"
	I0819 04:35:43.174306   18442 logs.go:123] Gathering logs for etcd [f269961c577e] ...
	I0819 04:35:43.174317   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f269961c577e"
	I0819 04:35:43.192287   18442 logs.go:123] Gathering logs for coredns [19b7edbe3a9f] ...
	I0819 04:35:43.192301   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 19b7edbe3a9f"
	I0819 04:35:43.203453   18442 logs.go:123] Gathering logs for kube-proxy [140eb9e8ffdb] ...
	I0819 04:35:43.203467   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 140eb9e8ffdb"
	I0819 04:35:43.215532   18442 logs.go:123] Gathering logs for dmesg ...
	I0819 04:35:43.215546   18442 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 04:35:43.219894   18442 logs.go:123] Gathering logs for kube-controller-manager [da57f69785e6] ...
	I0819 04:35:43.219901   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da57f69785e6"
	I0819 04:35:43.237222   18442 logs.go:123] Gathering logs for kube-controller-manager [390cd57e246c] ...
	I0819 04:35:43.237237   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 390cd57e246c"
	I0819 04:35:43.257884   18442 logs.go:123] Gathering logs for storage-provisioner [b8b9fabe936d] ...
	I0819 04:35:43.257898   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8b9fabe936d"
	I0819 04:35:43.269363   18442 logs.go:123] Gathering logs for Docker ...
	I0819 04:35:43.269374   18442 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 04:35:43.293432   18442 logs.go:123] Gathering logs for container status ...
	I0819 04:35:43.293446   18442 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 04:35:43.305151   18442 logs.go:123] Gathering logs for kube-apiserver [3985c1d649a7] ...
	I0819 04:35:43.305162   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3985c1d649a7"
	I0819 04:35:43.345881   18442 logs.go:123] Gathering logs for kube-scheduler [1b07c6af2b17] ...
	I0819 04:35:43.345891   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b07c6af2b17"
	I0819 04:35:43.360580   18442 logs.go:123] Gathering logs for kubelet ...
	I0819 04:35:43.360590   18442 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 04:35:43.398854   18442 logs.go:123] Gathering logs for describe nodes ...
	I0819 04:35:43.398870   18442 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 04:35:43.435533   18442 logs.go:123] Gathering logs for kube-scheduler [069748194d02] ...
	I0819 04:35:43.435544   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 069748194d02"
	I0819 04:35:43.452854   18442 logs.go:123] Gathering logs for storage-provisioner [a7ed00b2340f] ...
	I0819 04:35:43.452864   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7ed00b2340f"
	I0819 04:35:44.109806   17996 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 04:35:44.110005   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 04:35:44.128685   17996 logs.go:276] 1 containers: [a87ae25c67c2]
	I0819 04:35:44.128780   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 04:35:44.142903   17996 logs.go:276] 1 containers: [f455d8b0d489]
	I0819 04:35:44.142979   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 04:35:44.154929   17996 logs.go:276] 2 containers: [a7da7cc69ccc b382a541c256]
	I0819 04:35:44.155002   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 04:35:44.165679   17996 logs.go:276] 1 containers: [9a7de925ae0b]
	I0819 04:35:44.165739   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 04:35:44.176008   17996 logs.go:276] 1 containers: [8a9318577d16]
	I0819 04:35:44.176082   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 04:35:44.186609   17996 logs.go:276] 1 containers: [9aa4015a0f4e]
	I0819 04:35:44.186681   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 04:35:44.205437   17996 logs.go:276] 0 containers: []
	W0819 04:35:44.205447   17996 logs.go:278] No container was found matching "kindnet"
	I0819 04:35:44.205504   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 04:35:44.215966   17996 logs.go:276] 1 containers: [24ed6391a78f]
	I0819 04:35:44.215981   17996 logs.go:123] Gathering logs for etcd [f455d8b0d489] ...
	I0819 04:35:44.215987   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f455d8b0d489"
	I0819 04:35:44.230180   17996 logs.go:123] Gathering logs for coredns [a7da7cc69ccc] ...
	I0819 04:35:44.230191   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7da7cc69ccc"
	I0819 04:35:44.242507   17996 logs.go:123] Gathering logs for kube-proxy [8a9318577d16] ...
	I0819 04:35:44.242519   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8a9318577d16"
	I0819 04:35:44.254236   17996 logs.go:123] Gathering logs for storage-provisioner [24ed6391a78f] ...
	I0819 04:35:44.254248   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 24ed6391a78f"
	I0819 04:35:44.265310   17996 logs.go:123] Gathering logs for describe nodes ...
	I0819 04:35:44.265320   17996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 04:35:44.305495   17996 logs.go:123] Gathering logs for kube-apiserver [a87ae25c67c2] ...
	I0819 04:35:44.305506   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a87ae25c67c2"
	I0819 04:35:44.319810   17996 logs.go:123] Gathering logs for coredns [b382a541c256] ...
	I0819 04:35:44.319821   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b382a541c256"
	I0819 04:35:44.331995   17996 logs.go:123] Gathering logs for kube-scheduler [9a7de925ae0b] ...
	I0819 04:35:44.332009   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9a7de925ae0b"
	I0819 04:35:44.352647   17996 logs.go:123] Gathering logs for kube-controller-manager [9aa4015a0f4e] ...
	I0819 04:35:44.352659   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9aa4015a0f4e"
	I0819 04:35:44.370167   17996 logs.go:123] Gathering logs for Docker ...
	I0819 04:35:44.370177   17996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 04:35:44.395000   17996 logs.go:123] Gathering logs for container status ...
	I0819 04:35:44.395012   17996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 04:35:44.407434   17996 logs.go:123] Gathering logs for kubelet ...
	I0819 04:35:44.407446   17996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 04:35:44.444806   17996 logs.go:123] Gathering logs for dmesg ...
	I0819 04:35:44.444816   17996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 04:35:45.968600   18442 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 04:35:46.951578   17996 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 04:35:50.970741   18442 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 04:35:50.970936   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 04:35:50.987870   18442 logs.go:276] 2 containers: [c8fa750d9da6 3985c1d649a7]
	I0819 04:35:50.987957   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 04:35:51.001054   18442 logs.go:276] 2 containers: [02704fafe517 f269961c577e]
	I0819 04:35:51.001123   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 04:35:51.011974   18442 logs.go:276] 1 containers: [19b7edbe3a9f]
	I0819 04:35:51.012041   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 04:35:51.022813   18442 logs.go:276] 2 containers: [1b07c6af2b17 069748194d02]
	I0819 04:35:51.022885   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 04:35:51.033845   18442 logs.go:276] 1 containers: [140eb9e8ffdb]
	I0819 04:35:51.033913   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 04:35:51.044901   18442 logs.go:276] 2 containers: [da57f69785e6 390cd57e246c]
	I0819 04:35:51.044968   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 04:35:51.054770   18442 logs.go:276] 0 containers: []
	W0819 04:35:51.054782   18442 logs.go:278] No container was found matching "kindnet"
	I0819 04:35:51.054838   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 04:35:51.065677   18442 logs.go:276] 2 containers: [a7ed00b2340f b8b9fabe936d]
	I0819 04:35:51.065694   18442 logs.go:123] Gathering logs for storage-provisioner [b8b9fabe936d] ...
	I0819 04:35:51.065700   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8b9fabe936d"
	I0819 04:35:51.077353   18442 logs.go:123] Gathering logs for coredns [19b7edbe3a9f] ...
	I0819 04:35:51.077364   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 19b7edbe3a9f"
	I0819 04:35:51.088848   18442 logs.go:123] Gathering logs for kube-controller-manager [da57f69785e6] ...
	I0819 04:35:51.088864   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da57f69785e6"
	I0819 04:35:51.106707   18442 logs.go:123] Gathering logs for Docker ...
	I0819 04:35:51.106718   18442 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 04:35:51.129853   18442 logs.go:123] Gathering logs for container status ...
	I0819 04:35:51.129862   18442 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 04:35:51.142072   18442 logs.go:123] Gathering logs for describe nodes ...
	I0819 04:35:51.142086   18442 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 04:35:51.178534   18442 logs.go:123] Gathering logs for etcd [f269961c577e] ...
	I0819 04:35:51.178546   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f269961c577e"
	I0819 04:35:51.193553   18442 logs.go:123] Gathering logs for etcd [02704fafe517] ...
	I0819 04:35:51.193566   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 02704fafe517"
	I0819 04:35:51.207313   18442 logs.go:123] Gathering logs for kube-proxy [140eb9e8ffdb] ...
	I0819 04:35:51.207325   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 140eb9e8ffdb"
	I0819 04:35:51.219608   18442 logs.go:123] Gathering logs for storage-provisioner [a7ed00b2340f] ...
	I0819 04:35:51.219619   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7ed00b2340f"
	I0819 04:35:51.231368   18442 logs.go:123] Gathering logs for dmesg ...
	I0819 04:35:51.231381   18442 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 04:35:51.235989   18442 logs.go:123] Gathering logs for kube-apiserver [c8fa750d9da6] ...
	I0819 04:35:51.235996   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c8fa750d9da6"
	I0819 04:35:51.250871   18442 logs.go:123] Gathering logs for kube-scheduler [1b07c6af2b17] ...
	I0819 04:35:51.250881   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b07c6af2b17"
	I0819 04:35:51.264682   18442 logs.go:123] Gathering logs for kube-scheduler [069748194d02] ...
	I0819 04:35:51.264692   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 069748194d02"
	I0819 04:35:51.279413   18442 logs.go:123] Gathering logs for kube-controller-manager [390cd57e246c] ...
	I0819 04:35:51.279424   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 390cd57e246c"
	I0819 04:35:51.293580   18442 logs.go:123] Gathering logs for kubelet ...
	I0819 04:35:51.293590   18442 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 04:35:51.331289   18442 logs.go:123] Gathering logs for kube-apiserver [3985c1d649a7] ...
	I0819 04:35:51.331298   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3985c1d649a7"
	I0819 04:35:51.953805   17996 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 04:35:51.954002   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 04:35:51.972092   17996 logs.go:276] 1 containers: [a87ae25c67c2]
	I0819 04:35:51.972177   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 04:35:51.986263   17996 logs.go:276] 1 containers: [f455d8b0d489]
	I0819 04:35:51.986338   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 04:35:51.997589   17996 logs.go:276] 4 containers: [475e9da6b182 b3828a6a9ea6 a7da7cc69ccc b382a541c256]
	I0819 04:35:51.997667   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 04:35:52.010621   17996 logs.go:276] 1 containers: [9a7de925ae0b]
	I0819 04:35:52.010688   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 04:35:52.021510   17996 logs.go:276] 1 containers: [8a9318577d16]
	I0819 04:35:52.021575   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 04:35:52.031975   17996 logs.go:276] 1 containers: [9aa4015a0f4e]
	I0819 04:35:52.032051   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 04:35:52.042819   17996 logs.go:276] 0 containers: []
	W0819 04:35:52.042832   17996 logs.go:278] No container was found matching "kindnet"
	I0819 04:35:52.042892   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 04:35:52.053209   17996 logs.go:276] 1 containers: [24ed6391a78f]
	I0819 04:35:52.053229   17996 logs.go:123] Gathering logs for coredns [b382a541c256] ...
	I0819 04:35:52.053235   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b382a541c256"
	I0819 04:35:52.065101   17996 logs.go:123] Gathering logs for kube-scheduler [9a7de925ae0b] ...
	I0819 04:35:52.065113   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9a7de925ae0b"
	I0819 04:35:52.080169   17996 logs.go:123] Gathering logs for dmesg ...
	I0819 04:35:52.080184   17996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 04:35:52.084659   17996 logs.go:123] Gathering logs for coredns [475e9da6b182] ...
	I0819 04:35:52.084668   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 475e9da6b182"
	I0819 04:35:52.096172   17996 logs.go:123] Gathering logs for coredns [b3828a6a9ea6] ...
	I0819 04:35:52.096183   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3828a6a9ea6"
	I0819 04:35:52.107536   17996 logs.go:123] Gathering logs for kube-proxy [8a9318577d16] ...
	I0819 04:35:52.107548   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8a9318577d16"
	I0819 04:35:52.119593   17996 logs.go:123] Gathering logs for storage-provisioner [24ed6391a78f] ...
	I0819 04:35:52.119605   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 24ed6391a78f"
	I0819 04:35:52.130881   17996 logs.go:123] Gathering logs for kubelet ...
	I0819 04:35:52.130892   17996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 04:35:52.164400   17996 logs.go:123] Gathering logs for describe nodes ...
	I0819 04:35:52.164407   17996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 04:35:52.200089   17996 logs.go:123] Gathering logs for kube-apiserver [a87ae25c67c2] ...
	I0819 04:35:52.200102   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a87ae25c67c2"
	I0819 04:35:52.214316   17996 logs.go:123] Gathering logs for etcd [f455d8b0d489] ...
	I0819 04:35:52.214330   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f455d8b0d489"
	I0819 04:35:52.228924   17996 logs.go:123] Gathering logs for Docker ...
	I0819 04:35:52.228935   17996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 04:35:52.252976   17996 logs.go:123] Gathering logs for container status ...
	I0819 04:35:52.252986   17996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 04:35:52.264162   17996 logs.go:123] Gathering logs for coredns [a7da7cc69ccc] ...
	I0819 04:35:52.264175   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7da7cc69ccc"
	I0819 04:35:52.277572   17996 logs.go:123] Gathering logs for kube-controller-manager [9aa4015a0f4e] ...
	I0819 04:35:52.277584   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9aa4015a0f4e"
	I0819 04:35:54.801222   17996 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 04:35:53.870932   18442 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 04:35:59.803333   17996 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 04:35:59.803535   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 04:35:59.824583   17996 logs.go:276] 1 containers: [a87ae25c67c2]
	I0819 04:35:59.824685   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 04:35:59.840687   17996 logs.go:276] 1 containers: [f455d8b0d489]
	I0819 04:35:59.840771   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 04:35:59.853172   17996 logs.go:276] 4 containers: [475e9da6b182 b3828a6a9ea6 a7da7cc69ccc b382a541c256]
	I0819 04:35:59.853241   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 04:35:59.864551   17996 logs.go:276] 1 containers: [9a7de925ae0b]
	I0819 04:35:59.864622   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 04:35:59.875249   17996 logs.go:276] 1 containers: [8a9318577d16]
	I0819 04:35:59.875319   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 04:35:59.886274   17996 logs.go:276] 1 containers: [9aa4015a0f4e]
	I0819 04:35:59.886343   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 04:35:59.896229   17996 logs.go:276] 0 containers: []
	W0819 04:35:59.896243   17996 logs.go:278] No container was found matching "kindnet"
	I0819 04:35:59.896299   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 04:35:59.911375   17996 logs.go:276] 1 containers: [24ed6391a78f]
	I0819 04:35:59.911393   17996 logs.go:123] Gathering logs for Docker ...
	I0819 04:35:59.911399   17996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 04:35:59.935215   17996 logs.go:123] Gathering logs for etcd [f455d8b0d489] ...
	I0819 04:35:59.935228   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f455d8b0d489"
	I0819 04:35:59.949772   17996 logs.go:123] Gathering logs for coredns [475e9da6b182] ...
	I0819 04:35:59.949785   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 475e9da6b182"
	I0819 04:35:59.961418   17996 logs.go:123] Gathering logs for coredns [b382a541c256] ...
	I0819 04:35:59.961429   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b382a541c256"
	I0819 04:35:59.973844   17996 logs.go:123] Gathering logs for storage-provisioner [24ed6391a78f] ...
	I0819 04:35:59.973856   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 24ed6391a78f"
	I0819 04:35:59.985941   17996 logs.go:123] Gathering logs for container status ...
	I0819 04:35:59.985952   17996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 04:35:59.997648   17996 logs.go:123] Gathering logs for describe nodes ...
	I0819 04:35:59.997659   17996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 04:36:00.036944   17996 logs.go:123] Gathering logs for kube-proxy [8a9318577d16] ...
	I0819 04:36:00.036955   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8a9318577d16"
	I0819 04:36:00.049170   17996 logs.go:123] Gathering logs for kube-controller-manager [9aa4015a0f4e] ...
	I0819 04:36:00.049182   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9aa4015a0f4e"
	I0819 04:36:00.072999   17996 logs.go:123] Gathering logs for kubelet ...
	I0819 04:36:00.073010   17996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 04:36:00.107494   17996 logs.go:123] Gathering logs for kube-apiserver [a87ae25c67c2] ...
	I0819 04:36:00.107516   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a87ae25c67c2"
	I0819 04:36:00.125334   17996 logs.go:123] Gathering logs for coredns [b3828a6a9ea6] ...
	I0819 04:36:00.125347   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3828a6a9ea6"
	I0819 04:36:00.136949   17996 logs.go:123] Gathering logs for coredns [a7da7cc69ccc] ...
	I0819 04:36:00.136964   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7da7cc69ccc"
	I0819 04:36:00.149068   17996 logs.go:123] Gathering logs for kube-scheduler [9a7de925ae0b] ...
	I0819 04:36:00.149082   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9a7de925ae0b"
	I0819 04:36:00.163990   17996 logs.go:123] Gathering logs for dmesg ...
	I0819 04:36:00.164000   17996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
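
The block above is one full diagnostic cycle: the probe against https://10.0.2.15:8443/healthz times out after roughly five seconds ("context deadline exceeded"), so minikube enumerates the control-plane containers and dumps their logs before retrying. Two test processes (ids 17996 and 18442) are polling the same endpoint concurrently, which is why the timestamps interleave and occasionally step backwards. As a rough sketch, the probe itself can be reproduced from a shell inside the guest (the address and port are taken from the log; on default clusters /healthz is readable without credentials):

    # -k skips TLS verification; --max-time mirrors the ~5 s client timeout seen above
    curl -k --max-time 5 https://10.0.2.15:8443/healthz
    # a healthy apiserver answers "ok"; here the request hangs until the
    # client-side deadline fires, matching the "context deadline exceeded" errors
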
	I0819 04:35:58.873168   18442 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 04:35:58.873377   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 04:35:58.893945   18442 logs.go:276] 2 containers: [c8fa750d9da6 3985c1d649a7]
	I0819 04:35:58.894034   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 04:35:58.909226   18442 logs.go:276] 2 containers: [02704fafe517 f269961c577e]
	I0819 04:35:58.909308   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 04:35:58.921957   18442 logs.go:276] 1 containers: [19b7edbe3a9f]
	I0819 04:35:58.922033   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 04:35:58.932710   18442 logs.go:276] 2 containers: [1b07c6af2b17 069748194d02]
	I0819 04:35:58.932780   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 04:35:58.942808   18442 logs.go:276] 1 containers: [140eb9e8ffdb]
	I0819 04:35:58.942875   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 04:35:58.953323   18442 logs.go:276] 2 containers: [da57f69785e6 390cd57e246c]
	I0819 04:35:58.953386   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 04:35:58.963461   18442 logs.go:276] 0 containers: []
	W0819 04:35:58.963475   18442 logs.go:278] No container was found matching "kindnet"
	I0819 04:35:58.963533   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 04:35:58.974301   18442 logs.go:276] 2 containers: [a7ed00b2340f b8b9fabe936d]
	I0819 04:35:58.974320   18442 logs.go:123] Gathering logs for etcd [02704fafe517] ...
	I0819 04:35:58.974325   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 02704fafe517"
	I0819 04:35:58.988423   18442 logs.go:123] Gathering logs for etcd [f269961c577e] ...
	I0819 04:35:58.988432   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f269961c577e"
	I0819 04:35:59.002070   18442 logs.go:123] Gathering logs for storage-provisioner [b8b9fabe936d] ...
	I0819 04:35:59.002080   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8b9fabe936d"
	I0819 04:35:59.013444   18442 logs.go:123] Gathering logs for kubelet ...
	I0819 04:35:59.013454   18442 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 04:35:59.050758   18442 logs.go:123] Gathering logs for describe nodes ...
	I0819 04:35:59.050773   18442 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 04:35:59.085368   18442 logs.go:123] Gathering logs for kube-apiserver [c8fa750d9da6] ...
	I0819 04:35:59.085380   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c8fa750d9da6"
	I0819 04:35:59.099914   18442 logs.go:123] Gathering logs for container status ...
	I0819 04:35:59.099925   18442 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 04:35:59.111682   18442 logs.go:123] Gathering logs for dmesg ...
	I0819 04:35:59.111692   18442 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 04:35:59.115995   18442 logs.go:123] Gathering logs for coredns [19b7edbe3a9f] ...
	I0819 04:35:59.116006   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 19b7edbe3a9f"
	I0819 04:35:59.127422   18442 logs.go:123] Gathering logs for kube-controller-manager [da57f69785e6] ...
	I0819 04:35:59.127432   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da57f69785e6"
	I0819 04:35:59.144496   18442 logs.go:123] Gathering logs for storage-provisioner [a7ed00b2340f] ...
	I0819 04:35:59.144508   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7ed00b2340f"
	I0819 04:35:59.156128   18442 logs.go:123] Gathering logs for kube-scheduler [069748194d02] ...
	I0819 04:35:59.156142   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 069748194d02"
	I0819 04:35:59.171672   18442 logs.go:123] Gathering logs for kube-proxy [140eb9e8ffdb] ...
	I0819 04:35:59.171687   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 140eb9e8ffdb"
	I0819 04:35:59.183487   18442 logs.go:123] Gathering logs for kube-controller-manager [390cd57e246c] ...
	I0819 04:35:59.183500   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 390cd57e246c"
	I0819 04:35:59.198655   18442 logs.go:123] Gathering logs for kube-apiserver [3985c1d649a7] ...
	I0819 04:35:59.198668   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3985c1d649a7"
	I0819 04:35:59.237678   18442 logs.go:123] Gathering logs for kube-scheduler [1b07c6af2b17] ...
	I0819 04:35:59.237691   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b07c6af2b17"
	I0819 04:35:59.250894   18442 logs.go:123] Gathering logs for Docker ...
	I0819 04:35:59.250905   18442 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 04:36:01.776258   18442 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 04:36:02.670643   17996 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 04:36:06.778663   18442 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 04:36:06.779099   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 04:36:06.816493   18442 logs.go:276] 2 containers: [c8fa750d9da6 3985c1d649a7]
	I0819 04:36:06.816628   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 04:36:06.837682   18442 logs.go:276] 2 containers: [02704fafe517 f269961c577e]
	I0819 04:36:06.837773   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 04:36:06.852095   18442 logs.go:276] 1 containers: [19b7edbe3a9f]
	I0819 04:36:06.852166   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 04:36:06.864610   18442 logs.go:276] 2 containers: [1b07c6af2b17 069748194d02]
	I0819 04:36:06.864678   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 04:36:06.875301   18442 logs.go:276] 1 containers: [140eb9e8ffdb]
	I0819 04:36:06.875363   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 04:36:06.889501   18442 logs.go:276] 2 containers: [da57f69785e6 390cd57e246c]
	I0819 04:36:06.889572   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 04:36:06.899837   18442 logs.go:276] 0 containers: []
	W0819 04:36:06.899854   18442 logs.go:278] No container was found matching "kindnet"
	I0819 04:36:06.899909   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 04:36:06.910347   18442 logs.go:276] 2 containers: [a7ed00b2340f b8b9fabe936d]
	I0819 04:36:06.910370   18442 logs.go:123] Gathering logs for dmesg ...
	I0819 04:36:06.910376   18442 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 04:36:06.915038   18442 logs.go:123] Gathering logs for kube-apiserver [3985c1d649a7] ...
	I0819 04:36:06.915046   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3985c1d649a7"
	I0819 04:36:06.960768   18442 logs.go:123] Gathering logs for storage-provisioner [b8b9fabe936d] ...
	I0819 04:36:06.960778   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8b9fabe936d"
	I0819 04:36:06.972350   18442 logs.go:123] Gathering logs for kube-scheduler [1b07c6af2b17] ...
	I0819 04:36:06.972360   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b07c6af2b17"
	I0819 04:36:06.984381   18442 logs.go:123] Gathering logs for kube-proxy [140eb9e8ffdb] ...
	I0819 04:36:06.984395   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 140eb9e8ffdb"
	I0819 04:36:06.996787   18442 logs.go:123] Gathering logs for Docker ...
	I0819 04:36:06.996802   18442 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 04:36:07.020703   18442 logs.go:123] Gathering logs for kubelet ...
	I0819 04:36:07.020714   18442 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 04:36:07.059159   18442 logs.go:123] Gathering logs for describe nodes ...
	I0819 04:36:07.059170   18442 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 04:36:07.098814   18442 logs.go:123] Gathering logs for kube-apiserver [c8fa750d9da6] ...
	I0819 04:36:07.098825   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c8fa750d9da6"
	I0819 04:36:07.114994   18442 logs.go:123] Gathering logs for coredns [19b7edbe3a9f] ...
	I0819 04:36:07.115007   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 19b7edbe3a9f"
	I0819 04:36:07.126691   18442 logs.go:123] Gathering logs for kube-scheduler [069748194d02] ...
	I0819 04:36:07.126701   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 069748194d02"
	I0819 04:36:07.142553   18442 logs.go:123] Gathering logs for kube-controller-manager [390cd57e246c] ...
	I0819 04:36:07.142566   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 390cd57e246c"
	I0819 04:36:07.157409   18442 logs.go:123] Gathering logs for etcd [02704fafe517] ...
	I0819 04:36:07.157423   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 02704fafe517"
	I0819 04:36:07.171153   18442 logs.go:123] Gathering logs for etcd [f269961c577e] ...
	I0819 04:36:07.171164   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f269961c577e"
	I0819 04:36:07.187543   18442 logs.go:123] Gathering logs for kube-controller-manager [da57f69785e6] ...
	I0819 04:36:07.187552   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da57f69785e6"
	I0819 04:36:07.206144   18442 logs.go:123] Gathering logs for storage-provisioner [a7ed00b2340f] ...
	I0819 04:36:07.206154   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7ed00b2340f"
	I0819 04:36:07.217548   18442 logs.go:123] Gathering logs for container status ...
	I0819 04:36:07.217561   18442 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
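
The per-component gathering that precedes each retry can be approximated with a loop over the same name filters used above (a sketch, assuming shell access to the guest's Docker daemon; the component list and tail depth are copied from this run):

    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
             kube-controller-manager kindnet storage-provisioner; do
      for id in $(docker ps -a --filter=name=k8s_${c} --format='{{.ID}}'); do
        docker logs --tail 400 "$id"    # same tail depth minikube uses
      done
    done

The "container status" step is `sudo \`which crictl || echo crictl\` ps -a || sudo docker ps -a`, i.e. it prefers crictl when present and falls back to a plain docker ps.
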
	I0819 04:36:07.672866   17996 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 04:36:07.672987   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 04:36:07.685968   17996 logs.go:276] 1 containers: [a87ae25c67c2]
	I0819 04:36:07.686044   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 04:36:07.697214   17996 logs.go:276] 1 containers: [f455d8b0d489]
	I0819 04:36:07.697294   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 04:36:07.708092   17996 logs.go:276] 4 containers: [475e9da6b182 b3828a6a9ea6 a7da7cc69ccc b382a541c256]
	I0819 04:36:07.708162   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 04:36:07.719969   17996 logs.go:276] 1 containers: [9a7de925ae0b]
	I0819 04:36:07.720043   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 04:36:07.731257   17996 logs.go:276] 1 containers: [8a9318577d16]
	I0819 04:36:07.731322   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 04:36:07.742105   17996 logs.go:276] 1 containers: [9aa4015a0f4e]
	I0819 04:36:07.742176   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 04:36:07.752461   17996 logs.go:276] 0 containers: []
	W0819 04:36:07.752471   17996 logs.go:278] No container was found matching "kindnet"
	I0819 04:36:07.752525   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 04:36:07.765471   17996 logs.go:276] 1 containers: [24ed6391a78f]
	I0819 04:36:07.765488   17996 logs.go:123] Gathering logs for dmesg ...
	I0819 04:36:07.765494   17996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 04:36:07.769939   17996 logs.go:123] Gathering logs for kube-apiserver [a87ae25c67c2] ...
	I0819 04:36:07.769948   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a87ae25c67c2"
	I0819 04:36:07.783805   17996 logs.go:123] Gathering logs for coredns [b382a541c256] ...
	I0819 04:36:07.783814   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b382a541c256"
	I0819 04:36:07.795643   17996 logs.go:123] Gathering logs for kube-controller-manager [9aa4015a0f4e] ...
	I0819 04:36:07.795659   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9aa4015a0f4e"
	I0819 04:36:07.813109   17996 logs.go:123] Gathering logs for Docker ...
	I0819 04:36:07.813123   17996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 04:36:07.839025   17996 logs.go:123] Gathering logs for coredns [a7da7cc69ccc] ...
	I0819 04:36:07.839042   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7da7cc69ccc"
	I0819 04:36:07.850915   17996 logs.go:123] Gathering logs for describe nodes ...
	I0819 04:36:07.850931   17996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 04:36:07.885559   17996 logs.go:123] Gathering logs for coredns [475e9da6b182] ...
	I0819 04:36:07.885572   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 475e9da6b182"
	I0819 04:36:07.897630   17996 logs.go:123] Gathering logs for kube-scheduler [9a7de925ae0b] ...
	I0819 04:36:07.897642   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9a7de925ae0b"
	I0819 04:36:07.912877   17996 logs.go:123] Gathering logs for kube-proxy [8a9318577d16] ...
	I0819 04:36:07.912889   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8a9318577d16"
	I0819 04:36:07.924569   17996 logs.go:123] Gathering logs for kubelet ...
	I0819 04:36:07.924580   17996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 04:36:07.957910   17996 logs.go:123] Gathering logs for etcd [f455d8b0d489] ...
	I0819 04:36:07.957920   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f455d8b0d489"
	I0819 04:36:07.971337   17996 logs.go:123] Gathering logs for coredns [b3828a6a9ea6] ...
	I0819 04:36:07.971350   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3828a6a9ea6"
	I0819 04:36:07.982395   17996 logs.go:123] Gathering logs for storage-provisioner [24ed6391a78f] ...
	I0819 04:36:07.982406   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 24ed6391a78f"
	I0819 04:36:07.994601   17996 logs.go:123] Gathering logs for container status ...
	I0819 04:36:07.994615   17996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 04:36:09.734595   18442 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 04:36:10.507752   17996 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 04:36:14.736539   18442 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 04:36:14.736803   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 04:36:14.765430   18442 logs.go:276] 2 containers: [c8fa750d9da6 3985c1d649a7]
	I0819 04:36:14.765566   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 04:36:14.786094   18442 logs.go:276] 2 containers: [02704fafe517 f269961c577e]
	I0819 04:36:14.786185   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 04:36:14.801434   18442 logs.go:276] 1 containers: [19b7edbe3a9f]
	I0819 04:36:14.801509   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 04:36:14.812557   18442 logs.go:276] 2 containers: [1b07c6af2b17 069748194d02]
	I0819 04:36:14.812626   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 04:36:14.828034   18442 logs.go:276] 1 containers: [140eb9e8ffdb]
	I0819 04:36:14.828104   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 04:36:14.838564   18442 logs.go:276] 2 containers: [da57f69785e6 390cd57e246c]
	I0819 04:36:14.838628   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 04:36:14.848280   18442 logs.go:276] 0 containers: []
	W0819 04:36:14.848295   18442 logs.go:278] No container was found matching "kindnet"
	I0819 04:36:14.848345   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 04:36:14.859018   18442 logs.go:276] 2 containers: [a7ed00b2340f b8b9fabe936d]
	I0819 04:36:14.859035   18442 logs.go:123] Gathering logs for kube-scheduler [1b07c6af2b17] ...
	I0819 04:36:14.859041   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b07c6af2b17"
	I0819 04:36:14.878394   18442 logs.go:123] Gathering logs for kube-apiserver [c8fa750d9da6] ...
	I0819 04:36:14.878405   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c8fa750d9da6"
	I0819 04:36:14.893040   18442 logs.go:123] Gathering logs for coredns [19b7edbe3a9f] ...
	I0819 04:36:14.893050   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 19b7edbe3a9f"
	I0819 04:36:14.904677   18442 logs.go:123] Gathering logs for kube-proxy [140eb9e8ffdb] ...
	I0819 04:36:14.904687   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 140eb9e8ffdb"
	I0819 04:36:14.916395   18442 logs.go:123] Gathering logs for storage-provisioner [a7ed00b2340f] ...
	I0819 04:36:14.916404   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7ed00b2340f"
	I0819 04:36:14.927748   18442 logs.go:123] Gathering logs for container status ...
	I0819 04:36:14.927758   18442 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 04:36:14.939356   18442 logs.go:123] Gathering logs for dmesg ...
	I0819 04:36:14.939368   18442 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 04:36:14.943357   18442 logs.go:123] Gathering logs for kube-apiserver [3985c1d649a7] ...
	I0819 04:36:14.943363   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3985c1d649a7"
	I0819 04:36:14.981057   18442 logs.go:123] Gathering logs for kube-scheduler [069748194d02] ...
	I0819 04:36:14.981068   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 069748194d02"
	I0819 04:36:14.995863   18442 logs.go:123] Gathering logs for describe nodes ...
	I0819 04:36:14.995873   18442 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 04:36:15.030977   18442 logs.go:123] Gathering logs for etcd [f269961c577e] ...
	I0819 04:36:15.030989   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f269961c577e"
	I0819 04:36:15.045648   18442 logs.go:123] Gathering logs for kube-controller-manager [da57f69785e6] ...
	I0819 04:36:15.045658   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da57f69785e6"
	I0819 04:36:15.063175   18442 logs.go:123] Gathering logs for kube-controller-manager [390cd57e246c] ...
	I0819 04:36:15.063185   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 390cd57e246c"
	I0819 04:36:15.077814   18442 logs.go:123] Gathering logs for storage-provisioner [b8b9fabe936d] ...
	I0819 04:36:15.077824   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8b9fabe936d"
	I0819 04:36:15.093881   18442 logs.go:123] Gathering logs for Docker ...
	I0819 04:36:15.093891   18442 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 04:36:15.116836   18442 logs.go:123] Gathering logs for kubelet ...
	I0819 04:36:15.116849   18442 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 04:36:15.153655   18442 logs.go:123] Gathering logs for etcd [02704fafe517] ...
	I0819 04:36:15.153666   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 02704fafe517"
	I0819 04:36:17.669766   18442 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 04:36:15.508298   17996 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 04:36:15.508465   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 04:36:15.523322   17996 logs.go:276] 1 containers: [a87ae25c67c2]
	I0819 04:36:15.523404   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 04:36:15.534191   17996 logs.go:276] 1 containers: [f455d8b0d489]
	I0819 04:36:15.534262   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 04:36:15.545064   17996 logs.go:276] 4 containers: [475e9da6b182 b3828a6a9ea6 a7da7cc69ccc b382a541c256]
	I0819 04:36:15.545138   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 04:36:15.555145   17996 logs.go:276] 1 containers: [9a7de925ae0b]
	I0819 04:36:15.555214   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 04:36:15.571206   17996 logs.go:276] 1 containers: [8a9318577d16]
	I0819 04:36:15.571283   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 04:36:15.583602   17996 logs.go:276] 1 containers: [9aa4015a0f4e]
	I0819 04:36:15.583676   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 04:36:15.598623   17996 logs.go:276] 0 containers: []
	W0819 04:36:15.598640   17996 logs.go:278] No container was found matching "kindnet"
	I0819 04:36:15.598701   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 04:36:15.613082   17996 logs.go:276] 1 containers: [24ed6391a78f]
	I0819 04:36:15.613102   17996 logs.go:123] Gathering logs for etcd [f455d8b0d489] ...
	I0819 04:36:15.613107   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f455d8b0d489"
	I0819 04:36:15.635858   17996 logs.go:123] Gathering logs for coredns [a7da7cc69ccc] ...
	I0819 04:36:15.635868   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7da7cc69ccc"
	I0819 04:36:15.648195   17996 logs.go:123] Gathering logs for kube-proxy [8a9318577d16] ...
	I0819 04:36:15.648209   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8a9318577d16"
	I0819 04:36:15.659827   17996 logs.go:123] Gathering logs for kubelet ...
	I0819 04:36:15.659837   17996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 04:36:15.698815   17996 logs.go:123] Gathering logs for dmesg ...
	I0819 04:36:15.698830   17996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 04:36:15.703796   17996 logs.go:123] Gathering logs for kube-apiserver [a87ae25c67c2] ...
	I0819 04:36:15.703803   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a87ae25c67c2"
	I0819 04:36:15.718325   17996 logs.go:123] Gathering logs for storage-provisioner [24ed6391a78f] ...
	I0819 04:36:15.718335   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 24ed6391a78f"
	I0819 04:36:15.729794   17996 logs.go:123] Gathering logs for Docker ...
	I0819 04:36:15.729805   17996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 04:36:15.754230   17996 logs.go:123] Gathering logs for container status ...
	I0819 04:36:15.754244   17996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 04:36:15.766432   17996 logs.go:123] Gathering logs for describe nodes ...
	I0819 04:36:15.766447   17996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 04:36:15.803178   17996 logs.go:123] Gathering logs for coredns [475e9da6b182] ...
	I0819 04:36:15.803194   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 475e9da6b182"
	I0819 04:36:15.814730   17996 logs.go:123] Gathering logs for kube-scheduler [9a7de925ae0b] ...
	I0819 04:36:15.814740   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9a7de925ae0b"
	I0819 04:36:15.830091   17996 logs.go:123] Gathering logs for coredns [b3828a6a9ea6] ...
	I0819 04:36:15.830100   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3828a6a9ea6"
	I0819 04:36:15.847825   17996 logs.go:123] Gathering logs for coredns [b382a541c256] ...
	I0819 04:36:15.847841   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b382a541c256"
	I0819 04:36:15.861384   17996 logs.go:123] Gathering logs for kube-controller-manager [9aa4015a0f4e] ...
	I0819 04:36:15.861395   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9aa4015a0f4e"
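
Besides per-container logs, each cycle also collects host-level sources; the exact commands, copied from the run above, are:

    sudo journalctl -u kubelet -n 400
    sudo journalctl -u docker -u cri-docker -n 400
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
    sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig

The describe-nodes step goes through the cluster's own kubectl binary and kubeconfig rather than the host's.
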
	I0819 04:36:18.380755   17996 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 04:36:22.670082   18442 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 04:36:22.670243   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 04:36:22.681466   18442 logs.go:276] 2 containers: [c8fa750d9da6 3985c1d649a7]
	I0819 04:36:22.681534   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 04:36:22.692029   18442 logs.go:276] 2 containers: [02704fafe517 f269961c577e]
	I0819 04:36:22.692103   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 04:36:22.702276   18442 logs.go:276] 1 containers: [19b7edbe3a9f]
	I0819 04:36:22.702345   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 04:36:22.716251   18442 logs.go:276] 2 containers: [1b07c6af2b17 069748194d02]
	I0819 04:36:22.716325   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 04:36:22.727519   18442 logs.go:276] 1 containers: [140eb9e8ffdb]
	I0819 04:36:22.727591   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 04:36:22.738461   18442 logs.go:276] 2 containers: [da57f69785e6 390cd57e246c]
	I0819 04:36:22.738531   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 04:36:22.749100   18442 logs.go:276] 0 containers: []
	W0819 04:36:22.749113   18442 logs.go:278] No container was found matching "kindnet"
	I0819 04:36:22.749172   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 04:36:22.760205   18442 logs.go:276] 2 containers: [a7ed00b2340f b8b9fabe936d]
	I0819 04:36:22.760222   18442 logs.go:123] Gathering logs for kube-scheduler [1b07c6af2b17] ...
	I0819 04:36:22.760227   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b07c6af2b17"
	I0819 04:36:22.777206   18442 logs.go:123] Gathering logs for kube-controller-manager [da57f69785e6] ...
	I0819 04:36:22.777219   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da57f69785e6"
	I0819 04:36:22.794888   18442 logs.go:123] Gathering logs for container status ...
	I0819 04:36:22.794900   18442 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 04:36:22.806566   18442 logs.go:123] Gathering logs for coredns [19b7edbe3a9f] ...
	I0819 04:36:22.806576   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 19b7edbe3a9f"
	I0819 04:36:22.818555   18442 logs.go:123] Gathering logs for kube-proxy [140eb9e8ffdb] ...
	I0819 04:36:22.818566   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 140eb9e8ffdb"
	I0819 04:36:22.830469   18442 logs.go:123] Gathering logs for dmesg ...
	I0819 04:36:22.830481   18442 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 04:36:22.834965   18442 logs.go:123] Gathering logs for kube-apiserver [c8fa750d9da6] ...
	I0819 04:36:22.834971   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c8fa750d9da6"
	I0819 04:36:22.853574   18442 logs.go:123] Gathering logs for etcd [f269961c577e] ...
	I0819 04:36:22.853585   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f269961c577e"
	I0819 04:36:22.868266   18442 logs.go:123] Gathering logs for storage-provisioner [b8b9fabe936d] ...
	I0819 04:36:22.868278   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8b9fabe936d"
	I0819 04:36:22.879425   18442 logs.go:123] Gathering logs for etcd [02704fafe517] ...
	I0819 04:36:22.879438   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 02704fafe517"
	I0819 04:36:22.893712   18442 logs.go:123] Gathering logs for kube-controller-manager [390cd57e246c] ...
	I0819 04:36:22.893720   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 390cd57e246c"
	I0819 04:36:22.907951   18442 logs.go:123] Gathering logs for storage-provisioner [a7ed00b2340f] ...
	I0819 04:36:22.907960   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7ed00b2340f"
	I0819 04:36:22.919135   18442 logs.go:123] Gathering logs for kube-scheduler [069748194d02] ...
	I0819 04:36:22.919145   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 069748194d02"
	I0819 04:36:22.935836   18442 logs.go:123] Gathering logs for Docker ...
	I0819 04:36:22.935846   18442 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 04:36:22.959864   18442 logs.go:123] Gathering logs for kubelet ...
	I0819 04:36:22.959875   18442 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 04:36:22.999069   18442 logs.go:123] Gathering logs for describe nodes ...
	I0819 04:36:22.999078   18442 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 04:36:23.034763   18442 logs.go:123] Gathering logs for kube-apiserver [3985c1d649a7] ...
	I0819 04:36:23.034776   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3985c1d649a7"
	I0819 04:36:23.382667   17996 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 04:36:23.382770   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 04:36:23.393640   17996 logs.go:276] 1 containers: [a87ae25c67c2]
	I0819 04:36:23.393708   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 04:36:23.404100   17996 logs.go:276] 1 containers: [f455d8b0d489]
	I0819 04:36:23.404168   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 04:36:23.415036   17996 logs.go:276] 4 containers: [475e9da6b182 b3828a6a9ea6 a7da7cc69ccc b382a541c256]
	I0819 04:36:23.415114   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 04:36:23.432364   17996 logs.go:276] 1 containers: [9a7de925ae0b]
	I0819 04:36:23.432426   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 04:36:23.443191   17996 logs.go:276] 1 containers: [8a9318577d16]
	I0819 04:36:23.443258   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 04:36:23.454123   17996 logs.go:276] 1 containers: [9aa4015a0f4e]
	I0819 04:36:23.454187   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 04:36:23.464538   17996 logs.go:276] 0 containers: []
	W0819 04:36:23.464551   17996 logs.go:278] No container was found matching "kindnet"
	I0819 04:36:23.464613   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 04:36:23.476975   17996 logs.go:276] 1 containers: [24ed6391a78f]
	I0819 04:36:23.476994   17996 logs.go:123] Gathering logs for dmesg ...
	I0819 04:36:23.476999   17996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 04:36:23.481438   17996 logs.go:123] Gathering logs for kube-scheduler [9a7de925ae0b] ...
	I0819 04:36:23.481445   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9a7de925ae0b"
	I0819 04:36:23.495872   17996 logs.go:123] Gathering logs for storage-provisioner [24ed6391a78f] ...
	I0819 04:36:23.495885   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 24ed6391a78f"
	I0819 04:36:23.508501   17996 logs.go:123] Gathering logs for Docker ...
	I0819 04:36:23.508513   17996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 04:36:23.533363   17996 logs.go:123] Gathering logs for kubelet ...
	I0819 04:36:23.533373   17996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 04:36:23.565658   17996 logs.go:123] Gathering logs for describe nodes ...
	I0819 04:36:23.565666   17996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 04:36:23.605709   17996 logs.go:123] Gathering logs for kube-apiserver [a87ae25c67c2] ...
	I0819 04:36:23.605720   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a87ae25c67c2"
	I0819 04:36:23.620207   17996 logs.go:123] Gathering logs for coredns [475e9da6b182] ...
	I0819 04:36:23.620219   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 475e9da6b182"
	I0819 04:36:23.632818   17996 logs.go:123] Gathering logs for coredns [b3828a6a9ea6] ...
	I0819 04:36:23.632831   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3828a6a9ea6"
	I0819 04:36:23.644575   17996 logs.go:123] Gathering logs for etcd [f455d8b0d489] ...
	I0819 04:36:23.644589   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f455d8b0d489"
	I0819 04:36:23.658486   17996 logs.go:123] Gathering logs for coredns [a7da7cc69ccc] ...
	I0819 04:36:23.658498   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7da7cc69ccc"
	I0819 04:36:23.671213   17996 logs.go:123] Gathering logs for kube-controller-manager [9aa4015a0f4e] ...
	I0819 04:36:23.671226   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9aa4015a0f4e"
	I0819 04:36:23.689074   17996 logs.go:123] Gathering logs for container status ...
	I0819 04:36:23.689086   17996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 04:36:23.700743   17996 logs.go:123] Gathering logs for coredns [b382a541c256] ...
	I0819 04:36:23.700753   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b382a541c256"
	I0819 04:36:23.713102   17996 logs.go:123] Gathering logs for kube-proxy [8a9318577d16] ...
	I0819 04:36:23.713116   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8a9318577d16"
	I0819 04:36:25.576147   18442 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 04:36:26.227327   17996 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 04:36:30.578306   18442 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 04:36:30.578537   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 04:36:30.592779   18442 logs.go:276] 2 containers: [c8fa750d9da6 3985c1d649a7]
	I0819 04:36:30.592863   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 04:36:30.604225   18442 logs.go:276] 2 containers: [02704fafe517 f269961c577e]
	I0819 04:36:30.604290   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 04:36:30.614641   18442 logs.go:276] 1 containers: [19b7edbe3a9f]
	I0819 04:36:30.614709   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 04:36:30.625466   18442 logs.go:276] 2 containers: [1b07c6af2b17 069748194d02]
	I0819 04:36:30.625543   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 04:36:30.635989   18442 logs.go:276] 1 containers: [140eb9e8ffdb]
	I0819 04:36:30.636055   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 04:36:30.646508   18442 logs.go:276] 2 containers: [da57f69785e6 390cd57e246c]
	I0819 04:36:30.646575   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 04:36:30.656839   18442 logs.go:276] 0 containers: []
	W0819 04:36:30.656850   18442 logs.go:278] No container was found matching "kindnet"
	I0819 04:36:30.656902   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 04:36:30.667188   18442 logs.go:276] 2 containers: [a7ed00b2340f b8b9fabe936d]
	I0819 04:36:30.667207   18442 logs.go:123] Gathering logs for storage-provisioner [a7ed00b2340f] ...
	I0819 04:36:30.667213   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7ed00b2340f"
	I0819 04:36:30.679115   18442 logs.go:123] Gathering logs for storage-provisioner [b8b9fabe936d] ...
	I0819 04:36:30.679126   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8b9fabe936d"
	I0819 04:36:30.691259   18442 logs.go:123] Gathering logs for kubelet ...
	I0819 04:36:30.691270   18442 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 04:36:30.730998   18442 logs.go:123] Gathering logs for kube-apiserver [c8fa750d9da6] ...
	I0819 04:36:30.731015   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c8fa750d9da6"
	I0819 04:36:30.749134   18442 logs.go:123] Gathering logs for etcd [f269961c577e] ...
	I0819 04:36:30.749145   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f269961c577e"
	I0819 04:36:30.763878   18442 logs.go:123] Gathering logs for kube-scheduler [069748194d02] ...
	I0819 04:36:30.763887   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 069748194d02"
	I0819 04:36:30.783295   18442 logs.go:123] Gathering logs for kube-controller-manager [da57f69785e6] ...
	I0819 04:36:30.783305   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da57f69785e6"
	I0819 04:36:30.803535   18442 logs.go:123] Gathering logs for coredns [19b7edbe3a9f] ...
	I0819 04:36:30.803544   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 19b7edbe3a9f"
	I0819 04:36:30.815305   18442 logs.go:123] Gathering logs for kube-controller-manager [390cd57e246c] ...
	I0819 04:36:30.815316   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 390cd57e246c"
	I0819 04:36:30.830504   18442 logs.go:123] Gathering logs for kube-scheduler [1b07c6af2b17] ...
	I0819 04:36:30.830519   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b07c6af2b17"
	I0819 04:36:30.842638   18442 logs.go:123] Gathering logs for Docker ...
	I0819 04:36:30.842648   18442 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 04:36:30.865124   18442 logs.go:123] Gathering logs for container status ...
	I0819 04:36:30.865132   18442 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 04:36:30.877033   18442 logs.go:123] Gathering logs for dmesg ...
	I0819 04:36:30.877044   18442 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 04:36:30.881637   18442 logs.go:123] Gathering logs for describe nodes ...
	I0819 04:36:30.881646   18442 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 04:36:30.916185   18442 logs.go:123] Gathering logs for kube-apiserver [3985c1d649a7] ...
	I0819 04:36:30.916200   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3985c1d649a7"
	I0819 04:36:30.954182   18442 logs.go:123] Gathering logs for etcd [02704fafe517] ...
	I0819 04:36:30.954192   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 02704fafe517"
	I0819 04:36:30.970554   18442 logs.go:123] Gathering logs for kube-proxy [140eb9e8ffdb] ...
	I0819 04:36:30.970564   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 140eb9e8ffdb"
	I0819 04:36:33.484189   18442 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 04:36:31.228275   17996 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 04:36:31.228467   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 04:36:31.246165   17996 logs.go:276] 1 containers: [a87ae25c67c2]
	I0819 04:36:31.246258   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 04:36:31.260191   17996 logs.go:276] 1 containers: [f455d8b0d489]
	I0819 04:36:31.260264   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 04:36:31.275132   17996 logs.go:276] 4 containers: [475e9da6b182 b3828a6a9ea6 a7da7cc69ccc b382a541c256]
	I0819 04:36:31.275214   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 04:36:31.289449   17996 logs.go:276] 1 containers: [9a7de925ae0b]
	I0819 04:36:31.289520   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 04:36:31.299873   17996 logs.go:276] 1 containers: [8a9318577d16]
	I0819 04:36:31.299949   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 04:36:31.316874   17996 logs.go:276] 1 containers: [9aa4015a0f4e]
	I0819 04:36:31.316943   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 04:36:31.327155   17996 logs.go:276] 0 containers: []
	W0819 04:36:31.327167   17996 logs.go:278] No container was found matching "kindnet"
	I0819 04:36:31.327220   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 04:36:31.337830   17996 logs.go:276] 1 containers: [24ed6391a78f]
	I0819 04:36:31.337848   17996 logs.go:123] Gathering logs for kubelet ...
	I0819 04:36:31.337854   17996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 04:36:31.372715   17996 logs.go:123] Gathering logs for coredns [475e9da6b182] ...
	I0819 04:36:31.372724   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 475e9da6b182"
	I0819 04:36:31.384104   17996 logs.go:123] Gathering logs for Docker ...
	I0819 04:36:31.384115   17996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 04:36:31.409399   17996 logs.go:123] Gathering logs for kube-proxy [8a9318577d16] ...
	I0819 04:36:31.409407   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8a9318577d16"
	I0819 04:36:31.421126   17996 logs.go:123] Gathering logs for container status ...
	I0819 04:36:31.421138   17996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 04:36:31.461302   17996 logs.go:123] Gathering logs for describe nodes ...
	I0819 04:36:31.461316   17996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 04:36:31.501109   17996 logs.go:123] Gathering logs for etcd [f455d8b0d489] ...
	I0819 04:36:31.501122   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f455d8b0d489"
	I0819 04:36:31.515900   17996 logs.go:123] Gathering logs for coredns [b3828a6a9ea6] ...
	I0819 04:36:31.515914   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3828a6a9ea6"
	I0819 04:36:31.527594   17996 logs.go:123] Gathering logs for coredns [b382a541c256] ...
	I0819 04:36:31.527607   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b382a541c256"
	I0819 04:36:31.539388   17996 logs.go:123] Gathering logs for storage-provisioner [24ed6391a78f] ...
	I0819 04:36:31.539398   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 24ed6391a78f"
	I0819 04:36:31.551184   17996 logs.go:123] Gathering logs for dmesg ...
	I0819 04:36:31.551200   17996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 04:36:31.555987   17996 logs.go:123] Gathering logs for kube-apiserver [a87ae25c67c2] ...
	I0819 04:36:31.555993   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a87ae25c67c2"
	I0819 04:36:31.570436   17996 logs.go:123] Gathering logs for coredns [a7da7cc69ccc] ...
	I0819 04:36:31.570448   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7da7cc69ccc"
	I0819 04:36:31.582856   17996 logs.go:123] Gathering logs for kube-controller-manager [9aa4015a0f4e] ...
	I0819 04:36:31.582867   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9aa4015a0f4e"
	I0819 04:36:31.600718   17996 logs.go:123] Gathering logs for kube-scheduler [9a7de925ae0b] ...
	I0819 04:36:31.600731   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9a7de925ae0b"
	I0819 04:36:34.117362   17996 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
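
By this point the two pollers have been cycling for over half a minute. The cadence visible in the timestamps (probe, ~5 s timeout, under a second of log gathering, a ~2-3 s pause, probe again) amounts to a retry loop along these lines (an illustrative sketch only, not minikube's actual Go implementation in api_server.go):

    until curl -ksf --max-time 5 https://10.0.2.15:8443/healthz >/dev/null; do
      # gather diagnostics between attempts, as the cycles above do
      sleep 3
    done
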
	I0819 04:36:38.486325   18442 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 04:36:38.486524   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 04:36:38.500365   18442 logs.go:276] 2 containers: [c8fa750d9da6 3985c1d649a7]
	I0819 04:36:38.500445   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 04:36:38.511240   18442 logs.go:276] 2 containers: [02704fafe517 f269961c577e]
	I0819 04:36:38.511313   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 04:36:38.522495   18442 logs.go:276] 1 containers: [19b7edbe3a9f]
	I0819 04:36:38.522561   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 04:36:38.533228   18442 logs.go:276] 2 containers: [1b07c6af2b17 069748194d02]
	I0819 04:36:38.533312   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 04:36:38.543819   18442 logs.go:276] 1 containers: [140eb9e8ffdb]
	I0819 04:36:38.543890   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 04:36:38.554271   18442 logs.go:276] 2 containers: [da57f69785e6 390cd57e246c]
	I0819 04:36:38.554344   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 04:36:38.564166   18442 logs.go:276] 0 containers: []
	W0819 04:36:38.564176   18442 logs.go:278] No container was found matching "kindnet"
	I0819 04:36:38.564240   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 04:36:38.574346   18442 logs.go:276] 2 containers: [a7ed00b2340f b8b9fabe936d]
	I0819 04:36:38.574366   18442 logs.go:123] Gathering logs for describe nodes ...
	I0819 04:36:38.574372   18442 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 04:36:38.613590   18442 logs.go:123] Gathering logs for kube-apiserver [3985c1d649a7] ...
	I0819 04:36:38.613600   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3985c1d649a7"
	I0819 04:36:38.651489   18442 logs.go:123] Gathering logs for kube-scheduler [1b07c6af2b17] ...
	I0819 04:36:38.651501   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b07c6af2b17"
	I0819 04:36:38.665927   18442 logs.go:123] Gathering logs for kube-proxy [140eb9e8ffdb] ...
	I0819 04:36:38.665940   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 140eb9e8ffdb"
	I0819 04:36:38.677819   18442 logs.go:123] Gathering logs for kubelet ...
	I0819 04:36:38.677830   18442 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 04:36:38.715936   18442 logs.go:123] Gathering logs for kube-apiserver [c8fa750d9da6] ...
	I0819 04:36:38.715947   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c8fa750d9da6"
	I0819 04:36:38.734796   18442 logs.go:123] Gathering logs for kube-scheduler [069748194d02] ...
	I0819 04:36:38.734809   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 069748194d02"
	I0819 04:36:38.750501   18442 logs.go:123] Gathering logs for Docker ...
	I0819 04:36:38.750511   18442 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 04:36:39.118676   17996 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 04:36:39.118855   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 04:36:39.133144   17996 logs.go:276] 1 containers: [a87ae25c67c2]
	I0819 04:36:39.133231   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 04:36:39.144787   17996 logs.go:276] 1 containers: [f455d8b0d489]
	I0819 04:36:39.144851   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 04:36:39.156128   17996 logs.go:276] 4 containers: [475e9da6b182 b3828a6a9ea6 a7da7cc69ccc b382a541c256]
	I0819 04:36:39.156205   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 04:36:39.172347   17996 logs.go:276] 1 containers: [9a7de925ae0b]
	I0819 04:36:39.172412   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 04:36:39.183031   17996 logs.go:276] 1 containers: [8a9318577d16]
	I0819 04:36:39.183101   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 04:36:39.193236   17996 logs.go:276] 1 containers: [9aa4015a0f4e]
	I0819 04:36:39.193301   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 04:36:39.204884   17996 logs.go:276] 0 containers: []
	W0819 04:36:39.204896   17996 logs.go:278] No container was found matching "kindnet"
	I0819 04:36:39.204960   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 04:36:39.217686   17996 logs.go:276] 1 containers: [24ed6391a78f]
	I0819 04:36:39.217703   17996 logs.go:123] Gathering logs for coredns [475e9da6b182] ...
	I0819 04:36:39.217709   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 475e9da6b182"
	I0819 04:36:39.229564   17996 logs.go:123] Gathering logs for coredns [a7da7cc69ccc] ...
	I0819 04:36:39.229577   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7da7cc69ccc"
	I0819 04:36:39.241676   17996 logs.go:123] Gathering logs for kube-controller-manager [9aa4015a0f4e] ...
	I0819 04:36:39.241690   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9aa4015a0f4e"
	I0819 04:36:39.260655   17996 logs.go:123] Gathering logs for storage-provisioner [24ed6391a78f] ...
	I0819 04:36:39.260664   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 24ed6391a78f"
	I0819 04:36:39.281661   17996 logs.go:123] Gathering logs for dmesg ...
	I0819 04:36:39.281674   17996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 04:36:39.286278   17996 logs.go:123] Gathering logs for kube-apiserver [a87ae25c67c2] ...
	I0819 04:36:39.286287   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a87ae25c67c2"
	I0819 04:36:39.300600   17996 logs.go:123] Gathering logs for kube-scheduler [9a7de925ae0b] ...
	I0819 04:36:39.300610   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9a7de925ae0b"
	I0819 04:36:39.314946   17996 logs.go:123] Gathering logs for kube-proxy [8a9318577d16] ...
	I0819 04:36:39.314959   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8a9318577d16"
	I0819 04:36:39.330030   17996 logs.go:123] Gathering logs for Docker ...
	I0819 04:36:39.330041   17996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 04:36:39.355239   17996 logs.go:123] Gathering logs for kubelet ...
	I0819 04:36:39.355247   17996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 04:36:39.389903   17996 logs.go:123] Gathering logs for coredns [b3828a6a9ea6] ...
	I0819 04:36:39.389915   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3828a6a9ea6"
	I0819 04:36:39.401528   17996 logs.go:123] Gathering logs for describe nodes ...
	I0819 04:36:39.401542   17996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 04:36:39.436792   17996 logs.go:123] Gathering logs for coredns [b382a541c256] ...
	I0819 04:36:39.436803   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b382a541c256"
	I0819 04:36:39.449094   17996 logs.go:123] Gathering logs for container status ...
	I0819 04:36:39.449107   17996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 04:36:39.461064   17996 logs.go:123] Gathering logs for etcd [f455d8b0d489] ...
	I0819 04:36:39.461078   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f455d8b0d489"
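	(The cycle above repeats one pattern per control-plane component: list every container, running or exited, whose name matches the k8s_<component> prefix, then tail the last 400 lines of each match. A minimal shell sketch of that pattern, with the filter and tail length taken verbatim from the log — the component list and loop wrapper are illustrative:

	    for component in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet storage-provisioner; do
	        # List all containers, including exited ones, named k8s_<component>*.
	        for id in $(docker ps -a --filter=name=k8s_${component} --format={{.ID}}); do
	            # Same 400-line tail logs.go uses above.
	            docker logs --tail 400 "$id"
	        done
	    done
	)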
	I0819 04:36:38.774033   18442 logs.go:123] Gathering logs for etcd [02704fafe517] ...
	I0819 04:36:38.774052   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 02704fafe517"
	I0819 04:36:38.789702   18442 logs.go:123] Gathering logs for coredns [19b7edbe3a9f] ...
	I0819 04:36:38.789713   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 19b7edbe3a9f"
	I0819 04:36:38.801721   18442 logs.go:123] Gathering logs for kube-controller-manager [da57f69785e6] ...
	I0819 04:36:38.801735   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da57f69785e6"
	I0819 04:36:38.820129   18442 logs.go:123] Gathering logs for storage-provisioner [b8b9fabe936d] ...
	I0819 04:36:38.820140   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8b9fabe936d"
	I0819 04:36:38.831543   18442 logs.go:123] Gathering logs for dmesg ...
	I0819 04:36:38.831555   18442 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 04:36:38.835625   18442 logs.go:123] Gathering logs for etcd [f269961c577e] ...
	I0819 04:36:38.835631   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f269961c577e"
	I0819 04:36:38.849990   18442 logs.go:123] Gathering logs for kube-controller-manager [390cd57e246c] ...
	I0819 04:36:38.850001   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 390cd57e246c"
	I0819 04:36:38.865129   18442 logs.go:123] Gathering logs for storage-provisioner [a7ed00b2340f] ...
	I0819 04:36:38.865139   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7ed00b2340f"
	I0819 04:36:38.876803   18442 logs.go:123] Gathering logs for container status ...
	I0819 04:36:38.876818   18442 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 04:36:41.389245   18442 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 04:36:41.976699   17996 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 04:36:46.391587   18442 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 04:36:46.391795   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 04:36:46.415636   18442 logs.go:276] 2 containers: [c8fa750d9da6 3985c1d649a7]
	I0819 04:36:46.415721   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 04:36:46.429521   18442 logs.go:276] 2 containers: [02704fafe517 f269961c577e]
	I0819 04:36:46.429598   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 04:36:46.440589   18442 logs.go:276] 1 containers: [19b7edbe3a9f]
	I0819 04:36:46.440661   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 04:36:46.450727   18442 logs.go:276] 2 containers: [1b07c6af2b17 069748194d02]
	I0819 04:36:46.450795   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 04:36:46.461592   18442 logs.go:276] 1 containers: [140eb9e8ffdb]
	I0819 04:36:46.461670   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 04:36:46.472657   18442 logs.go:276] 2 containers: [da57f69785e6 390cd57e246c]
	I0819 04:36:46.472725   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 04:36:46.484111   18442 logs.go:276] 0 containers: []
	W0819 04:36:46.484126   18442 logs.go:278] No container was found matching "kindnet"
	I0819 04:36:46.484186   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 04:36:46.494305   18442 logs.go:276] 2 containers: [a7ed00b2340f b8b9fabe936d]
	I0819 04:36:46.494322   18442 logs.go:123] Gathering logs for Docker ...
	I0819 04:36:46.494327   18442 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 04:36:46.516169   18442 logs.go:123] Gathering logs for container status ...
	I0819 04:36:46.516181   18442 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 04:36:46.528185   18442 logs.go:123] Gathering logs for dmesg ...
	I0819 04:36:46.528198   18442 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 04:36:46.532771   18442 logs.go:123] Gathering logs for etcd [02704fafe517] ...
	I0819 04:36:46.532780   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 02704fafe517"
	I0819 04:36:46.547255   18442 logs.go:123] Gathering logs for coredns [19b7edbe3a9f] ...
	I0819 04:36:46.547265   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 19b7edbe3a9f"
	I0819 04:36:46.558856   18442 logs.go:123] Gathering logs for kube-controller-manager [390cd57e246c] ...
	I0819 04:36:46.558867   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 390cd57e246c"
	I0819 04:36:46.579630   18442 logs.go:123] Gathering logs for kube-proxy [140eb9e8ffdb] ...
	I0819 04:36:46.579643   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 140eb9e8ffdb"
	I0819 04:36:46.602412   18442 logs.go:123] Gathering logs for kube-controller-manager [da57f69785e6] ...
	I0819 04:36:46.602425   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da57f69785e6"
	I0819 04:36:46.634454   18442 logs.go:123] Gathering logs for storage-provisioner [a7ed00b2340f] ...
	I0819 04:36:46.634468   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7ed00b2340f"
	I0819 04:36:46.646597   18442 logs.go:123] Gathering logs for describe nodes ...
	I0819 04:36:46.646608   18442 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 04:36:46.680551   18442 logs.go:123] Gathering logs for kube-apiserver [c8fa750d9da6] ...
	I0819 04:36:46.680563   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c8fa750d9da6"
	I0819 04:36:46.694548   18442 logs.go:123] Gathering logs for kube-apiserver [3985c1d649a7] ...
	I0819 04:36:46.694560   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3985c1d649a7"
	I0819 04:36:46.732774   18442 logs.go:123] Gathering logs for kube-scheduler [1b07c6af2b17] ...
	I0819 04:36:46.732785   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b07c6af2b17"
	I0819 04:36:46.745178   18442 logs.go:123] Gathering logs for kubelet ...
	I0819 04:36:46.745190   18442 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 04:36:46.784313   18442 logs.go:123] Gathering logs for etcd [f269961c577e] ...
	I0819 04:36:46.784326   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f269961c577e"
	I0819 04:36:46.799456   18442 logs.go:123] Gathering logs for kube-scheduler [069748194d02] ...
	I0819 04:36:46.799470   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 069748194d02"
	I0819 04:36:46.823396   18442 logs.go:123] Gathering logs for storage-provisioner [b8b9fabe936d] ...
	I0819 04:36:46.823407   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8b9fabe936d"
	I0819 04:36:46.978897   17996 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 04:36:46.979029   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 04:36:46.992476   17996 logs.go:276] 1 containers: [a87ae25c67c2]
	I0819 04:36:46.992556   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 04:36:47.004813   17996 logs.go:276] 1 containers: [f455d8b0d489]
	I0819 04:36:47.004880   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 04:36:47.015901   17996 logs.go:276] 4 containers: [475e9da6b182 b3828a6a9ea6 a7da7cc69ccc b382a541c256]
	I0819 04:36:47.015971   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 04:36:47.026339   17996 logs.go:276] 1 containers: [9a7de925ae0b]
	I0819 04:36:47.026409   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 04:36:47.037603   17996 logs.go:276] 1 containers: [8a9318577d16]
	I0819 04:36:47.037667   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 04:36:47.049031   17996 logs.go:276] 1 containers: [9aa4015a0f4e]
	I0819 04:36:47.049096   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 04:36:47.058993   17996 logs.go:276] 0 containers: []
	W0819 04:36:47.059006   17996 logs.go:278] No container was found matching "kindnet"
	I0819 04:36:47.059067   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 04:36:47.080560   17996 logs.go:276] 1 containers: [24ed6391a78f]
	I0819 04:36:47.080583   17996 logs.go:123] Gathering logs for dmesg ...
	I0819 04:36:47.080589   17996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 04:36:47.085155   17996 logs.go:123] Gathering logs for coredns [b382a541c256] ...
	I0819 04:36:47.085162   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b382a541c256"
	I0819 04:36:47.096805   17996 logs.go:123] Gathering logs for container status ...
	I0819 04:36:47.096815   17996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 04:36:47.108453   17996 logs.go:123] Gathering logs for kube-apiserver [a87ae25c67c2] ...
	I0819 04:36:47.108466   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a87ae25c67c2"
	I0819 04:36:47.123017   17996 logs.go:123] Gathering logs for coredns [a7da7cc69ccc] ...
	I0819 04:36:47.123029   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7da7cc69ccc"
	I0819 04:36:47.134615   17996 logs.go:123] Gathering logs for kube-controller-manager [9aa4015a0f4e] ...
	I0819 04:36:47.134628   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9aa4015a0f4e"
	I0819 04:36:47.152323   17996 logs.go:123] Gathering logs for storage-provisioner [24ed6391a78f] ...
	I0819 04:36:47.152338   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 24ed6391a78f"
	I0819 04:36:47.163922   17996 logs.go:123] Gathering logs for etcd [f455d8b0d489] ...
	I0819 04:36:47.163932   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f455d8b0d489"
	I0819 04:36:47.180611   17996 logs.go:123] Gathering logs for coredns [475e9da6b182] ...
	I0819 04:36:47.180623   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 475e9da6b182"
	I0819 04:36:47.192539   17996 logs.go:123] Gathering logs for kube-scheduler [9a7de925ae0b] ...
	I0819 04:36:47.192555   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9a7de925ae0b"
	I0819 04:36:47.208295   17996 logs.go:123] Gathering logs for Docker ...
	I0819 04:36:47.208307   17996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 04:36:47.233132   17996 logs.go:123] Gathering logs for kubelet ...
	I0819 04:36:47.233141   17996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 04:36:47.267342   17996 logs.go:123] Gathering logs for describe nodes ...
	I0819 04:36:47.267355   17996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 04:36:47.305743   17996 logs.go:123] Gathering logs for coredns [b3828a6a9ea6] ...
	I0819 04:36:47.305756   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3828a6a9ea6"
	I0819 04:36:47.317298   17996 logs.go:123] Gathering logs for kube-proxy [8a9318577d16] ...
	I0819 04:36:47.317310   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8a9318577d16"
	I0819 04:36:49.834890   17996 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 04:36:49.336747   18442 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 04:36:54.837140   17996 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 04:36:54.837234   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 04:36:54.848286   17996 logs.go:276] 1 containers: [a87ae25c67c2]
	I0819 04:36:54.848372   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 04:36:54.861188   17996 logs.go:276] 1 containers: [f455d8b0d489]
	I0819 04:36:54.861256   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 04:36:54.871608   17996 logs.go:276] 4 containers: [475e9da6b182 b3828a6a9ea6 a7da7cc69ccc b382a541c256]
	I0819 04:36:54.871683   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 04:36:54.882667   17996 logs.go:276] 1 containers: [9a7de925ae0b]
	I0819 04:36:54.882735   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 04:36:54.896384   17996 logs.go:276] 1 containers: [8a9318577d16]
	I0819 04:36:54.896459   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 04:36:54.907582   17996 logs.go:276] 1 containers: [9aa4015a0f4e]
	I0819 04:36:54.907654   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 04:36:54.917848   17996 logs.go:276] 0 containers: []
	W0819 04:36:54.917859   17996 logs.go:278] No container was found matching "kindnet"
	I0819 04:36:54.917914   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 04:36:54.928529   17996 logs.go:276] 1 containers: [24ed6391a78f]
	I0819 04:36:54.928547   17996 logs.go:123] Gathering logs for storage-provisioner [24ed6391a78f] ...
	I0819 04:36:54.928552   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 24ed6391a78f"
	I0819 04:36:54.939777   17996 logs.go:123] Gathering logs for kube-apiserver [a87ae25c67c2] ...
	I0819 04:36:54.939788   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a87ae25c67c2"
	I0819 04:36:54.953466   17996 logs.go:123] Gathering logs for etcd [f455d8b0d489] ...
	I0819 04:36:54.953478   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f455d8b0d489"
	I0819 04:36:54.967441   17996 logs.go:123] Gathering logs for coredns [b3828a6a9ea6] ...
	I0819 04:36:54.967456   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3828a6a9ea6"
	I0819 04:36:54.978970   17996 logs.go:123] Gathering logs for coredns [b382a541c256] ...
	I0819 04:36:54.978980   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b382a541c256"
	I0819 04:36:54.990725   17996 logs.go:123] Gathering logs for Docker ...
	I0819 04:36:54.990739   17996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 04:36:55.014368   17996 logs.go:123] Gathering logs for describe nodes ...
	I0819 04:36:55.014379   17996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 04:36:55.050907   17996 logs.go:123] Gathering logs for coredns [475e9da6b182] ...
	I0819 04:36:55.050920   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 475e9da6b182"
	I0819 04:36:55.062902   17996 logs.go:123] Gathering logs for kube-proxy [8a9318577d16] ...
	I0819 04:36:55.062915   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8a9318577d16"
	I0819 04:36:55.074551   17996 logs.go:123] Gathering logs for kubelet ...
	I0819 04:36:55.074564   17996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 04:36:55.109609   17996 logs.go:123] Gathering logs for coredns [a7da7cc69ccc] ...
	I0819 04:36:55.109623   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7da7cc69ccc"
	I0819 04:36:55.121697   17996 logs.go:123] Gathering logs for kube-scheduler [9a7de925ae0b] ...
	I0819 04:36:55.121709   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9a7de925ae0b"
	I0819 04:36:55.136374   17996 logs.go:123] Gathering logs for kube-controller-manager [9aa4015a0f4e] ...
	I0819 04:36:55.136388   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9aa4015a0f4e"
	I0819 04:36:55.154378   17996 logs.go:123] Gathering logs for container status ...
	I0819 04:36:55.154387   17996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 04:36:55.166286   17996 logs.go:123] Gathering logs for dmesg ...
	I0819 04:36:55.166298   17996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 04:36:54.338901   18442 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 04:36:54.339106   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 04:36:54.359500   18442 logs.go:276] 2 containers: [c8fa750d9da6 3985c1d649a7]
	I0819 04:36:54.359603   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 04:36:54.374475   18442 logs.go:276] 2 containers: [02704fafe517 f269961c577e]
	I0819 04:36:54.374560   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 04:36:54.386261   18442 logs.go:276] 1 containers: [19b7edbe3a9f]
	I0819 04:36:54.386324   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 04:36:54.396909   18442 logs.go:276] 2 containers: [1b07c6af2b17 069748194d02]
	I0819 04:36:54.396970   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 04:36:54.407675   18442 logs.go:276] 1 containers: [140eb9e8ffdb]
	I0819 04:36:54.407743   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 04:36:54.419057   18442 logs.go:276] 2 containers: [da57f69785e6 390cd57e246c]
	I0819 04:36:54.419129   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 04:36:54.429210   18442 logs.go:276] 0 containers: []
	W0819 04:36:54.429223   18442 logs.go:278] No container was found matching "kindnet"
	I0819 04:36:54.429280   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 04:36:54.439680   18442 logs.go:276] 2 containers: [a7ed00b2340f b8b9fabe936d]
	I0819 04:36:54.439699   18442 logs.go:123] Gathering logs for describe nodes ...
	I0819 04:36:54.439705   18442 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 04:36:54.475009   18442 logs.go:123] Gathering logs for kube-apiserver [c8fa750d9da6] ...
	I0819 04:36:54.475021   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c8fa750d9da6"
	I0819 04:36:54.489411   18442 logs.go:123] Gathering logs for kube-apiserver [3985c1d649a7] ...
	I0819 04:36:54.489425   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3985c1d649a7"
	I0819 04:36:54.528600   18442 logs.go:123] Gathering logs for coredns [19b7edbe3a9f] ...
	I0819 04:36:54.528614   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 19b7edbe3a9f"
	I0819 04:36:54.540016   18442 logs.go:123] Gathering logs for storage-provisioner [b8b9fabe936d] ...
	I0819 04:36:54.540029   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8b9fabe936d"
	I0819 04:36:54.551532   18442 logs.go:123] Gathering logs for Docker ...
	I0819 04:36:54.551544   18442 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 04:36:54.572659   18442 logs.go:123] Gathering logs for kubelet ...
	I0819 04:36:54.572667   18442 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 04:36:54.611101   18442 logs.go:123] Gathering logs for etcd [f269961c577e] ...
	I0819 04:36:54.611115   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f269961c577e"
	I0819 04:36:54.630067   18442 logs.go:123] Gathering logs for kube-scheduler [1b07c6af2b17] ...
	I0819 04:36:54.630079   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b07c6af2b17"
	I0819 04:36:54.641699   18442 logs.go:123] Gathering logs for kube-proxy [140eb9e8ffdb] ...
	I0819 04:36:54.641711   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 140eb9e8ffdb"
	I0819 04:36:54.653919   18442 logs.go:123] Gathering logs for storage-provisioner [a7ed00b2340f] ...
	I0819 04:36:54.653932   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7ed00b2340f"
	I0819 04:36:54.665417   18442 logs.go:123] Gathering logs for etcd [02704fafe517] ...
	I0819 04:36:54.665427   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 02704fafe517"
	I0819 04:36:54.682505   18442 logs.go:123] Gathering logs for kube-scheduler [069748194d02] ...
	I0819 04:36:54.682518   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 069748194d02"
	I0819 04:36:54.698612   18442 logs.go:123] Gathering logs for dmesg ...
	I0819 04:36:54.698622   18442 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 04:36:54.702583   18442 logs.go:123] Gathering logs for kube-controller-manager [da57f69785e6] ...
	I0819 04:36:54.702592   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da57f69785e6"
	I0819 04:36:54.720439   18442 logs.go:123] Gathering logs for kube-controller-manager [390cd57e246c] ...
	I0819 04:36:54.720450   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 390cd57e246c"
	I0819 04:36:54.736526   18442 logs.go:123] Gathering logs for container status ...
	I0819 04:36:54.736539   18442 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 04:36:57.249533   18442 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 04:36:57.673556   17996 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 04:37:02.251856   18442 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 04:37:02.251996   18442 kubeadm.go:597] duration metric: took 4m3.883317042s to restartPrimaryControlPlane
	W0819 04:37:02.252137   18442 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0819 04:37:02.252194   18442 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0819 04:37:03.328188   18442 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force": (1.076003041s)
	I0819 04:37:03.328244   18442 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 04:37:03.333386   18442 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0819 04:37:03.336263   18442 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0819 04:37:03.339005   18442 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0819 04:37:03.339011   18442 kubeadm.go:157] found existing configuration files:
	
	I0819 04:37:03.339034   18442 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:53420 /etc/kubernetes/admin.conf
	I0819 04:37:03.341808   18442 kubeadm.go:163] "https://control-plane.minikube.internal:53420" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:53420 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0819 04:37:03.341829   18442 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0819 04:37:03.344597   18442 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:53420 /etc/kubernetes/kubelet.conf
	I0819 04:37:03.347807   18442 kubeadm.go:163] "https://control-plane.minikube.internal:53420" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:53420 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0819 04:37:03.347825   18442 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0819 04:37:03.350487   18442 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:53420 /etc/kubernetes/controller-manager.conf
	I0819 04:37:03.353090   18442 kubeadm.go:163] "https://control-plane.minikube.internal:53420" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:53420 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0819 04:37:03.353116   18442 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0819 04:37:03.356399   18442 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:53420 /etc/kubernetes/scheduler.conf
	I0819 04:37:03.359853   18442 kubeadm.go:163] "https://control-plane.minikube.internal:53420" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:53420 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0819 04:37:03.359878   18442 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
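	(The four checks above apply one rule per kubeconfig: if the expected control-plane endpoint is absent from the file — or the file is missing entirely, as in this run — delete it so `kubeadm init` can regenerate it. Condensed, with the endpoint and file names taken from the log and the loop added for illustration:

	    endpoint="https://control-plane.minikube.internal:53420"
	    for conf in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
	        # grep exits non-zero when the endpoint is missing or the file does
	        # not exist; either way the possibly-stale config is removed.
	        sudo grep "$endpoint" "/etc/kubernetes/$conf" || sudo rm -f "/etc/kubernetes/$conf"
	    done
	)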
	I0819 04:37:03.362770   18442 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0819 04:37:03.381476   18442 kubeadm.go:310] [init] Using Kubernetes version: v1.24.1
	I0819 04:37:03.381507   18442 kubeadm.go:310] [preflight] Running pre-flight checks
	I0819 04:37:03.432333   18442 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0819 04:37:03.432393   18442 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0819 04:37:03.432450   18442 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0819 04:37:03.485102   18442 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0819 04:37:03.493291   18442 out.go:235]   - Generating certificates and keys ...
	I0819 04:37:03.493325   18442 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0819 04:37:03.493353   18442 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0819 04:37:03.493425   18442 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0819 04:37:03.493485   18442 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0819 04:37:03.493555   18442 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0819 04:37:03.493583   18442 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0819 04:37:03.493633   18442 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0819 04:37:03.493665   18442 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0819 04:37:03.493706   18442 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0819 04:37:03.493745   18442 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0819 04:37:03.493763   18442 kubeadm.go:310] [certs] Using the existing "sa" key
	I0819 04:37:03.493795   18442 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0819 04:37:03.611579   18442 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0819 04:37:03.725338   18442 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0819 04:37:03.770146   18442 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0819 04:37:03.926758   18442 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0819 04:37:03.957772   18442 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0819 04:37:03.958139   18442 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0819 04:37:03.958178   18442 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0819 04:37:04.043932   18442 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0819 04:37:02.674602   17996 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": dial tcp 10.0.2.15:8443: i/o timeout (Client.Timeout exceeded while awaiting headers)
	I0819 04:37:02.674711   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 04:37:02.686571   17996 logs.go:276] 1 containers: [a87ae25c67c2]
	I0819 04:37:02.686641   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 04:37:02.705203   17996 logs.go:276] 1 containers: [f455d8b0d489]
	I0819 04:37:02.705281   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 04:37:02.717481   17996 logs.go:276] 4 containers: [475e9da6b182 b3828a6a9ea6 a7da7cc69ccc b382a541c256]
	I0819 04:37:02.717562   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 04:37:02.729390   17996 logs.go:276] 1 containers: [9a7de925ae0b]
	I0819 04:37:02.729464   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 04:37:02.740420   17996 logs.go:276] 1 containers: [8a9318577d16]
	I0819 04:37:02.740493   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 04:37:02.752609   17996 logs.go:276] 1 containers: [9aa4015a0f4e]
	I0819 04:37:02.752678   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 04:37:02.765191   17996 logs.go:276] 0 containers: []
	W0819 04:37:02.765204   17996 logs.go:278] No container was found matching "kindnet"
	I0819 04:37:02.765262   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 04:37:02.777261   17996 logs.go:276] 1 containers: [24ed6391a78f]
	I0819 04:37:02.777280   17996 logs.go:123] Gathering logs for kubelet ...
	I0819 04:37:02.777288   17996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 04:37:02.814400   17996 logs.go:123] Gathering logs for kube-apiserver [a87ae25c67c2] ...
	I0819 04:37:02.814420   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a87ae25c67c2"
	I0819 04:37:02.830464   17996 logs.go:123] Gathering logs for coredns [475e9da6b182] ...
	I0819 04:37:02.830477   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 475e9da6b182"
	I0819 04:37:02.852083   17996 logs.go:123] Gathering logs for kube-proxy [8a9318577d16] ...
	I0819 04:37:02.852094   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8a9318577d16"
	I0819 04:37:02.865081   17996 logs.go:123] Gathering logs for kube-controller-manager [9aa4015a0f4e] ...
	I0819 04:37:02.865093   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9aa4015a0f4e"
	I0819 04:37:02.884799   17996 logs.go:123] Gathering logs for Docker ...
	I0819 04:37:02.884817   17996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 04:37:02.910364   17996 logs.go:123] Gathering logs for container status ...
	I0819 04:37:02.910376   17996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 04:37:02.922577   17996 logs.go:123] Gathering logs for dmesg ...
	I0819 04:37:02.922592   17996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 04:37:02.927381   17996 logs.go:123] Gathering logs for coredns [b3828a6a9ea6] ...
	I0819 04:37:02.927390   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3828a6a9ea6"
	I0819 04:37:02.942123   17996 logs.go:123] Gathering logs for coredns [a7da7cc69ccc] ...
	I0819 04:37:02.942136   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7da7cc69ccc"
	I0819 04:37:02.955025   17996 logs.go:123] Gathering logs for coredns [b382a541c256] ...
	I0819 04:37:02.955036   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b382a541c256"
	I0819 04:37:02.968208   17996 logs.go:123] Gathering logs for kube-scheduler [9a7de925ae0b] ...
	I0819 04:37:02.968220   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9a7de925ae0b"
	I0819 04:37:02.983897   17996 logs.go:123] Gathering logs for storage-provisioner [24ed6391a78f] ...
	I0819 04:37:02.983912   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 24ed6391a78f"
	I0819 04:37:02.997928   17996 logs.go:123] Gathering logs for describe nodes ...
	I0819 04:37:02.997941   17996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 04:37:03.036399   17996 logs.go:123] Gathering logs for etcd [f455d8b0d489] ...
	I0819 04:37:03.036412   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f455d8b0d489"
	I0819 04:37:04.048131   18442 out.go:235]   - Booting up control plane ...
	I0819 04:37:04.048179   18442 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0819 04:37:04.048222   18442 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0819 04:37:04.048262   18442 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0819 04:37:04.048306   18442 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0819 04:37:04.048405   18442 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0819 04:37:09.044540   18442 kubeadm.go:310] [apiclient] All control plane components are healthy after 5.002590 seconds
	I0819 04:37:09.044656   18442 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0819 04:37:09.049172   18442 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0819 04:37:09.565382   18442 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0819 04:37:09.565714   18442 kubeadm.go:310] [mark-control-plane] Marking the node stopped-upgrade-783000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0819 04:37:10.074205   18442 kubeadm.go:310] [bootstrap-token] Using token: rv7b32.4t5lzmukqj5o3yq7
	I0819 04:37:10.080652   18442 out.go:235]   - Configuring RBAC rules ...
	I0819 04:37:10.080783   18442 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0819 04:37:10.080943   18442 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0819 04:37:10.087791   18442 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0819 04:37:10.089227   18442 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0819 04:37:10.090963   18442 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0819 04:37:10.092416   18442 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0819 04:37:10.097161   18442 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0819 04:37:10.277847   18442 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0819 04:37:10.480613   18442 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0819 04:37:10.481044   18442 kubeadm.go:310] 
	I0819 04:37:10.481079   18442 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0819 04:37:10.481086   18442 kubeadm.go:310] 
	I0819 04:37:10.481141   18442 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0819 04:37:10.481165   18442 kubeadm.go:310] 
	I0819 04:37:10.481184   18442 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0819 04:37:10.481264   18442 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0819 04:37:10.481305   18442 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0819 04:37:10.481325   18442 kubeadm.go:310] 
	I0819 04:37:10.481362   18442 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0819 04:37:10.481369   18442 kubeadm.go:310] 
	I0819 04:37:10.481403   18442 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0819 04:37:10.481410   18442 kubeadm.go:310] 
	I0819 04:37:10.481446   18442 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0819 04:37:10.481490   18442 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0819 04:37:10.481575   18442 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0819 04:37:10.481613   18442 kubeadm.go:310] 
	I0819 04:37:10.481710   18442 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0819 04:37:10.481773   18442 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0819 04:37:10.481777   18442 kubeadm.go:310] 
	I0819 04:37:10.481829   18442 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token rv7b32.4t5lzmukqj5o3yq7 \
	I0819 04:37:10.481881   18442 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:fdec06fb19d9977c9b3b338deaa57f7eb3ba1844358bb196808407a1fb1d5577 \
	I0819 04:37:10.481893   18442 kubeadm.go:310] 	--control-plane 
	I0819 04:37:10.481895   18442 kubeadm.go:310] 
	I0819 04:37:10.481987   18442 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0819 04:37:10.481993   18442 kubeadm.go:310] 
	I0819 04:37:10.482049   18442 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token rv7b32.4t5lzmukqj5o3yq7 \
	I0819 04:37:10.482122   18442 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:fdec06fb19d9977c9b3b338deaa57f7eb3ba1844358bb196808407a1fb1d5577 
	I0819 04:37:10.482196   18442 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0819 04:37:10.482207   18442 cni.go:84] Creating CNI manager for ""
	I0819 04:37:10.482216   18442 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0819 04:37:10.489180   18442 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0819 04:37:05.553358   17996 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 04:37:10.493235   18442 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0819 04:37:10.496646   18442 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
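	(The 496-byte conflist itself is not echoed into the log. A bridge CNI configuration of the kind written to /etc/cni/net.d generally resembles the sketch below; every field value here is illustrative, not the actual payload:

	    sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
	    {
	      "cniVersion": "0.3.1",
	      "name": "bridge",
	      "plugins": [
	        {
	          "type": "bridge",
	          "bridge": "bridge",
	          "isDefaultGateway": true,
	          "ipMasq": true,
	          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
	        },
	        { "type": "portmap", "capabilities": { "portMappings": true } }
	      ]
	    }
	    EOF
	)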
	I0819 04:37:10.501609   18442 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0819 04:37:10.501658   18442 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes stopped-upgrade-783000 minikube.k8s.io/updated_at=2024_08_19T04_37_10_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=c539cede7c104fd836c3af55c4ca24a6409a3ce6 minikube.k8s.io/name=stopped-upgrade-783000 minikube.k8s.io/primary=true
	I0819 04:37:10.501658   18442 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 04:37:10.509249   18442 ops.go:34] apiserver oom_adj: -16
	I0819 04:37:10.536193   18442 kubeadm.go:1113] duration metric: took 34.572958ms to wait for elevateKubeSystemPrivileges
	I0819 04:37:10.544660   18442 kubeadm.go:394] duration metric: took 4m12.191839417s to StartCluster
	I0819 04:37:10.544680   18442 settings.go:142] acquiring lock: {Name:mk0efade08e7fded56aa74c9b61139ee991f6648 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 04:37:10.544774   18442 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19479-15750/kubeconfig
	I0819 04:37:10.545214   18442 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19479-15750/kubeconfig: {Name:mkc1a7b531aa1d2d8dba135f7548c07a5ca371ca Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 04:37:10.545436   18442 start.go:235] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0819 04:37:10.545535   18442 config.go:182] Loaded profile config "stopped-upgrade-783000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0819 04:37:10.545480   18442 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0819 04:37:10.545551   18442 addons.go:69] Setting storage-provisioner=true in profile "stopped-upgrade-783000"
	I0819 04:37:10.545564   18442 addons.go:69] Setting default-storageclass=true in profile "stopped-upgrade-783000"
	I0819 04:37:10.545579   18442 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "stopped-upgrade-783000"
	I0819 04:37:10.545569   18442 addons.go:234] Setting addon storage-provisioner=true in "stopped-upgrade-783000"
	W0819 04:37:10.545594   18442 addons.go:243] addon storage-provisioner should already be in state true
	I0819 04:37:10.545604   18442 host.go:66] Checking if "stopped-upgrade-783000" exists ...
	I0819 04:37:10.548167   18442 out.go:177] * Verifying Kubernetes components...
	I0819 04:37:10.548860   18442 kapi.go:59] client config for stopped-upgrade-783000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19479-15750/.minikube/profiles/stopped-upgrade-783000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19479-15750/.minikube/profiles/stopped-upgrade-783000/client.key", CAFile:"/Users/jenkins/minikube-integration/19479-15750/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1021bd610), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0819 04:37:10.552503   18442 addons.go:234] Setting addon default-storageclass=true in "stopped-upgrade-783000"
	W0819 04:37:10.552508   18442 addons.go:243] addon default-storageclass should already be in state true
	I0819 04:37:10.552519   18442 host.go:66] Checking if "stopped-upgrade-783000" exists ...
	I0819 04:37:10.553097   18442 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0819 04:37:10.553103   18442 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0819 04:37:10.553108   18442 sshutil.go:53] new ssh client: &{IP:localhost Port:53385 SSHKeyPath:/Users/jenkins/minikube-integration/19479-15750/.minikube/machines/stopped-upgrade-783000/id_rsa Username:docker}
	I0819 04:37:10.556191   18442 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0819 04:37:10.562340   18442 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 04:37:10.566227   18442 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0819 04:37:10.566239   18442 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0819 04:37:10.566250   18442 sshutil.go:53] new ssh client: &{IP:localhost Port:53385 SSHKeyPath:/Users/jenkins/minikube-integration/19479-15750/.minikube/machines/stopped-upgrade-783000/id_rsa Username:docker}
	I0819 04:37:10.657907   18442 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0819 04:37:10.663451   18442 api_server.go:52] waiting for apiserver process to appear ...
	I0819 04:37:10.663501   18442 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 04:37:10.667554   18442 api_server.go:72] duration metric: took 122.106459ms to wait for apiserver process to appear ...
	I0819 04:37:10.667563   18442 api_server.go:88] waiting for apiserver healthz status ...
	I0819 04:37:10.667571   18442 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 04:37:10.710813   18442 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0819 04:37:10.749912   18442 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0819 04:37:11.110601   18442 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0819 04:37:11.110615   18442 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0819 04:37:10.553574   17996 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 04:37:10.553641   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 04:37:10.579563   17996 logs.go:276] 1 containers: [a87ae25c67c2]
	I0819 04:37:10.579636   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 04:37:10.607928   17996 logs.go:276] 1 containers: [f455d8b0d489]
	I0819 04:37:10.608003   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 04:37:10.620002   17996 logs.go:276] 4 containers: [475e9da6b182 b3828a6a9ea6 a7da7cc69ccc b382a541c256]
	I0819 04:37:10.620074   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 04:37:10.630865   17996 logs.go:276] 1 containers: [9a7de925ae0b]
	I0819 04:37:10.630933   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 04:37:10.641486   17996 logs.go:276] 1 containers: [8a9318577d16]
	I0819 04:37:10.641556   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 04:37:10.652967   17996 logs.go:276] 1 containers: [9aa4015a0f4e]
	I0819 04:37:10.653034   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 04:37:10.671455   17996 logs.go:276] 0 containers: []
	W0819 04:37:10.671464   17996 logs.go:278] No container was found matching "kindnet"
	I0819 04:37:10.671509   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 04:37:10.681645   17996 logs.go:276] 1 containers: [24ed6391a78f]
	I0819 04:37:10.681663   17996 logs.go:123] Gathering logs for kube-proxy [8a9318577d16] ...
	I0819 04:37:10.681668   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8a9318577d16"
	I0819 04:37:10.693256   17996 logs.go:123] Gathering logs for kube-controller-manager [9aa4015a0f4e] ...
	I0819 04:37:10.693265   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9aa4015a0f4e"
	I0819 04:37:10.710959   17996 logs.go:123] Gathering logs for container status ...
	I0819 04:37:10.710971   17996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 04:37:10.723349   17996 logs.go:123] Gathering logs for coredns [b3828a6a9ea6] ...
	I0819 04:37:10.723361   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3828a6a9ea6"
	I0819 04:37:10.736385   17996 logs.go:123] Gathering logs for coredns [a7da7cc69ccc] ...
	I0819 04:37:10.736398   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7da7cc69ccc"
	I0819 04:37:10.749006   17996 logs.go:123] Gathering logs for coredns [b382a541c256] ...
	I0819 04:37:10.749022   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b382a541c256"
	I0819 04:37:10.765802   17996 logs.go:123] Gathering logs for Docker ...
	I0819 04:37:10.765815   17996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 04:37:10.792188   17996 logs.go:123] Gathering logs for dmesg ...
	I0819 04:37:10.792208   17996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 04:37:10.797524   17996 logs.go:123] Gathering logs for describe nodes ...
	I0819 04:37:10.797535   17996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 04:37:10.840458   17996 logs.go:123] Gathering logs for coredns [475e9da6b182] ...
	I0819 04:37:10.840473   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 475e9da6b182"
	I0819 04:37:10.857358   17996 logs.go:123] Gathering logs for storage-provisioner [24ed6391a78f] ...
	I0819 04:37:10.857372   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 24ed6391a78f"
	I0819 04:37:10.869643   17996 logs.go:123] Gathering logs for kubelet ...
	I0819 04:37:10.869654   17996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 04:37:10.904293   17996 logs.go:123] Gathering logs for kube-apiserver [a87ae25c67c2] ...
	I0819 04:37:10.904309   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a87ae25c67c2"
	I0819 04:37:10.919561   17996 logs.go:123] Gathering logs for etcd [f455d8b0d489] ...
	I0819 04:37:10.919579   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f455d8b0d489"
	I0819 04:37:10.934340   17996 logs.go:123] Gathering logs for kube-scheduler [9a7de925ae0b] ...
	I0819 04:37:10.934355   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9a7de925ae0b"
	I0819 04:37:13.454595   17996 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 04:37:15.669479   18442 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 04:37:15.669531   18442 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 04:37:18.456858   17996 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 04:37:18.457261   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 04:37:18.489185   17996 logs.go:276] 1 containers: [a87ae25c67c2]
	I0819 04:37:18.489320   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 04:37:18.509043   17996 logs.go:276] 1 containers: [f455d8b0d489]
	I0819 04:37:18.509130   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 04:37:18.524059   17996 logs.go:276] 4 containers: [475e9da6b182 b3828a6a9ea6 a7da7cc69ccc b382a541c256]
	I0819 04:37:18.524144   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 04:37:18.536454   17996 logs.go:276] 1 containers: [9a7de925ae0b]
	I0819 04:37:18.536527   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 04:37:18.548106   17996 logs.go:276] 1 containers: [8a9318577d16]
	I0819 04:37:18.548170   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 04:37:18.558919   17996 logs.go:276] 1 containers: [9aa4015a0f4e]
	I0819 04:37:18.558979   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 04:37:18.579873   17996 logs.go:276] 0 containers: []
	W0819 04:37:18.579885   17996 logs.go:278] No container was found matching "kindnet"
	I0819 04:37:18.579950   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 04:37:18.599808   17996 logs.go:276] 1 containers: [24ed6391a78f]
	I0819 04:37:18.599828   17996 logs.go:123] Gathering logs for kube-proxy [8a9318577d16] ...
	I0819 04:37:18.599833   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8a9318577d16"
	I0819 04:37:18.613108   17996 logs.go:123] Gathering logs for storage-provisioner [24ed6391a78f] ...
	I0819 04:37:18.613121   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 24ed6391a78f"
	I0819 04:37:18.625603   17996 logs.go:123] Gathering logs for kubelet ...
	I0819 04:37:18.625615   17996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 04:37:18.661172   17996 logs.go:123] Gathering logs for describe nodes ...
	I0819 04:37:18.661180   17996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 04:37:18.697458   17996 logs.go:123] Gathering logs for coredns [475e9da6b182] ...
	I0819 04:37:18.697469   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 475e9da6b182"
	I0819 04:37:18.710318   17996 logs.go:123] Gathering logs for coredns [b382a541c256] ...
	I0819 04:37:18.710329   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b382a541c256"
	I0819 04:37:18.722889   17996 logs.go:123] Gathering logs for dmesg ...
	I0819 04:37:18.722901   17996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 04:37:18.727684   17996 logs.go:123] Gathering logs for coredns [a7da7cc69ccc] ...
	I0819 04:37:18.727692   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7da7cc69ccc"
	I0819 04:37:18.739608   17996 logs.go:123] Gathering logs for Docker ...
	I0819 04:37:18.739619   17996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 04:37:18.762421   17996 logs.go:123] Gathering logs for kube-apiserver [a87ae25c67c2] ...
	I0819 04:37:18.762430   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a87ae25c67c2"
	I0819 04:37:18.780340   17996 logs.go:123] Gathering logs for etcd [f455d8b0d489] ...
	I0819 04:37:18.780350   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f455d8b0d489"
	I0819 04:37:18.794055   17996 logs.go:123] Gathering logs for coredns [b3828a6a9ea6] ...
	I0819 04:37:18.794066   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3828a6a9ea6"
	I0819 04:37:18.805783   17996 logs.go:123] Gathering logs for kube-controller-manager [9aa4015a0f4e] ...
	I0819 04:37:18.805792   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9aa4015a0f4e"
	I0819 04:37:18.823515   17996 logs.go:123] Gathering logs for kube-scheduler [9a7de925ae0b] ...
	I0819 04:37:18.823526   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9a7de925ae0b"
	I0819 04:37:18.837789   17996 logs.go:123] Gathering logs for container status ...
	I0819 04:37:18.837802   17996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 04:37:20.669828   18442 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 04:37:20.669869   18442 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 04:37:21.350701   17996 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 04:37:25.670230   18442 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 04:37:25.670280   18442 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 04:37:26.352929   17996 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 04:37:26.353203   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 04:37:26.371980   17996 logs.go:276] 1 containers: [a87ae25c67c2]
	I0819 04:37:26.372069   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 04:37:26.385859   17996 logs.go:276] 1 containers: [f455d8b0d489]
	I0819 04:37:26.385936   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 04:37:26.397668   17996 logs.go:276] 4 containers: [475e9da6b182 b3828a6a9ea6 a7da7cc69ccc b382a541c256]
	I0819 04:37:26.397740   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 04:37:26.412957   17996 logs.go:276] 1 containers: [9a7de925ae0b]
	I0819 04:37:26.413028   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 04:37:26.423939   17996 logs.go:276] 1 containers: [8a9318577d16]
	I0819 04:37:26.424013   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 04:37:26.434819   17996 logs.go:276] 1 containers: [9aa4015a0f4e]
	I0819 04:37:26.434889   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 04:37:26.445348   17996 logs.go:276] 0 containers: []
	W0819 04:37:26.445360   17996 logs.go:278] No container was found matching "kindnet"
	I0819 04:37:26.445426   17996 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 04:37:26.456472   17996 logs.go:276] 1 containers: [24ed6391a78f]
	I0819 04:37:26.456491   17996 logs.go:123] Gathering logs for etcd [f455d8b0d489] ...
	I0819 04:37:26.456496   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f455d8b0d489"
	I0819 04:37:26.471189   17996 logs.go:123] Gathering logs for coredns [a7da7cc69ccc] ...
	I0819 04:37:26.471202   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7da7cc69ccc"
	I0819 04:37:26.483036   17996 logs.go:123] Gathering logs for kube-proxy [8a9318577d16] ...
	I0819 04:37:26.483048   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8a9318577d16"
	I0819 04:37:26.495261   17996 logs.go:123] Gathering logs for coredns [475e9da6b182] ...
	I0819 04:37:26.495272   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 475e9da6b182"
	I0819 04:37:26.507032   17996 logs.go:123] Gathering logs for coredns [b382a541c256] ...
	I0819 04:37:26.507044   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b382a541c256"
	I0819 04:37:26.519290   17996 logs.go:123] Gathering logs for kube-scheduler [9a7de925ae0b] ...
	I0819 04:37:26.519301   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9a7de925ae0b"
	I0819 04:37:26.534108   17996 logs.go:123] Gathering logs for kubelet ...
	I0819 04:37:26.534119   17996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 04:37:26.569499   17996 logs.go:123] Gathering logs for dmesg ...
	I0819 04:37:26.569511   17996 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 04:37:26.575366   17996 logs.go:123] Gathering logs for describe nodes ...
	I0819 04:37:26.575375   17996 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 04:37:26.610943   17996 logs.go:123] Gathering logs for coredns [b3828a6a9ea6] ...
	I0819 04:37:26.610954   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3828a6a9ea6"
	I0819 04:37:26.622818   17996 logs.go:123] Gathering logs for kube-controller-manager [9aa4015a0f4e] ...
	I0819 04:37:26.622832   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9aa4015a0f4e"
	I0819 04:37:26.640658   17996 logs.go:123] Gathering logs for storage-provisioner [24ed6391a78f] ...
	I0819 04:37:26.640669   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 24ed6391a78f"
	I0819 04:37:26.653406   17996 logs.go:123] Gathering logs for kube-apiserver [a87ae25c67c2] ...
	I0819 04:37:26.653418   17996 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a87ae25c67c2"
	I0819 04:37:26.668008   17996 logs.go:123] Gathering logs for Docker ...
	I0819 04:37:26.668021   17996 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 04:37:26.691906   17996 logs.go:123] Gathering logs for container status ...
	I0819 04:37:26.691921   17996 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 04:37:29.207339   17996 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 04:37:30.670588   18442 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 04:37:30.670612   18442 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 04:37:34.209614   17996 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 04:37:34.213209   17996 out.go:201] 
	W0819 04:37:34.217071   17996 out.go:270] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	W0819 04:37:34.217080   17996 out.go:270] * 
	W0819 04:37:34.217680   17996 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0819 04:37:34.228995   17996 out.go:201] 
	I0819 04:37:35.671081   18442 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 04:37:35.671166   18442 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 04:37:40.671780   18442 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 04:37:40.671808   18442 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W0819 04:37:41.112404   18442 out.go:270] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I0819 04:37:41.120728   18442 out.go:177] * Enabled addons: storage-provisioner
	I0819 04:37:41.126632   18442 addons.go:510] duration metric: took 30.5818435s for enable addons: enabled=[storage-provisioner]
	I0819 04:37:45.672645   18442 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 04:37:45.672713   18442 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
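The repeating api_server.go lines above are minikube's apiserver readiness loop: probe https://10.0.2.15:8443/healthz every few seconds, treat each Client.Timeout as "stopped", and give up when the 6m0s node deadline expires (the GUEST_START failure above). A minimal, hypothetical Go sketch of that kind of probe — standard library only; InsecureSkipVerify stands in for the cluster-CA verification the real code performs:

    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{
            Timeout: 5 * time.Second, // each probe gives up after ~5s, matching the gaps in the log
            Transport: &http.Transport{
                // Assumption for illustration: minikube actually verifies the cluster CA.
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
            },
        }
        deadline := time.Now().Add(6 * time.Minute) // "wait 6m0s for node" in the failure above
        for time.Now().Before(deadline) {
            resp, err := client.Get("https://10.0.2.15:8443/healthz")
            if err != nil {
                fmt.Println("stopped:", err) // e.g. Client.Timeout exceeded while awaiting headers
                time.Sleep(2 * time.Second)
                continue
            }
            resp.Body.Close()
            if resp.StatusCode == http.StatusOK {
                fmt.Println("healthz ok")
                return
            }
        }
        fmt.Println("apiserver healthz never reported healthy: deadline exceeded")
    }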
	
	
	==> Docker <==
	-- Journal begins at Mon 2024-08-19 11:28:42 UTC, ends at Mon 2024-08-19 11:37:50 UTC. --
	Aug 19 11:37:35 running-upgrade-038000 dockerd[3180]: time="2024-08-19T11:37:35.742253987Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/02992fbe61b60844efc3cfb36691494cc76b233da27f55465af0ec4d543fb3c1 pid=18817 runtime=io.containerd.runc.v2
	Aug 19 11:37:35 running-upgrade-038000 dockerd[3180]: time="2024-08-19T11:37:35.742605606Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 19 11:37:35 running-upgrade-038000 dockerd[3180]: time="2024-08-19T11:37:35.742625564Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 19 11:37:35 running-upgrade-038000 dockerd[3180]: time="2024-08-19T11:37:35.742630814Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 19 11:37:35 running-upgrade-038000 dockerd[3180]: time="2024-08-19T11:37:35.742724229Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/85de84956346508c8ad9f7d4b9b62bfdf8061ae1ea71548cf39f022d89cf75a2 pid=18825 runtime=io.containerd.runc.v2
	Aug 19 11:37:35 running-upgrade-038000 cri-dockerd[3020]: time="2024-08-19T11:37:35Z" level=error msg="ContainerStats resp: {0x4000595140 linux}"
	Aug 19 11:37:36 running-upgrade-038000 cri-dockerd[3020]: time="2024-08-19T11:37:36Z" level=error msg="ContainerStats resp: {0x4000852300 linux}"
	Aug 19 11:37:36 running-upgrade-038000 cri-dockerd[3020]: time="2024-08-19T11:37:36Z" level=error msg="ContainerStats resp: {0x40008d4280 linux}"
	Aug 19 11:37:36 running-upgrade-038000 cri-dockerd[3020]: time="2024-08-19T11:37:36Z" level=error msg="ContainerStats resp: {0x40008d4680 linux}"
	Aug 19 11:37:36 running-upgrade-038000 cri-dockerd[3020]: time="2024-08-19T11:37:36Z" level=error msg="ContainerStats resp: {0x4000853140 linux}"
	Aug 19 11:37:36 running-upgrade-038000 cri-dockerd[3020]: time="2024-08-19T11:37:36Z" level=error msg="ContainerStats resp: {0x4000853500 linux}"
	Aug 19 11:37:36 running-upgrade-038000 cri-dockerd[3020]: time="2024-08-19T11:37:36Z" level=error msg="ContainerStats resp: {0x4000853a00 linux}"
	Aug 19 11:37:36 running-upgrade-038000 cri-dockerd[3020]: time="2024-08-19T11:37:36Z" level=error msg="ContainerStats resp: {0x4000852340 linux}"
	Aug 19 11:37:40 running-upgrade-038000 cri-dockerd[3020]: time="2024-08-19T11:37:40Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	Aug 19 11:37:45 running-upgrade-038000 cri-dockerd[3020]: time="2024-08-19T11:37:45Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	Aug 19 11:37:46 running-upgrade-038000 cri-dockerd[3020]: time="2024-08-19T11:37:46Z" level=error msg="ContainerStats resp: {0x4000864600 linux}"
	Aug 19 11:37:46 running-upgrade-038000 cri-dockerd[3020]: time="2024-08-19T11:37:46Z" level=error msg="ContainerStats resp: {0x4000865ec0 linux}"
	Aug 19 11:37:47 running-upgrade-038000 cri-dockerd[3020]: time="2024-08-19T11:37:47Z" level=error msg="ContainerStats resp: {0x400099c280 linux}"
	Aug 19 11:37:48 running-upgrade-038000 cri-dockerd[3020]: time="2024-08-19T11:37:48Z" level=error msg="ContainerStats resp: {0x400099de40 linux}"
	Aug 19 11:37:48 running-upgrade-038000 cri-dockerd[3020]: time="2024-08-19T11:37:48Z" level=error msg="ContainerStats resp: {0x4000524480 linux}"
	Aug 19 11:37:48 running-upgrade-038000 cri-dockerd[3020]: time="2024-08-19T11:37:48Z" level=error msg="ContainerStats resp: {0x40008ffb80 linux}"
	Aug 19 11:37:48 running-upgrade-038000 cri-dockerd[3020]: time="2024-08-19T11:37:48Z" level=error msg="ContainerStats resp: {0x40008ffe80 linux}"
	Aug 19 11:37:48 running-upgrade-038000 cri-dockerd[3020]: time="2024-08-19T11:37:48Z" level=error msg="ContainerStats resp: {0x400009c540 linux}"
	Aug 19 11:37:48 running-upgrade-038000 cri-dockerd[3020]: time="2024-08-19T11:37:48Z" level=error msg="ContainerStats resp: {0x400009d4c0 linux}"
	Aug 19 11:37:48 running-upgrade-038000 cri-dockerd[3020]: time="2024-08-19T11:37:48Z" level=error msg="ContainerStats resp: {0x400080a280 linux}"
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID
	02992fbe61b60       edaa71f2aee88       15 seconds ago      Running             coredns                   2                   82e2286f84ec8
	85de849563465       edaa71f2aee88       15 seconds ago      Running             coredns                   2                   033027453b561
	475e9da6b182f       edaa71f2aee88       2 minutes ago       Exited              coredns                   1                   033027453b561
	b3828a6a9ea64       edaa71f2aee88       2 minutes ago       Exited              coredns                   1                   82e2286f84ec8
	24ed6391a78fd       66749159455b3       4 minutes ago       Running             storage-provisioner       0                   faf1e34d77b64
	8a9318577d16e       fcbd620bbac08       4 minutes ago       Running             kube-proxy                0                   60c39a34091da
	9a7de925ae0bd       000c19baf6bba       4 minutes ago       Running             kube-scheduler            0                   1c1d49fc14e5a
	f455d8b0d4895       a9a710bb96df0       4 minutes ago       Running             etcd                      0                   43f12d97f7750
	9aa4015a0f4e6       f61bbe9259d7c       4 minutes ago       Running             kube-controller-manager   0                   8984dd3e7f52f
	a87ae25c67c27       7c5896a75862a       4 minutes ago       Running             kube-apiserver            0                   5a67f3c604c49
	
	
	==> coredns [02992fbe61b6] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 4491641169010806275.6765632641999698422. HINFO: read udp 10.244.0.2:60513->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 4491641169010806275.6765632641999698422. HINFO: read udp 10.244.0.2:47850->10.0.2.3:53: i/o timeout
	
	
	==> coredns [475e9da6b182] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 3396093517049119599.1284496541221094007. HINFO: read udp 10.244.0.3:49484->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 3396093517049119599.1284496541221094007. HINFO: read udp 10.244.0.3:41976->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 3396093517049119599.1284496541221094007. HINFO: read udp 10.244.0.3:47380->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 3396093517049119599.1284496541221094007. HINFO: read udp 10.244.0.3:41354->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 3396093517049119599.1284496541221094007. HINFO: read udp 10.244.0.3:36678->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 3396093517049119599.1284496541221094007. HINFO: read udp 10.244.0.3:39119->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 3396093517049119599.1284496541221094007. HINFO: read udp 10.244.0.3:49666->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 3396093517049119599.1284496541221094007. HINFO: read udp 10.244.0.3:49686->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 3396093517049119599.1284496541221094007. HINFO: read udp 10.244.0.3:59152->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 3396093517049119599.1284496541221094007. HINFO: read udp 10.244.0.3:55450->10.0.2.3:53: i/o timeout
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [85de84956346] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 6489045222005826824.8877439809771007887. HINFO: read udp 10.244.0.3:55605->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 6489045222005826824.8877439809771007887. HINFO: read udp 10.244.0.3:43392->10.0.2.3:53: i/o timeout
	
	
	==> coredns [b3828a6a9ea6] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 8184243988472427563.1733481782089692438. HINFO: read udp 10.244.0.2:49746->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 8184243988472427563.1733481782089692438. HINFO: read udp 10.244.0.2:59011->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 8184243988472427563.1733481782089692438. HINFO: read udp 10.244.0.2:37994->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 8184243988472427563.1733481782089692438. HINFO: read udp 10.244.0.2:42592->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 8184243988472427563.1733481782089692438. HINFO: read udp 10.244.0.2:48907->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 8184243988472427563.1733481782089692438. HINFO: read udp 10.244.0.2:52956->10.0.2.3:53: i/o timeout
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
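All four CoreDNS instances report the same symptom: their startup probe to the upstream resolver 10.0.2.3:53 (QEMU's user-mode DNS) times out, so in-cluster DNS has no working upstream even though CoreDNS itself keeps running. A hypothetical Go sketch reproducing that kind of probe from inside the guest — the queried name is arbitrary here, whereas CoreDNS uses a random HINFO query:

    package main

    import (
        "context"
        "fmt"
        "net"
        "time"
    )

    func main() {
        r := &net.Resolver{
            PreferGo: true,
            // Force every lookup to the upstream the CoreDNS errors point at.
            Dial: func(ctx context.Context, network, _ string) (net.Conn, error) {
                d := net.Dialer{Timeout: 2 * time.Second}
                return d.DialContext(ctx, network, "10.0.2.3:53")
            },
        }
        ctx, cancel := context.WithTimeout(context.Background(), 2*time.Second)
        defer cancel()
        if _, err := r.LookupHost(ctx, "example.com"); err != nil {
            fmt.Println(err) // expect the same "read udp ... i/o timeout" as above
        }
    }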
	
	
	==> describe nodes <==
	Name:               running-upgrade-038000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=running-upgrade-038000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c539cede7c104fd836c3af55c4ca24a6409a3ce6
	                    minikube.k8s.io/name=running-upgrade-038000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_19T04_33_33_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 19 Aug 2024 11:33:30 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  running-upgrade-038000
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 19 Aug 2024 11:37:47 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 19 Aug 2024 11:33:33 +0000   Mon, 19 Aug 2024 11:33:29 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 19 Aug 2024 11:33:33 +0000   Mon, 19 Aug 2024 11:33:29 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 19 Aug 2024 11:33:33 +0000   Mon, 19 Aug 2024 11:33:29 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 19 Aug 2024 11:33:33 +0000   Mon, 19 Aug 2024 11:33:33 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  10.0.2.15
	  Hostname:    running-upgrade-038000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             2148820Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             2148820Ki
	  pods:               110
	System Info:
	  Machine ID:                 b6b378ee7b154ae481eba7db977160bb
	  System UUID:                b6b378ee7b154ae481eba7db977160bb
	  Boot ID:                    392db10e-6cad-4ffa-8b1c-d4fac4e57398
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://20.10.16
	  Kubelet Version:            v1.24.1
	  Kube-Proxy Version:         v1.24.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6d4b75cb6d-f8v9m                          100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     4m4s
	  kube-system                 coredns-6d4b75cb6d-lt89c                          100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     4m4s
	  kube-system                 etcd-running-upgrade-038000                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m18s
	  kube-system                 kube-apiserver-running-upgrade-038000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m17s
	  kube-system                 kube-controller-manager-running-upgrade-038000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m17s
	  kube-system                 kube-proxy-kpd9x                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m4s
	  kube-system                 kube-scheduler-running-upgrade-038000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m17s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m17s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             240Mi (11%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	  hugepages-32Mi     0 (0%)       0 (0%)
	  hugepages-64Ki     0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 4m3s   kube-proxy       
	  Normal  NodeReady                4m17s  kubelet          Node running-upgrade-038000 status is now: NodeReady
	  Normal  NodeAllocatableEnforced  4m17s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  4m17s  kubelet          Node running-upgrade-038000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m17s  kubelet          Node running-upgrade-038000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m17s  kubelet          Node running-upgrade-038000 status is now: NodeHasSufficientPID
	  Normal  Starting                 4m17s  kubelet          Starting kubelet.
	  Normal  RegisteredNode           4m4s   node-controller  Node running-upgrade-038000 event: Registered Node running-upgrade-038000 in Controller
	
	
	==> dmesg <==
	[  +2.022618] systemd-fstab-generator[877]: Ignoring "noauto" for root device
	[  +0.080419] systemd-fstab-generator[888]: Ignoring "noauto" for root device
	[  +0.064870] systemd-fstab-generator[899]: Ignoring "noauto" for root device
	[  +1.141227] kauditd_printk_skb: 53 callbacks suppressed
	[  +0.094098] systemd-fstab-generator[1050]: Ignoring "noauto" for root device
	[  +0.074587] systemd-fstab-generator[1061]: Ignoring "noauto" for root device
	[  +2.388920] systemd-fstab-generator[1289]: Ignoring "noauto" for root device
	[Aug19 11:29] systemd-fstab-generator[1927]: Ignoring "noauto" for root device
	[  +2.519060] systemd-fstab-generator[2212]: Ignoring "noauto" for root device
	[  +0.153942] systemd-fstab-generator[2246]: Ignoring "noauto" for root device
	[  +0.080397] systemd-fstab-generator[2257]: Ignoring "noauto" for root device
	[  +0.095967] systemd-fstab-generator[2270]: Ignoring "noauto" for root device
	[  +2.678191] kauditd_printk_skb: 47 callbacks suppressed
	[  +0.174401] systemd-fstab-generator[2977]: Ignoring "noauto" for root device
	[  +0.089336] systemd-fstab-generator[2988]: Ignoring "noauto" for root device
	[  +0.065559] systemd-fstab-generator[2999]: Ignoring "noauto" for root device
	[  +0.080434] systemd-fstab-generator[3013]: Ignoring "noauto" for root device
	[  +2.292271] systemd-fstab-generator[3166]: Ignoring "noauto" for root device
	[  +3.791546] systemd-fstab-generator[3552]: Ignoring "noauto" for root device
	[  +1.296699] systemd-fstab-generator[3886]: Ignoring "noauto" for root device
	[ +19.555689] kauditd_printk_skb: 68 callbacks suppressed
	[Aug19 11:30] kauditd_printk_skb: 21 callbacks suppressed
	[Aug19 11:33] systemd-fstab-generator[11894]: Ignoring "noauto" for root device
	[  +5.622188] systemd-fstab-generator[12482]: Ignoring "noauto" for root device
	[  +0.466126] systemd-fstab-generator[12613]: Ignoring "noauto" for root device
	
	
	==> etcd [f455d8b0d489] <==
	{"level":"info","ts":"2024-08-19T11:33:28.749Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 switched to configuration voters=(17326651331455243045)"}
	{"level":"info","ts":"2024-08-19T11:33:28.749Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"ef296cf39f5d9d66","local-member-id":"f074a195de705325","added-peer-id":"f074a195de705325","added-peer-peer-urls":["https://10.0.2.15:2380"]}
	{"level":"info","ts":"2024-08-19T11:33:28.749Z","caller":"embed/etcd.go:688","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-08-19T11:33:28.749Z","caller":"embed/etcd.go:581","msg":"serving peer traffic","address":"10.0.2.15:2380"}
	{"level":"info","ts":"2024-08-19T11:33:28.749Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"10.0.2.15:2380"}
	{"level":"info","ts":"2024-08-19T11:33:28.749Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"f074a195de705325","initial-advertise-peer-urls":["https://10.0.2.15:2380"],"listen-peer-urls":["https://10.0.2.15:2380"],"advertise-client-urls":["https://10.0.2.15:2379"],"listen-client-urls":["https://10.0.2.15:2379","https://127.0.0.1:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-08-19T11:33:28.749Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-08-19T11:33:29.425Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 is starting a new election at term 1"}
	{"level":"info","ts":"2024-08-19T11:33:29.425Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-08-19T11:33:29.425Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 received MsgPreVoteResp from f074a195de705325 at term 1"}
	{"level":"info","ts":"2024-08-19T11:33:29.425Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 became candidate at term 2"}
	{"level":"info","ts":"2024-08-19T11:33:29.425Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 received MsgVoteResp from f074a195de705325 at term 2"}
	{"level":"info","ts":"2024-08-19T11:33:29.425Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 became leader at term 2"}
	{"level":"info","ts":"2024-08-19T11:33:29.425Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: f074a195de705325 elected leader f074a195de705325 at term 2"}
	{"level":"info","ts":"2024-08-19T11:33:29.426Z","caller":"etcdserver/server.go:2507","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-19T11:33:29.426Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"ef296cf39f5d9d66","local-member-id":"f074a195de705325","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-19T11:33:29.426Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-19T11:33:29.426Z","caller":"etcdserver/server.go:2531","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-19T11:33:29.426Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"f074a195de705325","local-member-attributes":"{Name:running-upgrade-038000 ClientURLs:[https://10.0.2.15:2379]}","request-path":"/0/members/f074a195de705325/attributes","cluster-id":"ef296cf39f5d9d66","publish-timeout":"7s"}
	{"level":"info","ts":"2024-08-19T11:33:29.426Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-19T11:33:29.426Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-08-19T11:33:29.426Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-08-19T11:33:29.427Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-19T11:33:29.427Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-08-19T11:33:29.427Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"10.0.2.15:2379"}
	
	
	==> kernel <==
	 11:37:50 up 9 min,  0 users,  load average: 0.31, 0.33, 0.19
	Linux running-upgrade-038000 5.10.57 #1 SMP PREEMPT Thu Jun 16 21:01:29 UTC 2022 aarch64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	
	==> kube-apiserver [a87ae25c67c2] <==
	I0819 11:33:30.633561       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0819 11:33:30.633710       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0819 11:33:30.651265       1 shared_informer.go:262] Caches are synced for cluster_authentication_trust_controller
	I0819 11:33:30.651309       1 apf_controller.go:322] Running API Priority and Fairness config worker
	I0819 11:33:30.651451       1 cache.go:39] Caches are synced for autoregister controller
	I0819 11:33:30.651489       1 shared_informer.go:262] Caches are synced for node_authorizer
	I0819 11:33:30.668315       1 shared_informer.go:262] Caches are synced for crd-autoregister
	I0819 11:33:31.380790       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0819 11:33:31.541798       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0819 11:33:31.545470       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0819 11:33:31.545504       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0819 11:33:31.690623       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0819 11:33:31.705223       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0819 11:33:31.791944       1 alloc.go:327] "allocated clusterIPs" service="default/kubernetes" clusterIPs=map[IPv4:10.96.0.1]
	W0819 11:33:31.793752       1 lease.go:234] Resetting endpoints for master service "kubernetes" to [10.0.2.15]
	I0819 11:33:31.794158       1 controller.go:611] quota admission added evaluator for: endpoints
	I0819 11:33:31.795742       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0819 11:33:32.677743       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0819 11:33:33.191239       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0819 11:33:33.199459       1 alloc.go:327] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs=map[IPv4:10.96.0.10]
	I0819 11:33:33.228740       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0819 11:33:33.248615       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I0819 11:33:46.131055       1 controller.go:611] quota admission added evaluator for: replicasets.apps
	I0819 11:33:46.380092       1 controller.go:611] quota admission added evaluator for: controllerrevisions.apps
	I0819 11:33:46.893350       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	
	
	==> kube-controller-manager [9aa4015a0f4e] <==
	I0819 11:33:46.275951       1 shared_informer.go:262] Caches are synced for endpoint_slice_mirroring
	I0819 11:33:46.315793       1 shared_informer.go:262] Caches are synced for resource quota
	W0819 11:33:46.364384       1 actual_state_of_world.go:541] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="running-upgrade-038000" does not exist
	I0819 11:33:46.375834       1 shared_informer.go:262] Caches are synced for daemon sets
	I0819 11:33:46.376851       1 shared_informer.go:262] Caches are synced for endpoint_slice
	I0819 11:33:46.383111       1 event.go:294] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-kpd9x"
	I0819 11:33:46.388245       1 shared_informer.go:262] Caches are synced for taint
	I0819 11:33:46.388577       1 taint_manager.go:187] "Starting NoExecuteTaintManager"
	I0819 11:33:46.388617       1 node_lifecycle_controller.go:1399] Initializing eviction metric for zone: 
	W0819 11:33:46.388802       1 node_lifecycle_controller.go:1014] Missing timestamp for Node running-upgrade-038000. Assuming now as a timestamp.
	I0819 11:33:46.388876       1 node_lifecycle_controller.go:1215] Controller detected that zone  is now in state Normal.
	I0819 11:33:46.388629       1 event.go:294] "Event occurred" object="running-upgrade-038000" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node running-upgrade-038000 event: Registered Node running-upgrade-038000 in Controller"
	I0819 11:33:46.390743       1 shared_informer.go:262] Caches are synced for persistent volume
	I0819 11:33:46.395194       1 shared_informer.go:262] Caches are synced for GC
	I0819 11:33:46.395389       1 shared_informer.go:262] Caches are synced for node
	I0819 11:33:46.395457       1 range_allocator.go:173] Starting range CIDR allocator
	I0819 11:33:46.395553       1 shared_informer.go:255] Waiting for caches to sync for cidrallocator
	I0819 11:33:46.395565       1 shared_informer.go:262] Caches are synced for cidrallocator
	I0819 11:33:46.396206       1 shared_informer.go:262] Caches are synced for attach detach
	I0819 11:33:46.402090       1 range_allocator.go:374] Set node running-upgrade-038000 PodCIDR to [10.244.0.0/24]
	I0819 11:33:46.404559       1 shared_informer.go:262] Caches are synced for resource quota
	I0819 11:33:46.425854       1 shared_informer.go:262] Caches are synced for TTL
	I0819 11:33:46.827175       1 shared_informer.go:262] Caches are synced for garbage collector
	I0819 11:33:46.897992       1 shared_informer.go:262] Caches are synced for garbage collector
	I0819 11:33:46.898007       1 garbagecollector.go:158] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	
	
	==> kube-proxy [8a9318577d16] <==
	I0819 11:33:46.871977       1 node.go:163] Successfully retrieved node IP: 10.0.2.15
	I0819 11:33:46.871999       1 server_others.go:138] "Detected node IP" address="10.0.2.15"
	I0819 11:33:46.872008       1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0819 11:33:46.889468       1 server_others.go:199] "kube-proxy running in single-stack mode, this ipFamily is not supported" ipFamily=IPv6
	I0819 11:33:46.889478       1 server_others.go:206] "Using iptables Proxier"
	I0819 11:33:46.889502       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0819 11:33:46.889611       1 server.go:661] "Version info" version="v1.24.1"
	I0819 11:33:46.889615       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0819 11:33:46.891195       1 config.go:317] "Starting service config controller"
	I0819 11:33:46.891206       1 shared_informer.go:255] Waiting for caches to sync for service config
	I0819 11:33:46.891217       1 config.go:226] "Starting endpoint slice config controller"
	I0819 11:33:46.891219       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
	I0819 11:33:46.891771       1 config.go:444] "Starting node config controller"
	I0819 11:33:46.891799       1 shared_informer.go:255] Waiting for caches to sync for node config
	I0819 11:33:46.992315       1 shared_informer.go:262] Caches are synced for node config
	I0819 11:33:46.992320       1 shared_informer.go:262] Caches are synced for endpoint slice config
	I0819 11:33:46.992329       1 shared_informer.go:262] Caches are synced for service config
	
	
	==> kube-scheduler [9a7de925ae0b] <==
	W0819 11:33:30.586153       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0819 11:33:30.586182       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0819 11:33:30.586140       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0819 11:33:30.586187       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0819 11:33:30.586217       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0819 11:33:30.586221       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0819 11:33:30.586232       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0819 11:33:30.586235       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0819 11:33:30.586247       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0819 11:33:30.586250       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0819 11:33:30.586260       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0819 11:33:30.586263       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0819 11:33:30.586273       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0819 11:33:30.586276       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0819 11:33:30.586104       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0819 11:33:30.586291       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0819 11:33:30.586338       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0819 11:33:30.586351       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0819 11:33:30.587118       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0819 11:33:30.587136       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0819 11:33:31.446161       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0819 11:33:31.446251       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0819 11:33:31.580622       1 reflector.go:324] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0819 11:33:31.580807       1 reflector.go:138] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0819 11:33:34.083288       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	-- Journal begins at Mon 2024-08-19 11:28:42 UTC, ends at Mon 2024-08-19 11:37:50 UTC. --
	Aug 19 11:33:34 running-upgrade-038000 kubelet[12488]: I0819 11:33:34.235331   12488 apiserver.go:52] "Watching apiserver"
	Aug 19 11:33:34 running-upgrade-038000 kubelet[12488]: I0819 11:33:34.671738   12488 reconciler.go:157] "Reconciler: start to sync state"
	Aug 19 11:33:34 running-upgrade-038000 kubelet[12488]: E0819 11:33:34.824273   12488 kubelet.go:1690] "Failed creating a mirror pod for" err="pods \"kube-apiserver-running-upgrade-038000\" already exists" pod="kube-system/kube-apiserver-running-upgrade-038000"
	Aug 19 11:33:35 running-upgrade-038000 kubelet[12488]: E0819 11:33:35.023321   12488 kubelet.go:1690] "Failed creating a mirror pod for" err="pods \"kube-scheduler-running-upgrade-038000\" already exists" pod="kube-system/kube-scheduler-running-upgrade-038000"
	Aug 19 11:33:35 running-upgrade-038000 kubelet[12488]: E0819 11:33:35.223825   12488 kubelet.go:1690] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-running-upgrade-038000\" already exists" pod="kube-system/kube-controller-manager-running-upgrade-038000"
	Aug 19 11:33:46 running-upgrade-038000 kubelet[12488]: I0819 11:33:46.384088   12488 topology_manager.go:200] "Topology Admit Handler"
	Aug 19 11:33:46 running-upgrade-038000 kubelet[12488]: I0819 11:33:46.394408   12488 topology_manager.go:200] "Topology Admit Handler"
	Aug 19 11:33:46 running-upgrade-038000 kubelet[12488]: I0819 11:33:46.488075   12488 kuberuntime_manager.go:1095] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Aug 19 11:33:46 running-upgrade-038000 kubelet[12488]: I0819 11:33:46.488122   12488 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/9cb1e627-04c7-4915-a180-76006f01b310-kube-proxy\") pod \"kube-proxy-kpd9x\" (UID: \"9cb1e627-04c7-4915-a180-76006f01b310\") " pod="kube-system/kube-proxy-kpd9x"
	Aug 19 11:33:46 running-upgrade-038000 kubelet[12488]: I0819 11:33:46.488259   12488 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9cb1e627-04c7-4915-a180-76006f01b310-xtables-lock\") pod \"kube-proxy-kpd9x\" (UID: \"9cb1e627-04c7-4915-a180-76006f01b310\") " pod="kube-system/kube-proxy-kpd9x"
	Aug 19 11:33:46 running-upgrade-038000 kubelet[12488]: I0819 11:33:46.488275   12488 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/e22b8dde-1605-4d25-b915-81611301f864-tmp\") pod \"storage-provisioner\" (UID: \"e22b8dde-1605-4d25-b915-81611301f864\") " pod="kube-system/storage-provisioner"
	Aug 19 11:33:46 running-upgrade-038000 kubelet[12488]: I0819 11:33:46.488287   12488 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cvrs9\" (UniqueName: \"kubernetes.io/projected/e22b8dde-1605-4d25-b915-81611301f864-kube-api-access-cvrs9\") pod \"storage-provisioner\" (UID: \"e22b8dde-1605-4d25-b915-81611301f864\") " pod="kube-system/storage-provisioner"
	Aug 19 11:33:46 running-upgrade-038000 kubelet[12488]: I0819 11:33:46.488297   12488 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9cb1e627-04c7-4915-a180-76006f01b310-lib-modules\") pod \"kube-proxy-kpd9x\" (UID: \"9cb1e627-04c7-4915-a180-76006f01b310\") " pod="kube-system/kube-proxy-kpd9x"
	Aug 19 11:33:46 running-upgrade-038000 kubelet[12488]: I0819 11:33:46.488308   12488 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-54qls\" (UniqueName: \"kubernetes.io/projected/9cb1e627-04c7-4915-a180-76006f01b310-kube-api-access-54qls\") pod \"kube-proxy-kpd9x\" (UID: \"9cb1e627-04c7-4915-a180-76006f01b310\") " pod="kube-system/kube-proxy-kpd9x"
	Aug 19 11:33:46 running-upgrade-038000 kubelet[12488]: I0819 11:33:46.488569   12488 kubelet_network.go:60] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Aug 19 11:33:47 running-upgrade-038000 kubelet[12488]: I0819 11:33:47.599864   12488 topology_manager.go:200] "Topology Admit Handler"
	Aug 19 11:33:47 running-upgrade-038000 kubelet[12488]: I0819 11:33:47.602581   12488 topology_manager.go:200] "Topology Admit Handler"
	Aug 19 11:33:47 running-upgrade-038000 kubelet[12488]: I0819 11:33:47.700540   12488 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2e597544-1683-4cbe-8893-0b6050d73e93-config-volume\") pod \"coredns-6d4b75cb6d-f8v9m\" (UID: \"2e597544-1683-4cbe-8893-0b6050d73e93\") " pod="kube-system/coredns-6d4b75cb6d-f8v9m"
	Aug 19 11:33:47 running-upgrade-038000 kubelet[12488]: I0819 11:33:47.700567   12488 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pf2qk\" (UniqueName: \"kubernetes.io/projected/2e597544-1683-4cbe-8893-0b6050d73e93-kube-api-access-pf2qk\") pod \"coredns-6d4b75cb6d-f8v9m\" (UID: \"2e597544-1683-4cbe-8893-0b6050d73e93\") " pod="kube-system/coredns-6d4b75cb6d-f8v9m"
	Aug 19 11:33:47 running-upgrade-038000 kubelet[12488]: I0819 11:33:47.700579   12488 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p46wz\" (UniqueName: \"kubernetes.io/projected/688be820-2985-4ff0-bdd1-d3cb4edf699d-kube-api-access-p46wz\") pod \"coredns-6d4b75cb6d-lt89c\" (UID: \"688be820-2985-4ff0-bdd1-d3cb4edf699d\") " pod="kube-system/coredns-6d4b75cb6d-lt89c"
	Aug 19 11:33:47 running-upgrade-038000 kubelet[12488]: I0819 11:33:47.700590   12488 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/688be820-2985-4ff0-bdd1-d3cb4edf699d-config-volume\") pod \"coredns-6d4b75cb6d-lt89c\" (UID: \"688be820-2985-4ff0-bdd1-d3cb4edf699d\") " pod="kube-system/coredns-6d4b75cb6d-lt89c"
	Aug 19 11:33:48 running-upgrade-038000 kubelet[12488]: I0819 11:33:48.423947   12488 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="033027453b56174757eb24b0bb3f58d58da12011176953207005de481f53356c"
	Aug 19 11:33:48 running-upgrade-038000 kubelet[12488]: I0819 11:33:48.426174   12488 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="82e2286f84ec8126443eb7ec2b815bca19c3d0013b01f86893fdca4271b138a9"
	Aug 19 11:37:35 running-upgrade-038000 kubelet[12488]: I0819 11:37:35.727773   12488 scope.go:110] "RemoveContainer" containerID="a7da7cc69cccdde5249446e016d54b37be890c17ba9af8c6c88b905775597978"
	Aug 19 11:37:35 running-upgrade-038000 kubelet[12488]: I0819 11:37:35.785898   12488 scope.go:110] "RemoveContainer" containerID="b382a541c256a4316b796e930b4dd53b4f4905996485eb774e39636a37146b64"
	
	
	==> storage-provisioner [24ed6391a78f] <==
	I0819 11:33:46.896117       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0819 11:33:46.901110       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0819 11:33:46.901198       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0819 11:33:46.905421       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0819 11:33:46.905770       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"6c1dc4a2-22e8-40c5-b79f-6c657a238299", APIVersion:"v1", ResourceVersion:"360", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' running-upgrade-038000_aa333b73-8f87-4a03-8156-863a7a40b1d0 became leader
	I0819 11:33:46.905794       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_running-upgrade-038000_aa333b73-8f87-4a03-8156-863a7a40b1d0!
	I0819 11:33:47.006706       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_running-upgrade-038000_aa333b73-8f87-4a03-8156-863a7a40b1d0!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.APIServer}} -p running-upgrade-038000 -n running-upgrade-038000
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.APIServer}} -p running-upgrade-038000 -n running-upgrade-038000: exit status 2 (15.641288083s)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "running-upgrade-038000" apiserver is not running, skipping kubectl commands (state="Stopped")
helpers_test.go:175: Cleaning up "running-upgrade-038000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p running-upgrade-038000
--- FAIL: TestRunningBinaryUpgrade (590.16s)
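
The post-mortem above probes cluster state with out/minikube-darwin-arm64 status --format={{.APIServer}} and deliberately tolerates the non-zero exit ("status error: exit status 2 (may be ok)"). The Go sketch below illustrates that pattern; it is an illustration only, not the actual helpers_test.go code, and it assumes the binary path and profile name shown in the log above.

package main

import (
	"errors"
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Same probe the harness runs; binary path and profile name come from the log.
	cmd := exec.Command("out/minikube-darwin-arm64",
		"status", "--format={{.APIServer}}", "-p", "running-upgrade-038000")
	out, err := cmd.CombinedOutput()
	var exitErr *exec.ExitError
	switch {
	case err == nil:
		fmt.Println("apiserver:", strings.TrimSpace(string(out)))
	case errors.As(err, &exitErr):
		// minikube reports degraded states through non-zero exit codes, which
		// is why the harness logs "exit status 2 (may be ok)" and continues.
		fmt.Printf("apiserver: %s (exit status %d, may be ok)\n",
			strings.TrimSpace(string(out)), exitErr.ExitCode())
	default:
		fmt.Println("could not invoke minikube:", err)
	}
}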

TestKubernetesUpgrade (18.62s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-darwin-arm64 start -p kubernetes-upgrade-241000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:222: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubernetes-upgrade-241000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (9.908435583s)

-- stdout --
	* [kubernetes-upgrade-241000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19479
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19479-15750/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19479-15750/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "kubernetes-upgrade-241000" primary control-plane node in "kubernetes-upgrade-241000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kubernetes-upgrade-241000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0819 04:31:16.540994   18354 out.go:345] Setting OutFile to fd 1 ...
	I0819 04:31:16.541119   18354 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 04:31:16.541123   18354 out.go:358] Setting ErrFile to fd 2...
	I0819 04:31:16.541125   18354 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 04:31:16.541250   18354 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19479-15750/.minikube/bin
	I0819 04:31:16.542329   18354 out.go:352] Setting JSON to false
	I0819 04:31:16.558407   18354 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":9044,"bootTime":1724058032,"procs":468,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0819 04:31:16.558471   18354 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0819 04:31:16.564333   18354 out.go:177] * [kubernetes-upgrade-241000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0819 04:31:16.571356   18354 out.go:177]   - MINIKUBE_LOCATION=19479
	I0819 04:31:16.571450   18354 notify.go:220] Checking for updates...
	I0819 04:31:16.577251   18354 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19479-15750/kubeconfig
	I0819 04:31:16.580374   18354 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0819 04:31:16.583212   18354 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0819 04:31:16.586218   18354 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19479-15750/.minikube
	I0819 04:31:16.589279   18354 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0819 04:31:16.592556   18354 config.go:182] Loaded profile config "multinode-746000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0819 04:31:16.592623   18354 config.go:182] Loaded profile config "running-upgrade-038000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0819 04:31:16.592675   18354 driver.go:392] Setting default libvirt URI to qemu:///system
	I0819 04:31:16.597254   18354 out.go:177] * Using the qemu2 driver based on user configuration
	I0819 04:31:16.603256   18354 start.go:297] selected driver: qemu2
	I0819 04:31:16.603263   18354 start.go:901] validating driver "qemu2" against <nil>
	I0819 04:31:16.603269   18354 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0819 04:31:16.605306   18354 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0819 04:31:16.608221   18354 out.go:177] * Automatically selected the socket_vmnet network
	I0819 04:31:16.611387   18354 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0819 04:31:16.611409   18354 cni.go:84] Creating CNI manager for ""
	I0819 04:31:16.611419   18354 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0819 04:31:16.611460   18354 start.go:340] cluster config:
	{Name:kubernetes-upgrade-241000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-241000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 04:31:16.614779   18354 iso.go:125] acquiring lock: {Name:mk82d926f4bb778c7e93ba3bc4244459b219e238 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 04:31:16.622304   18354 out.go:177] * Starting "kubernetes-upgrade-241000" primary control-plane node in "kubernetes-upgrade-241000" cluster
	I0819 04:31:16.626300   18354 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0819 04:31:16.626314   18354 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19479-15750/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0819 04:31:16.626323   18354 cache.go:56] Caching tarball of preloaded images
	I0819 04:31:16.626386   18354 preload.go:172] Found /Users/jenkins/minikube-integration/19479-15750/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0819 04:31:16.626391   18354 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0819 04:31:16.626450   18354 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19479-15750/.minikube/profiles/kubernetes-upgrade-241000/config.json ...
	I0819 04:31:16.626460   18354 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19479-15750/.minikube/profiles/kubernetes-upgrade-241000/config.json: {Name:mk57941d85fdccafde72a23b9c20b2534e7f8673 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 04:31:16.626766   18354 start.go:360] acquireMachinesLock for kubernetes-upgrade-241000: {Name:mkd71805c0324a991576936a19be749d2702d472 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 04:31:16.626798   18354 start.go:364] duration metric: took 26.5µs to acquireMachinesLock for "kubernetes-upgrade-241000"
	I0819 04:31:16.626811   18354 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-241000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-241000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0819 04:31:16.626833   18354 start.go:125] createHost starting for "" (driver="qemu2")
	I0819 04:31:16.635282   18354 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0819 04:31:16.660595   18354 start.go:159] libmachine.API.Create for "kubernetes-upgrade-241000" (driver="qemu2")
	I0819 04:31:16.660626   18354 client.go:168] LocalClient.Create starting
	I0819 04:31:16.660718   18354 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19479-15750/.minikube/certs/ca.pem
	I0819 04:31:16.660747   18354 main.go:141] libmachine: Decoding PEM data...
	I0819 04:31:16.660770   18354 main.go:141] libmachine: Parsing certificate...
	I0819 04:31:16.660806   18354 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19479-15750/.minikube/certs/cert.pem
	I0819 04:31:16.660831   18354 main.go:141] libmachine: Decoding PEM data...
	I0819 04:31:16.660839   18354 main.go:141] libmachine: Parsing certificate...
	I0819 04:31:16.661204   18354 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19479-15750/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19479-15750/.minikube/cache/iso/arm64/minikube-v1.33.1-1723740674-19452-arm64.iso...
	I0819 04:31:16.812153   18354 main.go:141] libmachine: Creating SSH key...
	I0819 04:31:17.018955   18354 main.go:141] libmachine: Creating Disk image...
	I0819 04:31:17.018965   18354 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0819 04:31:17.019221   18354 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/kubernetes-upgrade-241000/disk.qcow2.raw /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/kubernetes-upgrade-241000/disk.qcow2
	I0819 04:31:17.035196   18354 main.go:141] libmachine: STDOUT: 
	I0819 04:31:17.035218   18354 main.go:141] libmachine: STDERR: 
	I0819 04:31:17.035273   18354 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/kubernetes-upgrade-241000/disk.qcow2 +20000M
	I0819 04:31:17.043364   18354 main.go:141] libmachine: STDOUT: Image resized.
	
	I0819 04:31:17.043380   18354 main.go:141] libmachine: STDERR: 
	I0819 04:31:17.043396   18354 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/kubernetes-upgrade-241000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/kubernetes-upgrade-241000/disk.qcow2
	I0819 04:31:17.043408   18354 main.go:141] libmachine: Starting QEMU VM...
	I0819 04:31:17.043420   18354 qemu.go:418] Using hvf for hardware acceleration
	I0819 04:31:17.043453   18354 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/kubernetes-upgrade-241000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19479-15750/.minikube/machines/kubernetes-upgrade-241000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/kubernetes-upgrade-241000/qemu.pid -device virtio-net-pci,netdev=net0,mac=5e:12:70:be:96:1f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/kubernetes-upgrade-241000/disk.qcow2
	I0819 04:31:17.044993   18354 main.go:141] libmachine: STDOUT: 
	I0819 04:31:17.045008   18354 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0819 04:31:17.045024   18354 client.go:171] duration metric: took 384.401583ms to LocalClient.Create
	I0819 04:31:19.047062   18354 start.go:128] duration metric: took 2.42027825s to createHost
	I0819 04:31:19.047085   18354 start.go:83] releasing machines lock for "kubernetes-upgrade-241000", held for 2.420337084s
	W0819 04:31:19.047100   18354 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0819 04:31:19.055577   18354 out.go:177] * Deleting "kubernetes-upgrade-241000" in qemu2 ...
	W0819 04:31:19.065943   18354 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0819 04:31:19.065955   18354 start.go:729] Will try again in 5 seconds ...
	I0819 04:31:24.066529   18354 start.go:360] acquireMachinesLock for kubernetes-upgrade-241000: {Name:mkd71805c0324a991576936a19be749d2702d472 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 04:31:24.066765   18354 start.go:364] duration metric: took 193.958µs to acquireMachinesLock for "kubernetes-upgrade-241000"
	I0819 04:31:24.066795   18354 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-241000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-241000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0819 04:31:24.066853   18354 start.go:125] createHost starting for "" (driver="qemu2")
	I0819 04:31:24.076109   18354 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0819 04:31:24.100544   18354 start.go:159] libmachine.API.Create for "kubernetes-upgrade-241000" (driver="qemu2")
	I0819 04:31:24.100572   18354 client.go:168] LocalClient.Create starting
	I0819 04:31:24.100652   18354 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19479-15750/.minikube/certs/ca.pem
	I0819 04:31:24.100704   18354 main.go:141] libmachine: Decoding PEM data...
	I0819 04:31:24.100716   18354 main.go:141] libmachine: Parsing certificate...
	I0819 04:31:24.100763   18354 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19479-15750/.minikube/certs/cert.pem
	I0819 04:31:24.100792   18354 main.go:141] libmachine: Decoding PEM data...
	I0819 04:31:24.100801   18354 main.go:141] libmachine: Parsing certificate...
	I0819 04:31:24.101149   18354 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19479-15750/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19479-15750/.minikube/cache/iso/arm64/minikube-v1.33.1-1723740674-19452-arm64.iso...
	I0819 04:31:24.252840   18354 main.go:141] libmachine: Creating SSH key...
	I0819 04:31:24.358913   18354 main.go:141] libmachine: Creating Disk image...
	I0819 04:31:24.358921   18354 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0819 04:31:24.359139   18354 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/kubernetes-upgrade-241000/disk.qcow2.raw /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/kubernetes-upgrade-241000/disk.qcow2
	I0819 04:31:24.368329   18354 main.go:141] libmachine: STDOUT: 
	I0819 04:31:24.368350   18354 main.go:141] libmachine: STDERR: 
	I0819 04:31:24.368398   18354 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/kubernetes-upgrade-241000/disk.qcow2 +20000M
	I0819 04:31:24.377005   18354 main.go:141] libmachine: STDOUT: Image resized.
	
	I0819 04:31:24.377023   18354 main.go:141] libmachine: STDERR: 
	I0819 04:31:24.377042   18354 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/kubernetes-upgrade-241000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/kubernetes-upgrade-241000/disk.qcow2
	I0819 04:31:24.377048   18354 main.go:141] libmachine: Starting QEMU VM...
	I0819 04:31:24.377060   18354 qemu.go:418] Using hvf for hardware acceleration
	I0819 04:31:24.377093   18354 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/kubernetes-upgrade-241000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19479-15750/.minikube/machines/kubernetes-upgrade-241000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/kubernetes-upgrade-241000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ee:58:53:da:9f:04 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/kubernetes-upgrade-241000/disk.qcow2
	I0819 04:31:24.378960   18354 main.go:141] libmachine: STDOUT: 
	I0819 04:31:24.378976   18354 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0819 04:31:24.378988   18354 client.go:171] duration metric: took 278.418958ms to LocalClient.Create
	I0819 04:31:26.381132   18354 start.go:128] duration metric: took 2.314286792s to createHost
	I0819 04:31:26.381239   18354 start.go:83] releasing machines lock for "kubernetes-upgrade-241000", held for 2.314513375s
	W0819 04:31:26.381637   18354 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-241000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-241000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0819 04:31:26.390273   18354 out.go:201] 
	W0819 04:31:26.396350   18354 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0819 04:31:26.396404   18354 out.go:270] * 
	* 
	W0819 04:31:26.398214   18354 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0819 04:31:26.409278   18354 out.go:201] 

** /stderr **
version_upgrade_test.go:224: failed to start minikube HEAD with oldest k8s version: out/minikube-darwin-arm64 start -p kubernetes-upgrade-241000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=qemu2 : exit status 80
version_upgrade_test.go:227: (dbg) Run:  out/minikube-darwin-arm64 stop -p kubernetes-upgrade-241000
version_upgrade_test.go:227: (dbg) Done: out/minikube-darwin-arm64 stop -p kubernetes-upgrade-241000: (3.34208525s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-darwin-arm64 -p kubernetes-upgrade-241000 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p kubernetes-upgrade-241000 status --format={{.Host}}: exit status 7 (61.068334ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 start -p kubernetes-upgrade-241000 --memory=2200 --kubernetes-version=v1.31.0 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubernetes-upgrade-241000 --memory=2200 --kubernetes-version=v1.31.0 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (5.168330667s)

-- stdout --
	* [kubernetes-upgrade-241000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19479
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19479-15750/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19479-15750/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "kubernetes-upgrade-241000" primary control-plane node in "kubernetes-upgrade-241000" cluster
	* Restarting existing qemu2 VM for "kubernetes-upgrade-241000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "kubernetes-upgrade-241000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0819 04:31:29.858074   18389 out.go:345] Setting OutFile to fd 1 ...
	I0819 04:31:29.858200   18389 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 04:31:29.858204   18389 out.go:358] Setting ErrFile to fd 2...
	I0819 04:31:29.858206   18389 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 04:31:29.858343   18389 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19479-15750/.minikube/bin
	I0819 04:31:29.859393   18389 out.go:352] Setting JSON to false
	I0819 04:31:29.875526   18389 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":9057,"bootTime":1724058032,"procs":468,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0819 04:31:29.875594   18389 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0819 04:31:29.880259   18389 out.go:177] * [kubernetes-upgrade-241000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0819 04:31:29.887443   18389 out.go:177]   - MINIKUBE_LOCATION=19479
	I0819 04:31:29.887487   18389 notify.go:220] Checking for updates...
	I0819 04:31:29.894370   18389 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19479-15750/kubeconfig
	I0819 04:31:29.897404   18389 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0819 04:31:29.900419   18389 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0819 04:31:29.903405   18389 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19479-15750/.minikube
	I0819 04:31:29.906412   18389 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0819 04:31:29.907942   18389 config.go:182] Loaded profile config "kubernetes-upgrade-241000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I0819 04:31:29.908199   18389 driver.go:392] Setting default libvirt URI to qemu:///system
	I0819 04:31:29.912333   18389 out.go:177] * Using the qemu2 driver based on existing profile
	I0819 04:31:29.919206   18389 start.go:297] selected driver: qemu2
	I0819 04:31:29.919213   18389 start.go:901] validating driver "qemu2" against &{Name:kubernetes-upgrade-241000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-241000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 04:31:29.919274   18389 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0819 04:31:29.921476   18389 cni.go:84] Creating CNI manager for ""
	I0819 04:31:29.921493   18389 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0819 04:31:29.921517   18389 start.go:340] cluster config:
	{Name:kubernetes-upgrade-241000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:kubernetes-upgrade-241000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 04:31:29.924835   18389 iso.go:125] acquiring lock: {Name:mk82d926f4bb778c7e93ba3bc4244459b219e238 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 04:31:29.933357   18389 out.go:177] * Starting "kubernetes-upgrade-241000" primary control-plane node in "kubernetes-upgrade-241000" cluster
	I0819 04:31:29.937319   18389 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0819 04:31:29.937331   18389 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19479-15750/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0819 04:31:29.937340   18389 cache.go:56] Caching tarball of preloaded images
	I0819 04:31:29.937393   18389 preload.go:172] Found /Users/jenkins/minikube-integration/19479-15750/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0819 04:31:29.937398   18389 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0819 04:31:29.937450   18389 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19479-15750/.minikube/profiles/kubernetes-upgrade-241000/config.json ...
	I0819 04:31:29.937911   18389 start.go:360] acquireMachinesLock for kubernetes-upgrade-241000: {Name:mkd71805c0324a991576936a19be749d2702d472 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 04:31:29.937938   18389 start.go:364] duration metric: took 21.166µs to acquireMachinesLock for "kubernetes-upgrade-241000"
	I0819 04:31:29.937947   18389 start.go:96] Skipping create...Using existing machine configuration
	I0819 04:31:29.937953   18389 fix.go:54] fixHost starting: 
	I0819 04:31:29.938062   18389 fix.go:112] recreateIfNeeded on kubernetes-upgrade-241000: state=Stopped err=<nil>
	W0819 04:31:29.938069   18389 fix.go:138] unexpected machine state, will restart: <nil>
	I0819 04:31:29.942436   18389 out.go:177] * Restarting existing qemu2 VM for "kubernetes-upgrade-241000" ...
	I0819 04:31:29.950393   18389 qemu.go:418] Using hvf for hardware acceleration
	I0819 04:31:29.950429   18389 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/kubernetes-upgrade-241000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19479-15750/.minikube/machines/kubernetes-upgrade-241000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/kubernetes-upgrade-241000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ee:58:53:da:9f:04 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/kubernetes-upgrade-241000/disk.qcow2
	I0819 04:31:29.952207   18389 main.go:141] libmachine: STDOUT: 
	I0819 04:31:29.952229   18389 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0819 04:31:29.952254   18389 fix.go:56] duration metric: took 14.302667ms for fixHost
	I0819 04:31:29.952259   18389 start.go:83] releasing machines lock for "kubernetes-upgrade-241000", held for 14.317583ms
	W0819 04:31:29.952264   18389 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0819 04:31:29.952308   18389 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0819 04:31:29.952312   18389 start.go:729] Will try again in 5 seconds ...
	I0819 04:31:34.954254   18389 start.go:360] acquireMachinesLock for kubernetes-upgrade-241000: {Name:mkd71805c0324a991576936a19be749d2702d472 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 04:31:34.954355   18389 start.go:364] duration metric: took 84.208µs to acquireMachinesLock for "kubernetes-upgrade-241000"
	I0819 04:31:34.954372   18389 start.go:96] Skipping create...Using existing machine configuration
	I0819 04:31:34.954384   18389 fix.go:54] fixHost starting: 
	I0819 04:31:34.954530   18389 fix.go:112] recreateIfNeeded on kubernetes-upgrade-241000: state=Stopped err=<nil>
	W0819 04:31:34.954535   18389 fix.go:138] unexpected machine state, will restart: <nil>
	I0819 04:31:34.958720   18389 out.go:177] * Restarting existing qemu2 VM for "kubernetes-upgrade-241000" ...
	I0819 04:31:34.964626   18389 qemu.go:418] Using hvf for hardware acceleration
	I0819 04:31:34.964669   18389 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/kubernetes-upgrade-241000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19479-15750/.minikube/machines/kubernetes-upgrade-241000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/kubernetes-upgrade-241000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ee:58:53:da:9f:04 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/kubernetes-upgrade-241000/disk.qcow2
	I0819 04:31:34.966840   18389 main.go:141] libmachine: STDOUT: 
	I0819 04:31:34.966868   18389 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0819 04:31:34.966894   18389 fix.go:56] duration metric: took 12.517917ms for fixHost
	I0819 04:31:34.966898   18389 start.go:83] releasing machines lock for "kubernetes-upgrade-241000", held for 12.537167ms
	W0819 04:31:34.966954   18389 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-241000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-241000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0819 04:31:34.972661   18389 out.go:201] 
	W0819 04:31:34.976692   18389 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0819 04:31:34.976700   18389 out.go:270] * 
	* 
	W0819 04:31:34.977221   18389 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0819 04:31:34.987687   18389 out.go:201] 

** /stderr **
version_upgrade_test.go:245: failed to upgrade with newest k8s version. args: out/minikube-darwin-arm64 start -p kubernetes-upgrade-241000 --memory=2200 --kubernetes-version=v1.31.0 --alsologtostderr -v=1 --driver=qemu2  : exit status 80
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-241000 version --output=json
version_upgrade_test.go:248: (dbg) Non-zero exit: kubectl --context kubernetes-upgrade-241000 version --output=json: exit status 1 (29.444375ms)

** stderr ** 
	error: context "kubernetes-upgrade-241000" does not exist

** /stderr **
version_upgrade_test.go:250: error running kubectl: exit status 1
panic.go:626: *** TestKubernetesUpgrade FAILED at 2024-08-19 04:31:35.025725 -0700 PDT m=+943.345747376
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p kubernetes-upgrade-241000 -n kubernetes-upgrade-241000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p kubernetes-upgrade-241000 -n kubernetes-upgrade-241000: exit status 7 (30.691041ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "kubernetes-upgrade-241000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "kubernetes-upgrade-241000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p kubernetes-upgrade-241000
--- FAIL: TestKubernetesUpgrade (18.62s)
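
Both provisioning attempts in this test die at the same point: socket_vmnet_client cannot reach /var/run/socket_vmnet, so the qemu2 VM never boots and every later step (stop, status, the v1.31.0 upgrade, kubectl) sees a stopped or missing cluster. A minimal diagnostic sketch, assuming only the SocketVMnetPath taken from the cluster config dumped above, that dials the socket directly:

package main

import (
	"fmt"
	"net"
	"os"
	"time"
)

func main() {
	// SocketVMnetPath from the cluster config in the log above.
	const sock = "/var/run/socket_vmnet"
	conn, err := net.DialTimeout("unix", sock, 2*time.Second)
	if err != nil {
		// "connection refused" reproduces the driver failure in this test:
		// nothing is serving the socket, so socket_vmnet_client cannot start qemu.
		fmt.Fprintf(os.Stderr, "socket_vmnet unreachable: %v\n", err)
		os.Exit(1)
	}
	conn.Close()
	fmt.Println("socket_vmnet is accepting connections")
}

A "connection refused" from this probe would mean no socket_vmnet daemon is serving the socket on the CI host, which matches every qemu2 start failure in this report.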

TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (2.35s)

=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current
* minikube v1.33.1 on darwin (arm64)
- MINIKUBE_LOCATION=19479
- KUBECONFIG=/Users/jenkins/minikube-integration/19479-15750/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current451072811/001
* Using the hyperkit driver based on user configuration

X Exiting due to DRV_UNSUPPORTED_OS: The driver 'hyperkit' is not supported on darwin/arm64

driver_install_or_update_test.go:209: failed to run minikube. got: exit status 56
--- FAIL: TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (2.35s)

TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (1.64s)

=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current
* minikube v1.33.1 on darwin (arm64)
- MINIKUBE_LOCATION=19479
- KUBECONFIG=/Users/jenkins/minikube-integration/19479-15750/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current3264118311/001
* Using the hyperkit driver based on user configuration

X Exiting due to DRV_UNSUPPORTED_OS: The driver 'hyperkit' is not supported on darwin/arm64

driver_install_or_update_test.go:209: failed to run minikube. got: exit status 56
--- FAIL: TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (1.64s)
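
Both TestHyperkitDriverSkipUpgrade subtests fail the same way: the hyperkit driver exists only for darwin/amd64, so on this arm64 agent minikube exits with DRV_UNSUPPORTED_OS (exit status 56) before any upgrade logic runs. A guard of the following shape would turn these into skips on Apple Silicon; this is a sketch, not the actual driver_install_or_update_test.go code:

package upgrade_test

import (
	"runtime"
	"testing"
)

// Sketch of an architecture guard (hypothetical name, not the real minikube
// test): hyperkit ships only for darwin/amd64, so skip rather than fail on
// other agents.
func TestHyperkitDriverSkipUpgradeSketch(t *testing.T) {
	if runtime.GOOS != "darwin" || runtime.GOARCH != "amd64" {
		t.Skipf("hyperkit driver requires darwin/amd64; this agent is %s/%s",
			runtime.GOOS, runtime.GOARCH)
	}
	// ...the v1.2.0/v1.11.0-to-current upgrade assertions would run here.
}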

TestStoppedBinaryUpgrade/Upgrade (575.38s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.26.0.653226279 start -p stopped-upgrade-783000 --memory=2200 --vm-driver=qemu2 
version_upgrade_test.go:183: (dbg) Done: /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.26.0.653226279 start -p stopped-upgrade-783000 --memory=2200 --vm-driver=qemu2 : (40.164619875s)
version_upgrade_test.go:192: (dbg) Run:  /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.26.0.653226279 -p stopped-upgrade-783000 stop
version_upgrade_test.go:192: (dbg) Done: /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.26.0.653226279 -p stopped-upgrade-783000 stop: (12.108052917s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-darwin-arm64 start -p stopped-upgrade-783000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:198: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p stopped-upgrade-783000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (8m42.992454791s)

-- stdout --
	* [stopped-upgrade-783000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19479
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19479-15750/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19479-15750/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.31.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.0
	* Using the qemu2 driver based on existing profile
	* Starting "stopped-upgrade-783000" primary control-plane node in "stopped-upgrade-783000" cluster
	* Restarting existing qemu2 VM for "stopped-upgrade-783000" ...
	* Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: storage-provisioner
	
	

-- /stdout --
** stderr ** 
	I0819 04:32:28.762322   18442 out.go:345] Setting OutFile to fd 1 ...
	I0819 04:32:28.762472   18442 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 04:32:28.762478   18442 out.go:358] Setting ErrFile to fd 2...
	I0819 04:32:28.762481   18442 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 04:32:28.762660   18442 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19479-15750/.minikube/bin
	I0819 04:32:28.763974   18442 out.go:352] Setting JSON to false
	I0819 04:32:28.783445   18442 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":9116,"bootTime":1724058032,"procs":467,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0819 04:32:28.783525   18442 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0819 04:32:28.788930   18442 out.go:177] * [stopped-upgrade-783000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0819 04:32:28.796869   18442 out.go:177]   - MINIKUBE_LOCATION=19479
	I0819 04:32:28.796900   18442 notify.go:220] Checking for updates...
	I0819 04:32:28.804817   18442 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19479-15750/kubeconfig
	I0819 04:32:28.807865   18442 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0819 04:32:28.810950   18442 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0819 04:32:28.813900   18442 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19479-15750/.minikube
	I0819 04:32:28.816906   18442 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0819 04:32:28.820131   18442 config.go:182] Loaded profile config "stopped-upgrade-783000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0819 04:32:28.823830   18442 out.go:177] * Kubernetes 1.31.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.0
	I0819 04:32:28.826876   18442 driver.go:392] Setting default libvirt URI to qemu:///system
	I0819 04:32:28.830792   18442 out.go:177] * Using the qemu2 driver based on existing profile
	I0819 04:32:28.837855   18442 start.go:297] selected driver: qemu2
	I0819 04:32:28.837860   18442 start.go:901] validating driver "qemu2" against &{Name:stopped-upgrade-783000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:53420 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-783000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0819 04:32:28.837911   18442 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0819 04:32:28.840668   18442 cni.go:84] Creating CNI manager for ""
	I0819 04:32:28.840689   18442 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
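
The two cni.go lines above record minikube's CNI decision: with the Docker runtime on Kubernetes v1.24+ (where dockershim is gone and cri-dockerd is used), a bridge CNI is wired up. A minimal sketch of that decision in Go, assuming a deliberately simplified version check (illustrative, not minikube's actual code):

    package main

    import "fmt"

    // chooseCNI mirrors the decision logged above: the docker runtime on
    // Kubernetes >= 1.24 (no dockershim) gets a bridge CNI.
    func chooseCNI(runtime string, major, minor int) string {
        if runtime == "docker" && (major > 1 || (major == 1 && minor >= 24)) {
            return "bridge"
        }
        return "" // fall back to the runtime's own default
    }

    func main() {
        fmt.Println(chooseCNI("docker", 1, 24)) // bridge
    }
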
	I0819 04:32:28.840723   18442 start.go:340] cluster config:
	{Name:stopped-upgrade-783000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:53420 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-783000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0819 04:32:28.840779   18442 iso.go:125] acquiring lock: {Name:mk82d926f4bb778c7e93ba3bc4244459b219e238 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 04:32:28.847862   18442 out.go:177] * Starting "stopped-upgrade-783000" primary control-plane node in "stopped-upgrade-783000" cluster
	I0819 04:32:28.851684   18442 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0819 04:32:28.851699   18442 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19479-15750/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4
	I0819 04:32:28.851705   18442 cache.go:56] Caching tarball of preloaded images
	I0819 04:32:28.851758   18442 preload.go:172] Found /Users/jenkins/minikube-integration/19479-15750/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0819 04:32:28.851772   18442 cache.go:59] Finished verifying existence of preloaded tar for v1.24.1 on docker
	I0819 04:32:28.851826   18442 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19479-15750/.minikube/profiles/stopped-upgrade-783000/config.json ...
	I0819 04:32:28.852255   18442 start.go:360] acquireMachinesLock for stopped-upgrade-783000: {Name:mkd71805c0324a991576936a19be749d2702d472 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 04:32:28.852288   18442 start.go:364] duration metric: took 27.583µs to acquireMachinesLock for "stopped-upgrade-783000"
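
The acquireMachinesLock lines above serialize machine create/start operations for the profile; the 27.583µs duration shows the lock was uncontended here. A minimal sketch of a file-based lock with the same retry/timeout shape (Delay:500ms, Timeout:13m0s from the log); the O_EXCL lock-file scheme and path are illustrative assumptions, not minikube's actual lock implementation:

    package main

    import (
        "fmt"
        "os"
        "time"
    )

    // tryLock takes an exclusive lock by creating the lock file with
    // O_EXCL, retrying every delay until timeout; it returns a release
    // function that removes the file.
    func tryLock(path string, delay, timeout time.Duration) (func(), error) {
        deadline := time.Now().Add(timeout)
        for {
            f, err := os.OpenFile(path, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0o600)
            if err == nil {
                f.Close()
                return func() { os.Remove(path) }, nil
            }
            if time.Now().After(deadline) {
                return nil, fmt.Errorf("timed out acquiring %s", path)
            }
            time.Sleep(delay)
        }
    }

    func main() {
        release, err := tryLock("/tmp/machines.lock", 500*time.Millisecond, 13*time.Minute)
        if err != nil {
            panic(err)
        }
        defer release()
    }
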
	I0819 04:32:28.852298   18442 start.go:96] Skipping create...Using existing machine configuration
	I0819 04:32:28.852303   18442 fix.go:54] fixHost starting: 
	I0819 04:32:28.852414   18442 fix.go:112] recreateIfNeeded on stopped-upgrade-783000: state=Stopped err=<nil>
	W0819 04:32:28.852422   18442 fix.go:138] unexpected machine state, will restart: <nil>
	I0819 04:32:28.856937   18442 out.go:177] * Restarting existing qemu2 VM for "stopped-upgrade-783000" ...
	I0819 04:32:28.864859   18442 qemu.go:418] Using hvf for hardware acceleration
	I0819 04:32:28.864954   18442 main.go:141] libmachine: executing: qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/9.0.2/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/stopped-upgrade-783000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19479-15750/.minikube/machines/stopped-upgrade-783000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/stopped-upgrade-783000/qemu.pid -nic user,model=virtio,hostfwd=tcp::53385-:22,hostfwd=tcp::53386-:2376,hostname=stopped-upgrade-783000 -daemonize /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/stopped-upgrade-783000/disk.qcow2
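
In the qemu-system-aarch64 invocation above, the -nic user,... argument is what makes the VM reachable at all: QEMU's user-mode networking forwards host port 53385 to guest port 22 (SSH) and host port 53386 to guest port 2376 (the Docker TLS port). A sketch of assembling that value; nicArg is a hypothetical helper, not minikube's code, and since Go map iteration order is unspecified, real code would sort the ports:

    package main

    import (
        "fmt"
        "strings"
    )

    // nicArg renders a QEMU user-mode -nic value that forwards each
    // host port to the paired guest port.
    func nicArg(hostname string, fwd map[int]int) string {
        parts := []string{"user", "model=virtio"}
        for host, guest := range fwd {
            parts = append(parts, fmt.Sprintf("hostfwd=tcp::%d-:%d", host, guest))
        }
        parts = append(parts, "hostname="+hostname)
        return strings.Join(parts, ",")
    }

    func main() {
        // 53385 -> 22 (SSH) and 53386 -> 2376 (Docker), as in the log above.
        fmt.Println(nicArg("stopped-upgrade-783000", map[int]int{53385: 22, 53386: 2376}))
    }
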
	I0819 04:32:28.911214   18442 main.go:141] libmachine: STDOUT: 
	I0819 04:32:28.911257   18442 main.go:141] libmachine: STDERR: 
	I0819 04:32:28.911264   18442 main.go:141] libmachine: Waiting for VM to start (ssh -p 53385 docker@127.0.0.1)...
	I0819 04:32:49.091104   18442 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19479-15750/.minikube/profiles/stopped-upgrade-783000/config.json ...
	I0819 04:32:49.091952   18442 machine.go:93] provisionDockerMachine start ...
	I0819 04:32:49.092144   18442 main.go:141] libmachine: Using SSH client type: native
	I0819 04:32:49.092746   18442 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100c045a0] 0x100c06e00 <nil>  [] 0s} localhost 53385 <nil> <nil>}
	I0819 04:32:49.092762   18442 main.go:141] libmachine: About to run SSH command:
	hostname
	I0819 04:32:49.183271   18442 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0819 04:32:49.183307   18442 buildroot.go:166] provisioning hostname "stopped-upgrade-783000"
	I0819 04:32:49.183444   18442 main.go:141] libmachine: Using SSH client type: native
	I0819 04:32:49.183638   18442 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100c045a0] 0x100c06e00 <nil>  [] 0s} localhost 53385 <nil> <nil>}
	I0819 04:32:49.183646   18442 main.go:141] libmachine: About to run SSH command:
	sudo hostname stopped-upgrade-783000 && echo "stopped-upgrade-783000" | sudo tee /etc/hostname
	I0819 04:32:49.262915   18442 main.go:141] libmachine: SSH cmd err, output: <nil>: stopped-upgrade-783000
	
	I0819 04:32:49.262978   18442 main.go:141] libmachine: Using SSH client type: native
	I0819 04:32:49.263113   18442 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100c045a0] 0x100c06e00 <nil>  [] 0s} localhost 53385 <nil> <nil>}
	I0819 04:32:49.263122   18442 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sstopped-upgrade-783000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 stopped-upgrade-783000/g' /etc/hosts;
				else 
					echo '127.0.1.1 stopped-upgrade-783000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0819 04:32:49.333341   18442 main.go:141] libmachine: SSH cmd err, output: <nil>: 
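
The shell fragment above keeps /etc/hosts consistent with the new hostname: if no line ends in the hostname, it either rewrites an existing 127.0.1.1 entry or appends one. A simplified Go sketch of the same idempotent update (string-level only; ensureHostsEntry is a hypothetical helper, and the real script edits the file in place via sudo sed/tee):

    package main

    import (
        "fmt"
        "strings"
    )

    // ensureHostsEntry returns hosts with a "127.0.1.1 <name>" line,
    // rewriting an existing 127.0.1.1 entry or appending a new one,
    // and leaving the content untouched when the name is present.
    func ensureHostsEntry(hosts, name string) string {
        lines := strings.Split(hosts, "\n")
        for _, l := range lines {
            if strings.HasSuffix(l, " "+name) || strings.HasSuffix(l, "\t"+name) {
                return hosts // already present
            }
        }
        for i, l := range lines {
            if strings.HasPrefix(l, "127.0.1.1") {
                lines[i] = "127.0.1.1 " + name
                return strings.Join(lines, "\n")
            }
        }
        return hosts + "127.0.1.1 " + name + "\n"
    }

    func main() {
        fmt.Print(ensureHostsEntry("127.0.0.1 localhost\n", "stopped-upgrade-783000"))
    }
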
	I0819 04:32:49.333354   18442 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19479-15750/.minikube CaCertPath:/Users/jenkins/minikube-integration/19479-15750/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19479-15750/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19479-15750/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19479-15750/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19479-15750/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19479-15750/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19479-15750/.minikube}
	I0819 04:32:49.333372   18442 buildroot.go:174] setting up certificates
	I0819 04:32:49.333381   18442 provision.go:84] configureAuth start
	I0819 04:32:49.333386   18442 provision.go:143] copyHostCerts
	I0819 04:32:49.333473   18442 exec_runner.go:144] found /Users/jenkins/minikube-integration/19479-15750/.minikube/ca.pem, removing ...
	I0819 04:32:49.333484   18442 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19479-15750/.minikube/ca.pem
	I0819 04:32:49.333601   18442 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19479-15750/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19479-15750/.minikube/ca.pem (1082 bytes)
	I0819 04:32:49.333807   18442 exec_runner.go:144] found /Users/jenkins/minikube-integration/19479-15750/.minikube/cert.pem, removing ...
	I0819 04:32:49.333812   18442 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19479-15750/.minikube/cert.pem
	I0819 04:32:49.333876   18442 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19479-15750/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19479-15750/.minikube/cert.pem (1123 bytes)
	I0819 04:32:49.334009   18442 exec_runner.go:144] found /Users/jenkins/minikube-integration/19479-15750/.minikube/key.pem, removing ...
	I0819 04:32:49.334014   18442 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19479-15750/.minikube/key.pem
	I0819 04:32:49.334068   18442 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19479-15750/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19479-15750/.minikube/key.pem (1675 bytes)
	I0819 04:32:49.334169   18442 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19479-15750/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19479-15750/.minikube/certs/ca-key.pem org=jenkins.stopped-upgrade-783000 san=[127.0.0.1 localhost minikube stopped-upgrade-783000]
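
configureAuth regenerates the Docker server certificate so its SANs cover every name the host may dial: 127.0.0.1, localhost, minikube, and the profile name. A compact sketch of issuing a certificate with those SANs using crypto/x509; it is self-signed here for brevity (an assumption for the sketch), whereas the step above signs with the minikube CA key pair:

    package main

    import (
        "crypto/ecdsa"
        "crypto/elliptic"
        "crypto/rand"
        "crypto/x509"
        "crypto/x509/pkix"
        "math/big"
        "net"
        "time"
    )

    // newServerCert issues a DER-encoded server certificate whose SANs
    // cover the given DNS names and IPs.
    func newServerCert(dnsNames []string, ips []net.IP) ([]byte, error) {
        key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
        if err != nil {
            return nil, err
        }
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(1),
            Subject:      pkix.Name{Organization: []string{"jenkins.stopped-upgrade-783000"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration in the config above
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            DNSNames:     dnsNames,
            IPAddresses:  ips,
        }
        // Self-signed for the sketch: the template doubles as the parent.
        return x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
    }

    func main() {
        _, _ = newServerCert([]string{"localhost", "minikube", "stopped-upgrade-783000"},
            []net.IP{net.ParseIP("127.0.0.1")})
    }
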
	I0819 04:32:49.521562   18442 provision.go:177] copyRemoteCerts
	I0819 04:32:49.521617   18442 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0819 04:32:49.521630   18442 sshutil.go:53] new ssh client: &{IP:localhost Port:53385 SSHKeyPath:/Users/jenkins/minikube-integration/19479-15750/.minikube/machines/stopped-upgrade-783000/id_rsa Username:docker}
	I0819 04:32:49.557389   18442 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0819 04:32:49.564590   18442 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19479-15750/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0819 04:32:49.571847   18442 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0819 04:32:49.578556   18442 provision.go:87] duration metric: took 245.176ms to configureAuth
	I0819 04:32:49.578565   18442 buildroot.go:189] setting minikube options for container-runtime
	I0819 04:32:49.578689   18442 config.go:182] Loaded profile config "stopped-upgrade-783000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0819 04:32:49.578732   18442 main.go:141] libmachine: Using SSH client type: native
	I0819 04:32:49.578827   18442 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100c045a0] 0x100c06e00 <nil>  [] 0s} localhost 53385 <nil> <nil>}
	I0819 04:32:49.578831   18442 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0819 04:32:49.646485   18442 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0819 04:32:49.646499   18442 buildroot.go:70] root file system type: tmpfs
	I0819 04:32:49.646547   18442 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0819 04:32:49.646610   18442 main.go:141] libmachine: Using SSH client type: native
	I0819 04:32:49.646733   18442 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100c045a0] 0x100c06e00 <nil>  [] 0s} localhost 53385 <nil> <nil>}
	I0819 04:32:49.646767   18442 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0819 04:32:49.713775   18442 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0819 04:32:49.713820   18442 main.go:141] libmachine: Using SSH client type: native
	I0819 04:32:49.713928   18442 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100c045a0] 0x100c06e00 <nil>  [] 0s} localhost 53385 <nil> <nil>}
	I0819 04:32:49.713936   18442 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0819 04:32:50.089776   18442 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0819 04:32:50.089790   18442 machine.go:96] duration metric: took 997.849584ms to provisionDockerMachine
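
The unit update above is a compare-then-swap: the new unit is written to docker.service.new, and only when diff reports a difference is it moved into place and followed by daemon-reload, enable, and restart (here diff fails because no unit exists yet, so the file is installed and the symlink created). A sketch of the same change-detection step, assuming plain file I/O and leaving the systemctl calls to the caller:

    package main

    import (
        "bytes"
        "os"
    )

    // syncUnit writes content to path only when it differs from what is
    // already there and reports whether a daemon-reload/restart is needed.
    func syncUnit(path string, content []byte) (changed bool, err error) {
        old, err := os.ReadFile(path)
        if err != nil && !os.IsNotExist(err) {
            return false, err
        }
        if err == nil && bytes.Equal(old, content) {
            return false, nil // unit unchanged; skip the restart
        }
        return true, os.WriteFile(path, content, 0o644)
    }

    func main() {
        if changed, err := syncUnit("/tmp/docker.service", []byte("[Unit]\n")); err == nil && changed {
            // caller would run: systemctl daemon-reload && systemctl restart docker
        }
    }
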
	I0819 04:32:50.089797   18442 start.go:293] postStartSetup for "stopped-upgrade-783000" (driver="qemu2")
	I0819 04:32:50.089803   18442 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0819 04:32:50.089873   18442 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0819 04:32:50.089883   18442 sshutil.go:53] new ssh client: &{IP:localhost Port:53385 SSHKeyPath:/Users/jenkins/minikube-integration/19479-15750/.minikube/machines/stopped-upgrade-783000/id_rsa Username:docker}
	I0819 04:32:50.124418   18442 ssh_runner.go:195] Run: cat /etc/os-release
	I0819 04:32:50.125649   18442 info.go:137] Remote host: Buildroot 2021.02.12
	I0819 04:32:50.125658   18442 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19479-15750/.minikube/addons for local assets ...
	I0819 04:32:50.125744   18442 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19479-15750/.minikube/files for local assets ...
	I0819 04:32:50.125879   18442 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19479-15750/.minikube/files/etc/ssl/certs/162402.pem -> 162402.pem in /etc/ssl/certs
	I0819 04:32:50.126012   18442 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0819 04:32:50.128876   18442 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19479-15750/.minikube/files/etc/ssl/certs/162402.pem --> /etc/ssl/certs/162402.pem (1708 bytes)
	I0819 04:32:50.135382   18442 start.go:296] duration metric: took 45.58175ms for postStartSetup
	I0819 04:32:50.135394   18442 fix.go:56] duration metric: took 21.283575125s for fixHost
	I0819 04:32:50.135428   18442 main.go:141] libmachine: Using SSH client type: native
	I0819 04:32:50.135526   18442 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100c045a0] 0x100c06e00 <nil>  [] 0s} localhost 53385 <nil> <nil>}
	I0819 04:32:50.135535   18442 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0819 04:32:50.202164   18442 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724067170.492677546
	
	I0819 04:32:50.202173   18442 fix.go:216] guest clock: 1724067170.492677546
	I0819 04:32:50.202178   18442 fix.go:229] Guest: 2024-08-19 04:32:50.492677546 -0700 PDT Remote: 2024-08-19 04:32:50.135396 -0700 PDT m=+21.407565126 (delta=357.281546ms)
	I0819 04:32:50.202191   18442 fix.go:200] guest clock delta is within tolerance: 357.281546ms
	I0819 04:32:50.202194   18442 start.go:83] releasing machines lock for "stopped-upgrade-783000", held for 21.350385916s
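
The fix.go lines above compare the guest's `date +%s.%N` output against the host clock; the 357ms delta is within tolerance, so the guest clock is left alone (otherwise it would be reset). A sketch of the delta computation; float parsing loses a little sub-microsecond precision, which is irrelevant at millisecond tolerances:

    package main

    import (
        "fmt"
        "strconv"
        "strings"
        "time"
    )

    // clockDelta parses `date +%s.%N` output from the guest and returns
    // how far the guest clock is ahead of (or behind) the host clock.
    func clockDelta(guestOut string, host time.Time) (time.Duration, error) {
        secs, err := strconv.ParseFloat(strings.TrimSpace(guestOut), 64)
        if err != nil {
            return 0, err
        }
        guest := time.Unix(0, int64(secs*float64(time.Second)))
        return guest.Sub(host), nil
    }

    func main() {
        d, _ := clockDelta("1724067170.492677546", time.Unix(1724067170, 135396000))
        fmt.Println(d) // roughly the 357ms delta seen in the log
    }
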
	I0819 04:32:50.202263   18442 ssh_runner.go:195] Run: cat /version.json
	I0819 04:32:50.202274   18442 sshutil.go:53] new ssh client: &{IP:localhost Port:53385 SSHKeyPath:/Users/jenkins/minikube-integration/19479-15750/.minikube/machines/stopped-upgrade-783000/id_rsa Username:docker}
	I0819 04:32:50.202263   18442 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0819 04:32:50.202315   18442 sshutil.go:53] new ssh client: &{IP:localhost Port:53385 SSHKeyPath:/Users/jenkins/minikube-integration/19479-15750/.minikube/machines/stopped-upgrade-783000/id_rsa Username:docker}
	W0819 04:32:50.202914   18442 sshutil.go:64] dial failure (will retry): dial tcp [::1]:53385: connect: connection refused
	I0819 04:32:50.202936   18442 retry.go:31] will retry after 301.263493ms: dial tcp [::1]:53385: connect: connection refused
	W0819 04:32:50.561695   18442 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0819 04:32:50.561902   18442 ssh_runner.go:195] Run: systemctl --version
	I0819 04:32:50.566679   18442 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0819 04:32:50.570593   18442 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0819 04:32:50.570650   18442 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0819 04:32:50.577241   18442 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0819 04:32:50.586098   18442 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0819 04:32:50.586126   18442 start.go:495] detecting cgroup driver to use...
	I0819 04:32:50.586265   18442 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0819 04:32:50.597383   18442 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.7"|' /etc/containerd/config.toml"
	I0819 04:32:50.602124   18442 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0819 04:32:50.605998   18442 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0819 04:32:50.606032   18442 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0819 04:32:50.610018   18442 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0819 04:32:50.613615   18442 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0819 04:32:50.617225   18442 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0819 04:32:50.620699   18442 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0819 04:32:50.624236   18442 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0819 04:32:50.627606   18442 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0819 04:32:50.630371   18442 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0819 04:32:50.633396   18442 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0819 04:32:50.636657   18442 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0819 04:32:50.639653   18442 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 04:32:50.715749   18442 ssh_runner.go:195] Run: sudo systemctl restart containerd
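
The run of sed commands above rewrites /etc/containerd/config.toml before the restart: the pause image is pinned, SystemdCgroup is forced to false (minikube uses the cgroupfs driver here), the legacy io.containerd.runtime.v1.linux and runc.v1 runtimes are mapped to runc.v2, and the CNI conf_dir is pointed at /etc/cni/net.d. A sketch of one of those edits as a Go regexp rewrite; setSystemdCgroup is a hypothetical helper, while the real step runs sed over SSH as shown:

    package main

    import (
        "fmt"
        "regexp"
        "strconv"
    )

    // setSystemdCgroup rewrites the SystemdCgroup key in a containerd
    // config.toml, preserving indentation like the logged sed command.
    func setSystemdCgroup(conf string, enabled bool) string {
        re := regexp.MustCompile(`(?m)^(\s*)SystemdCgroup = .*$`)
        return re.ReplaceAllString(conf, "${1}SystemdCgroup = "+strconv.FormatBool(enabled))
    }

    func main() {
        fmt.Print(setSystemdCgroup("    SystemdCgroup = true\n", false))
    }
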
	I0819 04:32:50.722875   18442 start.go:495] detecting cgroup driver to use...
	I0819 04:32:50.722953   18442 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0819 04:32:50.727925   18442 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0819 04:32:50.733001   18442 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0819 04:32:50.741295   18442 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0819 04:32:50.746319   18442 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0819 04:32:50.751037   18442 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0819 04:32:50.793185   18442 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0819 04:32:50.798128   18442 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0819 04:32:50.803214   18442 ssh_runner.go:195] Run: which cri-dockerd
	I0819 04:32:50.804524   18442 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0819 04:32:50.807363   18442 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0819 04:32:50.812235   18442 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0819 04:32:50.902553   18442 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0819 04:32:50.980197   18442 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0819 04:32:50.980257   18442 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0819 04:32:50.985670   18442 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 04:32:51.063709   18442 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0819 04:32:52.220291   18442 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.156593208s)
	I0819 04:32:52.220362   18442 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0819 04:32:52.224696   18442 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0819 04:32:52.229280   18442 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0819 04:32:52.308145   18442 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0819 04:32:52.384708   18442 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 04:32:52.463045   18442 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0819 04:32:52.468572   18442 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0819 04:32:52.472969   18442 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 04:32:52.552146   18442 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0819 04:32:52.589309   18442 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0819 04:32:52.589388   18442 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0819 04:32:52.591705   18442 start.go:563] Will wait 60s for crictl version
	I0819 04:32:52.591763   18442 ssh_runner.go:195] Run: which crictl
	I0819 04:32:52.593222   18442 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0819 04:32:52.608935   18442 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.16
	RuntimeApiVersion:  1.41.0
	I0819 04:32:52.609003   18442 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0819 04:32:52.626157   18442 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0819 04:32:52.644728   18442 out.go:235] * Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	I0819 04:32:52.644852   18442 ssh_runner.go:195] Run: grep 10.0.2.2	host.minikube.internal$ /etc/hosts
	I0819 04:32:52.646294   18442 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "10.0.2.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0819 04:32:52.649929   18442 kubeadm.go:883] updating cluster {Name:stopped-upgrade-783000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:53420 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-783000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s} ...
	I0819 04:32:52.649974   18442 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0819 04:32:52.650015   18442 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0819 04:32:52.663145   18442 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0819 04:32:52.663155   18442 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0819 04:32:52.663199   18442 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0819 04:32:52.666156   18442 ssh_runner.go:195] Run: which lz4
	I0819 04:32:52.667508   18442 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0819 04:32:52.668785   18442 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0819 04:32:52.668795   18442 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19479-15750/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (359514331 bytes)
	I0819 04:32:53.600917   18442 docker.go:649] duration metric: took 933.466917ms to copy over tarball
	I0819 04:32:53.600974   18442 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0819 04:32:54.774479   18442 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.173515375s)
	I0819 04:32:54.774494   18442 ssh_runner.go:146] rm: /preloaded.tar.lz4
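
The preload step avoids pulling images over the network: the ~360MB lz4 tarball of pre-seeded /var/lib/docker content is scp'd into the VM, unpacked over /var, then deleted. A sketch of the extraction step, shelling out to tar the way the ssh_runner command above does (extractPreload is a hypothetical helper; paths illustrative):

    package main

    import (
        "os"
        "os/exec"
    )

    // extractPreload unpacks the lz4-compressed preload tarball over
    // /var, preserving security.capability xattrs as the logged tar
    // invocation does.
    func extractPreload(tarball string) error {
        cmd := exec.Command("sudo", "tar",
            "--xattrs", "--xattrs-include", "security.capability",
            "-I", "lz4", "-C", "/var", "-xf", tarball)
        cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
        return cmd.Run()
    }

    func main() {
        if err := extractPreload("/preloaded.tar.lz4"); err != nil {
            os.Exit(1)
        }
    }
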
	I0819 04:32:54.790059   18442 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0819 04:32:54.793394   18442 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2523 bytes)
	I0819 04:32:54.798748   18442 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 04:32:54.877549   18442 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0819 04:32:56.567891   18442 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.690334625s)
	I0819 04:32:56.568014   18442 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0819 04:32:56.579328   18442 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	<none>:<none>
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0819 04:32:56.579337   18442 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0819 04:32:56.579342   18442 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.1 registry.k8s.io/kube-controller-manager:v1.24.1 registry.k8s.io/kube-scheduler:v1.24.1 registry.k8s.io/kube-proxy:v1.24.1 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0819 04:32:56.583813   18442 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0819 04:32:56.585810   18442 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.24.1
	I0819 04:32:56.587815   18442 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0819 04:32:56.587871   18442 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.24.1
	I0819 04:32:56.590222   18442 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0819 04:32:56.590267   18442 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.1
	I0819 04:32:56.591546   18442 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0819 04:32:56.592248   18442 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.1
	I0819 04:32:56.594792   18442 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0819 04:32:56.594791   18442 image.go:135] retrieving image: registry.k8s.io/pause:3.7
	I0819 04:32:56.594963   18442 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0819 04:32:56.596752   18442 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0819 04:32:56.596839   18442 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.24.1
	I0819 04:32:56.597800   18442 image.go:178] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0819 04:32:56.597827   18442 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0819 04:32:56.598722   18442 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.1
	I0819 04:32:57.046007   18442 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.1
	I0819 04:32:57.049504   18442 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I0819 04:32:57.050792   18442 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I0819 04:32:57.051017   18442 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.1
	I0819 04:32:57.066370   18442 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.1" does not exist at hash "000c19baf6bba51ff7ae5449f4c7a16d9190cef0263f58070dbf62cea9c4982f" in container runtime
	I0819 04:32:57.066400   18442 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.24.1
	I0819 04:32:57.066493   18442 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.24.1
	I0819 04:32:57.072207   18442 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.1
	I0819 04:32:57.085041   18442 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.1
	W0819 04:32:57.090972   18442 image.go:283] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0819 04:32:57.091086   18442 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0819 04:32:57.093260   18442 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550" in container runtime
	I0819 04:32:57.093279   18442 docker.go:337] Removing image: registry.k8s.io/pause:3.7
	I0819 04:32:57.093285   18442 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "a9a710bb96df080e6b9c720eb85dc5b832ff84abf77263548d74fedec6466a5a" in container runtime
	I0819 04:32:57.093295   18442 docker.go:337] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0819 04:32:57.093321   18442 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.5.3-0
	I0819 04:32:57.093321   18442 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.7
	I0819 04:32:57.093394   18442 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.1" does not exist at hash "f61bbe9259d7caa580deb6c8e4bfd1780c7b5887efe3aa3adc7cc74f68a27c1b" in container runtime
	I0819 04:32:57.093407   18442 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0819 04:32:57.093430   18442 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.24.1
	I0819 04:32:57.111706   18442 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.1" does not exist at hash "7c5896a75862a8bc252122185a929cec1393db2c525f6440137d4fbf46bbf6f9" in container runtime
	I0819 04:32:57.111732   18442 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.24.1
	I0819 04:32:57.111791   18442 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.24.1
	I0819 04:32:57.111866   18442 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19479-15750/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1
	I0819 04:32:57.138827   18442 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.1" needs transfer: "registry.k8s.io/kube-proxy:v1.24.1" does not exist at hash "fcbd620bbac080b658602c597709b377cb2b3fec134a097a27f94cba9b2ed2fa" in container runtime
	I0819 04:32:57.138844   18442 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "6af7f860a8197bfa3fdb7dec2061aa33870253e87a1e91c492d55b8a4fd38d14" in container runtime
	I0819 04:32:57.138851   18442 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.24.1
	I0819 04:32:57.138854   18442 docker.go:337] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0819 04:32:57.138904   18442 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.24.1
	I0819 04:32:57.138904   18442 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0819 04:32:57.139767   18442 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19479-15750/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I0819 04:32:57.139792   18442 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19479-15750/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0819 04:32:57.139838   18442 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19479-15750/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1
	I0819 04:32:57.139870   18442 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0
	I0819 04:32:57.139870   18442 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I0819 04:32:57.154687   18442 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19479-15750/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1
	I0819 04:32:57.158483   18442 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19479-15750/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0819 04:32:57.158493   18442 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19479-15750/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1
	I0819 04:32:57.158507   18442 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.7: stat -c "%s %y" /var/lib/minikube/images/pause_3.7: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.7': No such file or directory
	I0819 04:32:57.158518   18442 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19479-15750/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 --> /var/lib/minikube/images/pause_3.7 (268288 bytes)
	I0819 04:32:57.158545   18442 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.5.3-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.5.3-0': No such file or directory
	I0819 04:32:57.158557   18442 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19479-15750/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 --> /var/lib/minikube/images/etcd_3.5.3-0 (81117184 bytes)
	I0819 04:32:57.158595   18442 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I0819 04:32:57.178441   18442 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.8.6: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.8.6': No such file or directory
	I0819 04:32:57.178473   18442 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19479-15750/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 --> /var/lib/minikube/images/coredns_v1.8.6 (12318720 bytes)
	W0819 04:32:57.190850   18442 image.go:283] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0819 04:32:57.190972   18442 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0819 04:32:57.202487   18442 docker.go:304] Loading image: /var/lib/minikube/images/pause_3.7
	I0819 04:32:57.202501   18442 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.7 | docker load"
	I0819 04:32:57.282977   18442 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19479-15750/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 from cache
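
Each image that "needs transfer" goes through the same three-step pipeline visible above: stat the tarball inside the VM, scp it from the host-side cache when the stat fails, then pipe it into docker load. A sketch of that sequence with the SSH runner abstracted as plain function parameters (stand-ins, not minikube's API):

    package main

    import "fmt"

    // loadCached stats the image tarball inside the VM, copies it over
    // when missing, then pipes it into `docker load` -- the stat/scp/load
    // sequence in the log above. run and scp stand in for the ssh_runner.
    func loadCached(run func(cmd string) error, scp func(local, remote string) error,
        local, remote string) error {
        if err := run(`stat -c "%s %y" ` + remote); err != nil {
            // Not present in the VM yet: transfer from the host-side cache.
            if err := scp(local, remote); err != nil {
                return err
            }
        }
        return run(fmt.Sprintf(`/bin/bash -c "sudo cat %s | docker load"`, remote))
    }

    func main() {
        noop := func(string) error { return nil }
        _ = loadCached(noop, func(string, string) error { return nil },
            "pause_3.7", "/var/lib/minikube/images/pause_3.7")
    }
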
	I0819 04:32:57.282998   18442 docker.go:304] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0819 04:32:57.283004   18442 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.8.6 | docker load"
	I0819 04:32:57.283016   18442 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I0819 04:32:57.283034   18442 docker.go:337] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0819 04:32:57.283086   18442 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0819 04:32:57.324393   18442 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19479-15750/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0819 04:32:57.324540   18442 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0819 04:32:57.404333   18442 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19479-15750/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I0819 04:32:57.404326   18442 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I0819 04:32:57.404372   18442 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19479-15750/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I0819 04:32:57.475477   18442 docker.go:304] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0819 04:32:57.475493   18442 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load"
	I0819 04:32:57.784928   18442 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19479-15750/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0819 04:32:57.784952   18442 docker.go:304] Loading image: /var/lib/minikube/images/etcd_3.5.3-0
	I0819 04:32:57.784962   18442 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/etcd_3.5.3-0 | docker load"
	I0819 04:32:57.918931   18442 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19479-15750/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 from cache
	I0819 04:32:57.918973   18442 cache_images.go:92] duration metric: took 1.339655041s to LoadCachedImages
	W0819 04:32:57.919017   18442 out.go:270] X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/19479-15750/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/19479-15750/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1: no such file or directory
	I0819 04:32:57.919022   18442 kubeadm.go:934] updating node { 10.0.2.15 8443 v1.24.1 docker true true} ...
	I0819 04:32:57.919076   18442 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=stopped-upgrade-783000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.0.2.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-783000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
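
The kubelet drop-in above uses the same ExecStart-clearing trick as the docker unit: an empty ExecStart= wipes the command inherited from the base unit, then the versioned binary is started with the node-specific flags (cri-dockerd socket, hostname override, node IP). A sketch of rendering that drop-in; kubeletDropIn and its parameters are illustrative, not minikube's template code:

    package main

    import (
        "fmt"
        "os"
    )

    // kubeletDropIn renders a minimal systemd drop-in like the one
    // logged above; the empty ExecStart= line clears the command
    // inherited from the base unit before redefining it.
    func kubeletDropIn(version, node, ip string) string {
        return fmt.Sprintf("[Service]\nExecStart=\nExecStart=/var/lib/minikube/binaries/%s/kubelet"+
            " --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock"+
            " --hostname-override=%s --node-ip=%s\n", version, node, ip)
    }

    func main() {
        os.Stdout.WriteString(kubeletDropIn("v1.24.1", "stopped-upgrade-783000", "10.0.2.15"))
    }
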
	I0819 04:32:57.919146   18442 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0819 04:32:57.933407   18442 cni.go:84] Creating CNI manager for ""
	I0819 04:32:57.933421   18442 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0819 04:32:57.933425   18442 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0819 04:32:57.933433   18442 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.0.2.15 APIServerPort:8443 KubernetesVersion:v1.24.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:stopped-upgrade-783000 NodeName:stopped-upgrade-783000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.0.2.15"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.0.2.15 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0819 04:32:57.933498   18442 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.0.2.15
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "stopped-upgrade-783000"
	  kubeletExtraArgs:
	    node-ip: 10.0.2.15
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.0.2.15"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
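
The generated file above is one multi-document YAML stream: InitConfiguration (node registration, cri-dockerd socket), ClusterConfiguration (API server SANs, admission plugins, pod/service subnets), KubeletConfiguration (cgroupfs driver, disk eviction disabled via the 0%/100% thresholds), and KubeProxyConfiguration. A sketch of splitting such a stream into its documents before decoding; actual decoding would use a YAML library such as gopkg.in/yaml.v3 (assumed, not shown):

    package main

    import (
        "fmt"
        "strings"
    )

    // splitDocs separates a multi-document YAML stream on the "---"
    // document markers, dropping empty documents.
    func splitDocs(stream string) []string {
        var docs []string
        for _, d := range strings.Split(stream, "\n---\n") {
            if s := strings.TrimSpace(d); s != "" {
                docs = append(docs, s)
            }
        }
        return docs
    }

    func main() {
        cfg := "kind: InitConfiguration\n---\nkind: ClusterConfiguration\n---\nkind: KubeletConfiguration\n"
        fmt.Println(len(splitDocs(cfg))) // 3
    }
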
	
	I0819 04:32:57.933554   18442 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.1
	I0819 04:32:57.936714   18442 binaries.go:44] Found k8s binaries, skipping transfer
	I0819 04:32:57.936744   18442 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0819 04:32:57.939619   18442 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I0819 04:32:57.944539   18442 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0819 04:32:57.949555   18442 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2096 bytes)
	I0819 04:32:57.954943   18442 ssh_runner.go:195] Run: grep 10.0.2.15	control-plane.minikube.internal$ /etc/hosts
	I0819 04:32:57.956182   18442 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "10.0.2.15	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0819 04:32:57.959862   18442 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 04:32:58.036406   18442 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0819 04:32:58.042574   18442 certs.go:68] Setting up /Users/jenkins/minikube-integration/19479-15750/.minikube/profiles/stopped-upgrade-783000 for IP: 10.0.2.15
	I0819 04:32:58.042584   18442 certs.go:194] generating shared ca certs ...
	I0819 04:32:58.042593   18442 certs.go:226] acquiring lock for ca certs: {Name:mk35a9cd01f436a7a54821e5f775d6ab16b5867a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 04:32:58.042769   18442 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19479-15750/.minikube/ca.key
	I0819 04:32:58.042822   18442 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19479-15750/.minikube/proxy-client-ca.key
	I0819 04:32:58.042827   18442 certs.go:256] generating profile certs ...
	I0819 04:32:58.042922   18442 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19479-15750/.minikube/profiles/stopped-upgrade-783000/client.key
	I0819 04:32:58.042962   18442 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19479-15750/.minikube/profiles/stopped-upgrade-783000/apiserver.key.bab2fd25
	I0819 04:32:58.042974   18442 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19479-15750/.minikube/profiles/stopped-upgrade-783000/apiserver.crt.bab2fd25 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.0.2.15]
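
The SAN list for the apiserver cert above combines loopback, the node IP (10.0.2.15), and 10.96.0.1, the first address of the 10.96.0.0/12 service CIDR (the in-cluster kubernetes.default service IP). A sketch of deriving that first address; firstServiceIP is a hypothetical helper, IPv4-only for brevity:

    package main

    import (
        "fmt"
        "net"
    )

    // firstServiceIP returns the first usable address of a service CIDR,
    // e.g. 10.96.0.1 for 10.96.0.0/12 -- the extra SAN seen above.
    func firstServiceIP(cidr string) (net.IP, error) {
        _, ipnet, err := net.ParseCIDR(cidr)
        if err != nil {
            return nil, err
        }
        ip := ipnet.IP.To4()
        if ip == nil {
            return nil, fmt.Errorf("IPv4 CIDR expected: %s", cidr)
        }
        first := make(net.IP, len(ip))
        copy(first, ip)
        first[3]++ // network address + 1
        return first, nil
    }

    func main() {
        ip, _ := firstServiceIP("10.96.0.0/12")
        fmt.Println(ip) // 10.96.0.1
    }
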
	I0819 04:32:58.229792   18442 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19479-15750/.minikube/profiles/stopped-upgrade-783000/apiserver.crt.bab2fd25 ...
	I0819 04:32:58.229805   18442 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19479-15750/.minikube/profiles/stopped-upgrade-783000/apiserver.crt.bab2fd25: {Name:mk2fee211061dd1b14760780f701508148afe02f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 04:32:58.230885   18442 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19479-15750/.minikube/profiles/stopped-upgrade-783000/apiserver.key.bab2fd25 ...
	I0819 04:32:58.230895   18442 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19479-15750/.minikube/profiles/stopped-upgrade-783000/apiserver.key.bab2fd25: {Name:mkd2735e8538c030d9a2b9c87f6dcf8ff54b0762 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 04:32:58.231056   18442 certs.go:381] copying /Users/jenkins/minikube-integration/19479-15750/.minikube/profiles/stopped-upgrade-783000/apiserver.crt.bab2fd25 -> /Users/jenkins/minikube-integration/19479-15750/.minikube/profiles/stopped-upgrade-783000/apiserver.crt
	I0819 04:32:58.231220   18442 certs.go:385] copying /Users/jenkins/minikube-integration/19479-15750/.minikube/profiles/stopped-upgrade-783000/apiserver.key.bab2fd25 -> /Users/jenkins/minikube-integration/19479-15750/.minikube/profiles/stopped-upgrade-783000/apiserver.key
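
The apiserver serving cert is regenerated here because its IP SANs must cover every address clients may dial: the in-cluster service VIP 10.96.0.1, loopback, 10.0.0.1, and the node IP 10.0.2.15. A condensed crypto/x509 sketch of issuing such a cert against the shared CA (illustrative; minikube's crypto.go differs in detail, and key size, serial and lifetime handling are simplified):

    package sketch

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "math/big"
        "net"
        "time"
    )

    // Issue a serving cert carrying the IP SANs logged above, signed by
    // the shared minikube CA.
    func issueAPIServerCert(ca *x509.Certificate, caKey *rsa.PrivateKey) ([]byte, *rsa.PrivateKey, error) {
        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            return nil, nil, err
        }
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(1),
            Subject:      pkix.Name{CommonName: "minikube"},
            IPAddresses: []net.IP{ // service VIP, loopback, and node IPs
                net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
                net.ParseIP("10.0.0.1"), net.ParseIP("10.0.2.15"),
            },
            NotBefore:   time.Now(),
            NotAfter:    time.Now().Add(3 * 365 * 24 * time.Hour),
            KeyUsage:    x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
        }
        der, err := x509.CreateCertificate(rand.Reader, tmpl, ca, &key.PublicKey, caKey)
        return der, key, err
    }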
	I0819 04:32:58.231367   18442 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19479-15750/.minikube/profiles/stopped-upgrade-783000/proxy-client.key
	I0819 04:32:58.231514   18442 certs.go:484] found cert: /Users/jenkins/minikube-integration/19479-15750/.minikube/certs/16240.pem (1338 bytes)
	W0819 04:32:58.231542   18442 certs.go:480] ignoring /Users/jenkins/minikube-integration/19479-15750/.minikube/certs/16240_empty.pem, impossibly tiny 0 bytes
	I0819 04:32:58.231550   18442 certs.go:484] found cert: /Users/jenkins/minikube-integration/19479-15750/.minikube/certs/ca-key.pem (1675 bytes)
	I0819 04:32:58.231570   18442 certs.go:484] found cert: /Users/jenkins/minikube-integration/19479-15750/.minikube/certs/ca.pem (1082 bytes)
	I0819 04:32:58.231593   18442 certs.go:484] found cert: /Users/jenkins/minikube-integration/19479-15750/.minikube/certs/cert.pem (1123 bytes)
	I0819 04:32:58.231612   18442 certs.go:484] found cert: /Users/jenkins/minikube-integration/19479-15750/.minikube/certs/key.pem (1675 bytes)
	I0819 04:32:58.231652   18442 certs.go:484] found cert: /Users/jenkins/minikube-integration/19479-15750/.minikube/files/etc/ssl/certs/162402.pem (1708 bytes)
	I0819 04:32:58.231999   18442 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19479-15750/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0819 04:32:58.239533   18442 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19479-15750/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0819 04:32:58.246630   18442 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19479-15750/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0819 04:32:58.253185   18442 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19479-15750/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0819 04:32:58.260552   18442 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19479-15750/.minikube/profiles/stopped-upgrade-783000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0819 04:32:58.268256   18442 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19479-15750/.minikube/profiles/stopped-upgrade-783000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0819 04:32:58.275134   18442 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19479-15750/.minikube/profiles/stopped-upgrade-783000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0819 04:32:58.281857   18442 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19479-15750/.minikube/profiles/stopped-upgrade-783000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0819 04:32:58.289130   18442 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19479-15750/.minikube/files/etc/ssl/certs/162402.pem --> /usr/share/ca-certificates/162402.pem (1708 bytes)
	I0819 04:32:58.296119   18442 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19479-15750/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0819 04:32:58.302851   18442 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19479-15750/.minikube/certs/16240.pem --> /usr/share/ca-certificates/16240.pem (1338 bytes)
	I0819 04:32:58.309323   18442 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0819 04:32:58.314458   18442 ssh_runner.go:195] Run: openssl version
	I0819 04:32:58.316225   18442 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/162402.pem && ln -fs /usr/share/ca-certificates/162402.pem /etc/ssl/certs/162402.pem"
	I0819 04:32:58.319124   18442 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/162402.pem
	I0819 04:32:58.320416   18442 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 19 11:16 /usr/share/ca-certificates/162402.pem
	I0819 04:32:58.320439   18442 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/162402.pem
	I0819 04:32:58.322203   18442 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/162402.pem /etc/ssl/certs/3ec20f2e.0"
	I0819 04:32:58.325343   18442 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0819 04:32:58.328738   18442 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0819 04:32:58.330546   18442 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 19 11:28 /usr/share/ca-certificates/minikubeCA.pem
	I0819 04:32:58.330566   18442 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0819 04:32:58.332353   18442 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0819 04:32:58.335676   18442 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16240.pem && ln -fs /usr/share/ca-certificates/16240.pem /etc/ssl/certs/16240.pem"
	I0819 04:32:58.338515   18442 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16240.pem
	I0819 04:32:58.339907   18442 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 19 11:16 /usr/share/ca-certificates/16240.pem
	I0819 04:32:58.339927   18442 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16240.pem
	I0819 04:32:58.341751   18442 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16240.pem /etc/ssl/certs/51391683.0"
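
The hash-and-symlink dance above is how OpenSSL locates trust anchors: openssl x509 -hash -noout prints the subject hash, and the verifier later looks the CA up as /etc/ssl/certs/<hash>.0 (e.g. b5213941.0 for minikubeCA). A sketch of the install step, shelling out the same way minikube does over SSH (assumed to run with root privileges for the symlink):

    package sketch

    import (
        "os/exec"
        "path/filepath"
        "strings"
    )

    // Compute the subject hash of a CA cert and link it into the
    // OpenSSL lookup directory under <hash>.0.
    func installCA(pemPath string) error {
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
        if err != nil {
            return err
        }
        hash := strings.TrimSpace(string(out))
        return exec.Command("sudo", "ln", "-fs", pemPath,
            filepath.Join("/etc/ssl/certs", hash+".0")).Run()
    }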
	I0819 04:32:58.345095   18442 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0819 04:32:58.346605   18442 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0819 04:32:58.348601   18442 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0819 04:32:58.350654   18442 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0819 04:32:58.352789   18442 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0819 04:32:58.354887   18442 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0819 04:32:58.356669   18442 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
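
Each -checkend 86400 run asks openssl whether the certificate expires within the next 86400 seconds (24 hours); a zero exit on every component cert is what lets the restart path reuse the existing certs instead of reissuing them. A rough Go equivalent of the same check:

    package sketch

    import (
        "crypto/x509"
        "encoding/pem"
        "errors"
        "time"
    )

    // Report whether a PEM-encoded cert expires within the next 24h,
    // mirroring `openssl x509 -noout -checkend 86400`.
    func expiresWithinDay(pemBytes []byte) (bool, error) {
        block, _ := pem.Decode(pemBytes)
        if block == nil {
            return false, errors.New("no PEM block found")
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return false, err
        }
        return time.Now().Add(24 * time.Hour).After(cert.NotAfter), nil
    }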
	I0819 04:32:58.358541   18442 kubeadm.go:392] StartCluster: {Name:stopped-upgrade-783000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:53420 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-783000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0819 04:32:58.358604   18442 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0819 04:32:58.370481   18442 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0819 04:32:58.374193   18442 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0819 04:32:58.374204   18442 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0819 04:32:58.374249   18442 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0819 04:32:58.377596   18442 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0819 04:32:58.377910   18442 kubeconfig.go:47] verify endpoint returned: get endpoint: "stopped-upgrade-783000" does not appear in /Users/jenkins/minikube-integration/19479-15750/kubeconfig
	I0819 04:32:58.378009   18442 kubeconfig.go:62] /Users/jenkins/minikube-integration/19479-15750/kubeconfig needs updating (will repair): [kubeconfig missing "stopped-upgrade-783000" cluster setting, kubeconfig missing "stopped-upgrade-783000" context setting]
	I0819 04:32:58.378204   18442 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19479-15750/kubeconfig: {Name:mkc1a7b531aa1d2d8dba135f7548c07a5ca371ca Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 04:32:58.378652   18442 kapi.go:59] client config for stopped-upgrade-783000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19479-15750/.minikube/profiles/stopped-upgrade-783000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19479-15750/.minikube/profiles/stopped-upgrade-783000/client.key", CAFile:"/Users/jenkins/minikube-integration/19479-15750/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1021bd610), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
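
The rest.Config logged above has the standard client-go shape: mutual TLS using the profile's client cert/key, verified against the cluster CA. Rebuilt by hand as a sketch (paths abbreviated; minikube's kapi.go assembles this from the profile itself):

    package sketch

    import (
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/rest"
    )

    // Build a clientset for the profile's apiserver endpoint using the
    // cert files the log shows above.
    func profileClient() (*kubernetes.Clientset, error) {
        cfg := &rest.Config{
            Host: "https://10.0.2.15:8443",
            TLSClientConfig: rest.TLSClientConfig{
                CertFile: ".minikube/profiles/stopped-upgrade-783000/client.crt",
                KeyFile:  ".minikube/profiles/stopped-upgrade-783000/client.key",
                CAFile:   ".minikube/ca.crt",
            },
        }
        return kubernetes.NewForConfig(cfg)
    }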
	I0819 04:32:58.378988   18442 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0819 04:32:58.381658   18442 kubeadm.go:640] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -11,7 +11,7 @@
	       - signing
	       - authentication
	 nodeRegistration:
	-  criSocket: /var/run/cri-dockerd.sock
	+  criSocket: unix:///var/run/cri-dockerd.sock
	   name: "stopped-upgrade-783000"
	   kubeletExtraArgs:
	     node-ip: 10.0.2.15
	@@ -49,7 +49,9 @@
	 authentication:
	   x509:
	     clientCAFile: /var/lib/minikube/certs/ca.crt
	-cgroupDriver: systemd
	+cgroupDriver: cgroupfs
	+hairpinMode: hairpin-veth
	+runtimeRequestTimeout: 15m
	 clusterDomain: "cluster.local"
	 # disable disk resource management by default
	 imageGCHighThresholdPercent: 100
	
	-- /stdout --
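
The drift that triggers the reconfigure is visible in the diff itself: the CRI socket gains its unix:// scheme, the kubelet cgroup driver flips from systemd to cgroupfs, and hairpinMode plus runtimeRequestTimeout are added. Detection is nothing more than diff's exit status; a sketch (run locally here for brevity, where minikube actually sends the command over SSH):

    package sketch

    import "os/exec"

    // `diff -u` exits 1 when the freshly rendered kubeadm.yaml.new
    // differs from what is on disk -- the cue to overwrite it and rerun
    // the kubeadm phases.
    func configDrifted() bool {
        err := exec.Command("sudo", "diff", "-u",
            "/var/tmp/minikube/kubeadm.yaml",
            "/var/tmp/minikube/kubeadm.yaml.new").Run()
        return err != nil // exit status 1 => files differ
    }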
	I0819 04:32:58.381664   18442 kubeadm.go:1160] stopping kube-system containers ...
	I0819 04:32:58.381703   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0819 04:32:58.392171   18442 docker.go:483] Stopping containers: [069748194d02 3985c1d649a7 f269961c577e 390cd57e246c 3a9b46914d25 235331fd2fc2 10fabcb359f6 534015cf45e4]
	I0819 04:32:58.392234   18442 ssh_runner.go:195] Run: docker stop 069748194d02 3985c1d649a7 f269961c577e 390cd57e246c 3a9b46914d25 235331fd2fc2 10fabcb359f6 534015cf45e4
	I0819 04:32:58.402666   18442 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0819 04:32:58.408232   18442 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0819 04:32:58.411154   18442 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0819 04:32:58.411160   18442 kubeadm.go:157] found existing configuration files:
	
	I0819 04:32:58.411184   18442 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:53420 /etc/kubernetes/admin.conf
	I0819 04:32:58.413593   18442 kubeadm.go:163] "https://control-plane.minikube.internal:53420" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:53420 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0819 04:32:58.413618   18442 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0819 04:32:58.416667   18442 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:53420 /etc/kubernetes/kubelet.conf
	I0819 04:32:58.419675   18442 kubeadm.go:163] "https://control-plane.minikube.internal:53420" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:53420 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0819 04:32:58.419697   18442 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0819 04:32:58.422081   18442 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:53420 /etc/kubernetes/controller-manager.conf
	I0819 04:32:58.425018   18442 kubeadm.go:163] "https://control-plane.minikube.internal:53420" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:53420 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0819 04:32:58.425043   18442 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0819 04:32:58.428229   18442 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:53420 /etc/kubernetes/scheduler.conf
	I0819 04:32:58.431166   18442 kubeadm.go:163] "https://control-plane.minikube.internal:53420" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:53420 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0819 04:32:58.431197   18442 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0819 04:32:58.433717   18442 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0819 04:32:58.436965   18442 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0819 04:32:58.460269   18442 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0819 04:32:58.826515   18442 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0819 04:32:58.957907   18442 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0819 04:32:58.978663   18442 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
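
Because existing configuration was found, the restart path re-runs individual kubeadm init phases in a fixed order (certs, kubeconfig, kubelet-start, control-plane, etcd) rather than performing a full init. Sketched below, with run standing in as an assumed SSH-command helper:

    package sketch

    import "fmt"

    // Re-run the kubeadm init phases the log shows above, in order,
    // stopping at the first failure.
    func runInitPhases(run func(string) error) error {
        for _, phase := range []string{
            "certs all", "kubeconfig all", "kubelet-start",
            "control-plane all", "etcd local",
        } {
            cmd := fmt.Sprintf(`sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase %s --config /var/tmp/minikube/kubeadm.yaml`, phase)
            if err := run(cmd); err != nil {
                return err
            }
        }
        return nil
    }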
	I0819 04:32:58.997368   18442 api_server.go:52] waiting for apiserver process to appear ...
	I0819 04:32:58.997448   18442 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 04:32:59.499780   18442 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 04:32:59.997749   18442 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 04:33:00.002112   18442 api_server.go:72] duration metric: took 1.004767583s to wait for apiserver process to appear ...
	I0819 04:33:00.002124   18442 api_server.go:88] waiting for apiserver healthz status ...
	I0819 04:33:00.002137   18442 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 04:33:05.004155   18442 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 04:33:05.004215   18442 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 04:33:10.004296   18442 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 04:33:10.004333   18442 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 04:33:15.004608   18442 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 04:33:15.004676   18442 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 04:33:20.005405   18442 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 04:33:20.005462   18442 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 04:33:25.006771   18442 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 04:33:25.006862   18442 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 04:33:30.007956   18442 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 04:33:30.007982   18442 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 04:33:35.009217   18442 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 04:33:35.009259   18442 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 04:33:40.010846   18442 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 04:33:40.010889   18442 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 04:33:45.012969   18442 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 04:33:45.013012   18442 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 04:33:50.015174   18442 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 04:33:50.015215   18442 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 04:33:55.017392   18442 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 04:33:55.017439   18442 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 04:34:00.019639   18442 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
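
The minute of probes above is a single loop: each GET to /healthz carries a roughly 5s client timeout, and once the budget is spent without a healthy answer, the code falls through to collecting component logs. A compact sketch (InsecureSkipVerify is a shortcut here; minikube verifies against its own CA):

    package sketch

    import (
        "crypto/tls"
        "net/http"
        "time"
    )

    // Poll the apiserver's /healthz until it answers 200 or the
    // deadline passes; the caller gathers diagnostics on false.
    func waitForHealthz(deadline time.Time) bool {
        client := &http.Client{
            Timeout: 5 * time.Second,
            Transport: &http.Transport{
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
            },
        }
        for time.Now().Before(deadline) {
            resp, err := client.Get("https://10.0.2.15:8443/healthz")
            if err == nil {
                healthy := resp.StatusCode == http.StatusOK
                resp.Body.Close()
                if healthy {
                    return true
                }
            }
            time.Sleep(2 * time.Second)
        }
        return false
    }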
	I0819 04:34:00.019785   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 04:34:00.032493   18442 logs.go:276] 2 containers: [c8fa750d9da6 3985c1d649a7]
	I0819 04:34:00.032594   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 04:34:00.043727   18442 logs.go:276] 2 containers: [02704fafe517 f269961c577e]
	I0819 04:34:00.043800   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 04:34:00.054208   18442 logs.go:276] 1 containers: [19b7edbe3a9f]
	I0819 04:34:00.054286   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 04:34:00.065178   18442 logs.go:276] 2 containers: [1b07c6af2b17 069748194d02]
	I0819 04:34:00.065258   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 04:34:00.075676   18442 logs.go:276] 1 containers: [140eb9e8ffdb]
	I0819 04:34:00.075753   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 04:34:00.087173   18442 logs.go:276] 2 containers: [da57f69785e6 390cd57e246c]
	I0819 04:34:00.087246   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 04:34:00.097126   18442 logs.go:276] 0 containers: []
	W0819 04:34:00.097137   18442 logs.go:278] No container was found matching "kindnet"
	I0819 04:34:00.097196   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 04:34:00.107623   18442 logs.go:276] 2 containers: [a7ed00b2340f b8b9fabe936d]
	I0819 04:34:00.107656   18442 logs.go:123] Gathering logs for etcd [02704fafe517] ...
	I0819 04:34:00.107663   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 02704fafe517"
	I0819 04:34:00.121454   18442 logs.go:123] Gathering logs for storage-provisioner [a7ed00b2340f] ...
	I0819 04:34:00.121465   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7ed00b2340f"
	I0819 04:34:00.132793   18442 logs.go:123] Gathering logs for coredns [19b7edbe3a9f] ...
	I0819 04:34:00.132805   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 19b7edbe3a9f"
	I0819 04:34:00.144771   18442 logs.go:123] Gathering logs for kube-scheduler [1b07c6af2b17] ...
	I0819 04:34:00.144785   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b07c6af2b17"
	I0819 04:34:00.156564   18442 logs.go:123] Gathering logs for Docker ...
	I0819 04:34:00.156577   18442 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 04:34:00.182677   18442 logs.go:123] Gathering logs for dmesg ...
	I0819 04:34:00.182686   18442 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 04:34:00.186939   18442 logs.go:123] Gathering logs for describe nodes ...
	I0819 04:34:00.186949   18442 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 04:34:00.283857   18442 logs.go:123] Gathering logs for etcd [f269961c577e] ...
	I0819 04:34:00.283870   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f269961c577e"
	I0819 04:34:00.299278   18442 logs.go:123] Gathering logs for kube-controller-manager [da57f69785e6] ...
	I0819 04:34:00.299289   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da57f69785e6"
	I0819 04:34:00.316555   18442 logs.go:123] Gathering logs for storage-provisioner [b8b9fabe936d] ...
	I0819 04:34:00.316565   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8b9fabe936d"
	I0819 04:34:00.328426   18442 logs.go:123] Gathering logs for container status ...
	I0819 04:34:00.328438   18442 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 04:34:00.340377   18442 logs.go:123] Gathering logs for kubelet ...
	I0819 04:34:00.340391   18442 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 04:34:00.377803   18442 logs.go:123] Gathering logs for kube-scheduler [069748194d02] ...
	I0819 04:34:00.377815   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 069748194d02"
	I0819 04:34:00.394919   18442 logs.go:123] Gathering logs for kube-proxy [140eb9e8ffdb] ...
	I0819 04:34:00.394934   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 140eb9e8ffdb"
	I0819 04:34:00.406805   18442 logs.go:123] Gathering logs for kube-apiserver [c8fa750d9da6] ...
	I0819 04:34:00.406819   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c8fa750d9da6"
	I0819 04:34:00.421364   18442 logs.go:123] Gathering logs for kube-apiserver [3985c1d649a7] ...
	I0819 04:34:00.421378   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3985c1d649a7"
	I0819 04:34:00.466537   18442 logs.go:123] Gathering logs for kube-controller-manager [390cd57e246c] ...
	I0819 04:34:00.466550   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 390cd57e246c"
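
Every diagnostic round from here on repeats the same fan-out: list containers per component with docker ps -a --filter=name=k8s_<component>, then pull the last 400 lines from each match. Sketched with the discovery step elided (the map keys and IDs stand in for whatever the filter returned):

    package sketch

    import (
        "fmt"
        "os/exec"
    )

    // Dump the tail of every discovered container's log, grouped by
    // component name.
    func gatherLogs(containers map[string][]string) {
        for name, ids := range containers {
            for _, id := range ids {
                out, _ := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
                fmt.Printf("== %s [%s] ==\n%s\n", name, id, out)
            }
        }
    }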
	I0819 04:34:02.985620   18442 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 04:34:07.987799   18442 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 04:34:07.988036   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 04:34:08.009161   18442 logs.go:276] 2 containers: [c8fa750d9da6 3985c1d649a7]
	I0819 04:34:08.009274   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 04:34:08.023266   18442 logs.go:276] 2 containers: [02704fafe517 f269961c577e]
	I0819 04:34:08.023341   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 04:34:08.036766   18442 logs.go:276] 1 containers: [19b7edbe3a9f]
	I0819 04:34:08.036831   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 04:34:08.047240   18442 logs.go:276] 2 containers: [1b07c6af2b17 069748194d02]
	I0819 04:34:08.047321   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 04:34:08.058161   18442 logs.go:276] 1 containers: [140eb9e8ffdb]
	I0819 04:34:08.058230   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 04:34:08.068808   18442 logs.go:276] 2 containers: [da57f69785e6 390cd57e246c]
	I0819 04:34:08.068880   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 04:34:08.088252   18442 logs.go:276] 0 containers: []
	W0819 04:34:08.088265   18442 logs.go:278] No container was found matching "kindnet"
	I0819 04:34:08.088332   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 04:34:08.104340   18442 logs.go:276] 2 containers: [a7ed00b2340f b8b9fabe936d]
	I0819 04:34:08.104363   18442 logs.go:123] Gathering logs for etcd [02704fafe517] ...
	I0819 04:34:08.104369   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 02704fafe517"
	I0819 04:34:08.117787   18442 logs.go:123] Gathering logs for container status ...
	I0819 04:34:08.117797   18442 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 04:34:08.130694   18442 logs.go:123] Gathering logs for kube-apiserver [3985c1d649a7] ...
	I0819 04:34:08.130706   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3985c1d649a7"
	I0819 04:34:08.169725   18442 logs.go:123] Gathering logs for kube-scheduler [1b07c6af2b17] ...
	I0819 04:34:08.169737   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b07c6af2b17"
	I0819 04:34:08.181726   18442 logs.go:123] Gathering logs for kube-controller-manager [da57f69785e6] ...
	I0819 04:34:08.181738   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da57f69785e6"
	I0819 04:34:08.199110   18442 logs.go:123] Gathering logs for storage-provisioner [b8b9fabe936d] ...
	I0819 04:34:08.199122   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8b9fabe936d"
	I0819 04:34:08.210677   18442 logs.go:123] Gathering logs for Docker ...
	I0819 04:34:08.210687   18442 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 04:34:08.234782   18442 logs.go:123] Gathering logs for kubelet ...
	I0819 04:34:08.234801   18442 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 04:34:08.274737   18442 logs.go:123] Gathering logs for coredns [19b7edbe3a9f] ...
	I0819 04:34:08.274751   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 19b7edbe3a9f"
	I0819 04:34:08.290974   18442 logs.go:123] Gathering logs for kube-scheduler [069748194d02] ...
	I0819 04:34:08.290986   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 069748194d02"
	I0819 04:34:08.310013   18442 logs.go:123] Gathering logs for kube-controller-manager [390cd57e246c] ...
	I0819 04:34:08.310026   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 390cd57e246c"
	I0819 04:34:08.325332   18442 logs.go:123] Gathering logs for storage-provisioner [a7ed00b2340f] ...
	I0819 04:34:08.325345   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7ed00b2340f"
	I0819 04:34:08.336755   18442 logs.go:123] Gathering logs for dmesg ...
	I0819 04:34:08.336767   18442 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 04:34:08.341532   18442 logs.go:123] Gathering logs for describe nodes ...
	I0819 04:34:08.341539   18442 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 04:34:08.377164   18442 logs.go:123] Gathering logs for kube-apiserver [c8fa750d9da6] ...
	I0819 04:34:08.377174   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c8fa750d9da6"
	I0819 04:34:08.391709   18442 logs.go:123] Gathering logs for etcd [f269961c577e] ...
	I0819 04:34:08.391721   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f269961c577e"
	I0819 04:34:08.406395   18442 logs.go:123] Gathering logs for kube-proxy [140eb9e8ffdb] ...
	I0819 04:34:08.406407   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 140eb9e8ffdb"
	I0819 04:34:10.920235   18442 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 04:34:15.922542   18442 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 04:34:15.922686   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 04:34:15.935441   18442 logs.go:276] 2 containers: [c8fa750d9da6 3985c1d649a7]
	I0819 04:34:15.935523   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 04:34:15.946701   18442 logs.go:276] 2 containers: [02704fafe517 f269961c577e]
	I0819 04:34:15.946766   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 04:34:15.957670   18442 logs.go:276] 1 containers: [19b7edbe3a9f]
	I0819 04:34:15.957737   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 04:34:15.968448   18442 logs.go:276] 2 containers: [1b07c6af2b17 069748194d02]
	I0819 04:34:15.968515   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 04:34:15.979365   18442 logs.go:276] 1 containers: [140eb9e8ffdb]
	I0819 04:34:15.979432   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 04:34:15.991182   18442 logs.go:276] 2 containers: [da57f69785e6 390cd57e246c]
	I0819 04:34:15.991254   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 04:34:16.002855   18442 logs.go:276] 0 containers: []
	W0819 04:34:16.002866   18442 logs.go:278] No container was found matching "kindnet"
	I0819 04:34:16.002922   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 04:34:16.013221   18442 logs.go:276] 2 containers: [a7ed00b2340f b8b9fabe936d]
	I0819 04:34:16.013239   18442 logs.go:123] Gathering logs for kubelet ...
	I0819 04:34:16.013245   18442 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 04:34:16.051098   18442 logs.go:123] Gathering logs for etcd [f269961c577e] ...
	I0819 04:34:16.051109   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f269961c577e"
	I0819 04:34:16.065992   18442 logs.go:123] Gathering logs for kube-scheduler [1b07c6af2b17] ...
	I0819 04:34:16.066008   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b07c6af2b17"
	I0819 04:34:16.078207   18442 logs.go:123] Gathering logs for kube-scheduler [069748194d02] ...
	I0819 04:34:16.078219   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 069748194d02"
	I0819 04:34:16.093927   18442 logs.go:123] Gathering logs for storage-provisioner [a7ed00b2340f] ...
	I0819 04:34:16.093939   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7ed00b2340f"
	I0819 04:34:16.105563   18442 logs.go:123] Gathering logs for storage-provisioner [b8b9fabe936d] ...
	I0819 04:34:16.105576   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8b9fabe936d"
	I0819 04:34:16.116694   18442 logs.go:123] Gathering logs for container status ...
	I0819 04:34:16.116707   18442 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 04:34:16.128758   18442 logs.go:123] Gathering logs for describe nodes ...
	I0819 04:34:16.128769   18442 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 04:34:16.164691   18442 logs.go:123] Gathering logs for kube-apiserver [c8fa750d9da6] ...
	I0819 04:34:16.164705   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c8fa750d9da6"
	I0819 04:34:16.182680   18442 logs.go:123] Gathering logs for coredns [19b7edbe3a9f] ...
	I0819 04:34:16.182692   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 19b7edbe3a9f"
	I0819 04:34:16.196071   18442 logs.go:123] Gathering logs for kube-proxy [140eb9e8ffdb] ...
	I0819 04:34:16.196085   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 140eb9e8ffdb"
	I0819 04:34:16.211563   18442 logs.go:123] Gathering logs for kube-controller-manager [390cd57e246c] ...
	I0819 04:34:16.211576   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 390cd57e246c"
	I0819 04:34:16.226481   18442 logs.go:123] Gathering logs for kube-apiserver [3985c1d649a7] ...
	I0819 04:34:16.226497   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3985c1d649a7"
	I0819 04:34:16.264555   18442 logs.go:123] Gathering logs for etcd [02704fafe517] ...
	I0819 04:34:16.264568   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 02704fafe517"
	I0819 04:34:16.278373   18442 logs.go:123] Gathering logs for kube-controller-manager [da57f69785e6] ...
	I0819 04:34:16.278387   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da57f69785e6"
	I0819 04:34:16.296220   18442 logs.go:123] Gathering logs for dmesg ...
	I0819 04:34:16.296233   18442 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 04:34:16.300749   18442 logs.go:123] Gathering logs for Docker ...
	I0819 04:34:16.300759   18442 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 04:34:18.826615   18442 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 04:34:23.828959   18442 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 04:34:23.829068   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 04:34:23.847257   18442 logs.go:276] 2 containers: [c8fa750d9da6 3985c1d649a7]
	I0819 04:34:23.847320   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 04:34:23.859740   18442 logs.go:276] 2 containers: [02704fafe517 f269961c577e]
	I0819 04:34:23.859799   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 04:34:23.869591   18442 logs.go:276] 1 containers: [19b7edbe3a9f]
	I0819 04:34:23.869666   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 04:34:23.880306   18442 logs.go:276] 2 containers: [1b07c6af2b17 069748194d02]
	I0819 04:34:23.880368   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 04:34:23.895422   18442 logs.go:276] 1 containers: [140eb9e8ffdb]
	I0819 04:34:23.895497   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 04:34:23.905763   18442 logs.go:276] 2 containers: [da57f69785e6 390cd57e246c]
	I0819 04:34:23.905836   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 04:34:23.916010   18442 logs.go:276] 0 containers: []
	W0819 04:34:23.916023   18442 logs.go:278] No container was found matching "kindnet"
	I0819 04:34:23.916083   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 04:34:23.926031   18442 logs.go:276] 2 containers: [a7ed00b2340f b8b9fabe936d]
	I0819 04:34:23.926051   18442 logs.go:123] Gathering logs for Docker ...
	I0819 04:34:23.926056   18442 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 04:34:23.950132   18442 logs.go:123] Gathering logs for dmesg ...
	I0819 04:34:23.950151   18442 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 04:34:23.954466   18442 logs.go:123] Gathering logs for kube-apiserver [c8fa750d9da6] ...
	I0819 04:34:23.954472   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c8fa750d9da6"
	I0819 04:34:23.968372   18442 logs.go:123] Gathering logs for kube-controller-manager [da57f69785e6] ...
	I0819 04:34:23.968382   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da57f69785e6"
	I0819 04:34:23.987468   18442 logs.go:123] Gathering logs for coredns [19b7edbe3a9f] ...
	I0819 04:34:23.987478   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 19b7edbe3a9f"
	I0819 04:34:23.998521   18442 logs.go:123] Gathering logs for kube-scheduler [1b07c6af2b17] ...
	I0819 04:34:23.998532   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b07c6af2b17"
	I0819 04:34:24.010709   18442 logs.go:123] Gathering logs for container status ...
	I0819 04:34:24.010719   18442 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 04:34:24.022325   18442 logs.go:123] Gathering logs for kubelet ...
	I0819 04:34:24.022337   18442 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 04:34:24.059790   18442 logs.go:123] Gathering logs for kube-apiserver [3985c1d649a7] ...
	I0819 04:34:24.059803   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3985c1d649a7"
	I0819 04:34:24.103654   18442 logs.go:123] Gathering logs for etcd [f269961c577e] ...
	I0819 04:34:24.103665   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f269961c577e"
	I0819 04:34:24.118328   18442 logs.go:123] Gathering logs for kube-controller-manager [390cd57e246c] ...
	I0819 04:34:24.118339   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 390cd57e246c"
	I0819 04:34:24.134496   18442 logs.go:123] Gathering logs for storage-provisioner [a7ed00b2340f] ...
	I0819 04:34:24.134507   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7ed00b2340f"
	I0819 04:34:24.151046   18442 logs.go:123] Gathering logs for kube-proxy [140eb9e8ffdb] ...
	I0819 04:34:24.151057   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 140eb9e8ffdb"
	I0819 04:34:24.162508   18442 logs.go:123] Gathering logs for storage-provisioner [b8b9fabe936d] ...
	I0819 04:34:24.162519   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8b9fabe936d"
	I0819 04:34:24.174110   18442 logs.go:123] Gathering logs for describe nodes ...
	I0819 04:34:24.174122   18442 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 04:34:24.209235   18442 logs.go:123] Gathering logs for etcd [02704fafe517] ...
	I0819 04:34:24.209247   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 02704fafe517"
	I0819 04:34:24.224037   18442 logs.go:123] Gathering logs for kube-scheduler [069748194d02] ...
	I0819 04:34:24.224047   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 069748194d02"
	I0819 04:34:26.741364   18442 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 04:34:31.743772   18442 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 04:34:31.743902   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 04:34:31.762113   18442 logs.go:276] 2 containers: [c8fa750d9da6 3985c1d649a7]
	I0819 04:34:31.762197   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 04:34:31.773523   18442 logs.go:276] 2 containers: [02704fafe517 f269961c577e]
	I0819 04:34:31.773595   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 04:34:31.788608   18442 logs.go:276] 1 containers: [19b7edbe3a9f]
	I0819 04:34:31.788677   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 04:34:31.799554   18442 logs.go:276] 2 containers: [1b07c6af2b17 069748194d02]
	I0819 04:34:31.799628   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 04:34:31.811799   18442 logs.go:276] 1 containers: [140eb9e8ffdb]
	I0819 04:34:31.811873   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 04:34:31.822535   18442 logs.go:276] 2 containers: [da57f69785e6 390cd57e246c]
	I0819 04:34:31.822607   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 04:34:31.832783   18442 logs.go:276] 0 containers: []
	W0819 04:34:31.832795   18442 logs.go:278] No container was found matching "kindnet"
	I0819 04:34:31.832852   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 04:34:31.843184   18442 logs.go:276] 2 containers: [a7ed00b2340f b8b9fabe936d]
	I0819 04:34:31.843202   18442 logs.go:123] Gathering logs for container status ...
	I0819 04:34:31.843207   18442 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 04:34:31.855621   18442 logs.go:123] Gathering logs for kubelet ...
	I0819 04:34:31.855632   18442 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 04:34:31.895345   18442 logs.go:123] Gathering logs for kube-apiserver [3985c1d649a7] ...
	I0819 04:34:31.895355   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3985c1d649a7"
	I0819 04:34:31.934367   18442 logs.go:123] Gathering logs for etcd [f269961c577e] ...
	I0819 04:34:31.934378   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f269961c577e"
	I0819 04:34:31.948209   18442 logs.go:123] Gathering logs for coredns [19b7edbe3a9f] ...
	I0819 04:34:31.948219   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 19b7edbe3a9f"
	I0819 04:34:31.959505   18442 logs.go:123] Gathering logs for kube-proxy [140eb9e8ffdb] ...
	I0819 04:34:31.959519   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 140eb9e8ffdb"
	I0819 04:34:31.971477   18442 logs.go:123] Gathering logs for kube-controller-manager [da57f69785e6] ...
	I0819 04:34:31.971487   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da57f69785e6"
	I0819 04:34:31.992707   18442 logs.go:123] Gathering logs for Docker ...
	I0819 04:34:31.992717   18442 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 04:34:32.018015   18442 logs.go:123] Gathering logs for dmesg ...
	I0819 04:34:32.018027   18442 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 04:34:32.022967   18442 logs.go:123] Gathering logs for kube-scheduler [069748194d02] ...
	I0819 04:34:32.022976   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 069748194d02"
	I0819 04:34:32.038548   18442 logs.go:123] Gathering logs for kube-controller-manager [390cd57e246c] ...
	I0819 04:34:32.038563   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 390cd57e246c"
	I0819 04:34:32.052995   18442 logs.go:123] Gathering logs for kube-apiserver [c8fa750d9da6] ...
	I0819 04:34:32.053006   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c8fa750d9da6"
	I0819 04:34:32.067027   18442 logs.go:123] Gathering logs for storage-provisioner [a7ed00b2340f] ...
	I0819 04:34:32.067037   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7ed00b2340f"
	I0819 04:34:32.078399   18442 logs.go:123] Gathering logs for storage-provisioner [b8b9fabe936d] ...
	I0819 04:34:32.078412   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8b9fabe936d"
	I0819 04:34:32.089557   18442 logs.go:123] Gathering logs for describe nodes ...
	I0819 04:34:32.089568   18442 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 04:34:32.124297   18442 logs.go:123] Gathering logs for etcd [02704fafe517] ...
	I0819 04:34:32.124310   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 02704fafe517"
	I0819 04:34:32.143114   18442 logs.go:123] Gathering logs for kube-scheduler [1b07c6af2b17] ...
	I0819 04:34:32.143127   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b07c6af2b17"
	I0819 04:34:34.655407   18442 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 04:34:39.655684   18442 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 04:34:39.655877   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 04:34:39.673989   18442 logs.go:276] 2 containers: [c8fa750d9da6 3985c1d649a7]
	I0819 04:34:39.674085   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 04:34:39.688664   18442 logs.go:276] 2 containers: [02704fafe517 f269961c577e]
	I0819 04:34:39.688737   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 04:34:39.700665   18442 logs.go:276] 1 containers: [19b7edbe3a9f]
	I0819 04:34:39.700740   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 04:34:39.711415   18442 logs.go:276] 2 containers: [1b07c6af2b17 069748194d02]
	I0819 04:34:39.711486   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 04:34:39.721216   18442 logs.go:276] 1 containers: [140eb9e8ffdb]
	I0819 04:34:39.721283   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 04:34:39.732286   18442 logs.go:276] 2 containers: [da57f69785e6 390cd57e246c]
	I0819 04:34:39.732352   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 04:34:39.742096   18442 logs.go:276] 0 containers: []
	W0819 04:34:39.742109   18442 logs.go:278] No container was found matching "kindnet"
	I0819 04:34:39.742160   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 04:34:39.752805   18442 logs.go:276] 2 containers: [a7ed00b2340f b8b9fabe936d]
	I0819 04:34:39.752825   18442 logs.go:123] Gathering logs for etcd [f269961c577e] ...
	I0819 04:34:39.752830   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f269961c577e"
	I0819 04:34:39.767375   18442 logs.go:123] Gathering logs for kube-controller-manager [da57f69785e6] ...
	I0819 04:34:39.767384   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da57f69785e6"
	I0819 04:34:39.784829   18442 logs.go:123] Gathering logs for describe nodes ...
	I0819 04:34:39.784838   18442 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 04:34:39.819547   18442 logs.go:123] Gathering logs for etcd [02704fafe517] ...
	I0819 04:34:39.819557   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 02704fafe517"
	I0819 04:34:39.833181   18442 logs.go:123] Gathering logs for kube-scheduler [1b07c6af2b17] ...
	I0819 04:34:39.833197   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b07c6af2b17"
	I0819 04:34:39.844948   18442 logs.go:123] Gathering logs for kube-scheduler [069748194d02] ...
	I0819 04:34:39.844958   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 069748194d02"
	I0819 04:34:39.860445   18442 logs.go:123] Gathering logs for Docker ...
	I0819 04:34:39.860455   18442 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 04:34:39.885724   18442 logs.go:123] Gathering logs for kube-apiserver [c8fa750d9da6] ...
	I0819 04:34:39.885735   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c8fa750d9da6"
	I0819 04:34:39.899752   18442 logs.go:123] Gathering logs for kube-apiserver [3985c1d649a7] ...
	I0819 04:34:39.899762   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3985c1d649a7"
	I0819 04:34:39.938917   18442 logs.go:123] Gathering logs for coredns [19b7edbe3a9f] ...
	I0819 04:34:39.938932   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 19b7edbe3a9f"
	I0819 04:34:39.949776   18442 logs.go:123] Gathering logs for kube-proxy [140eb9e8ffdb] ...
	I0819 04:34:39.949788   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 140eb9e8ffdb"
	I0819 04:34:39.961470   18442 logs.go:123] Gathering logs for storage-provisioner [b8b9fabe936d] ...
	I0819 04:34:39.961480   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8b9fabe936d"
	I0819 04:34:39.972577   18442 logs.go:123] Gathering logs for kubelet ...
	I0819 04:34:39.972589   18442 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 04:34:40.011616   18442 logs.go:123] Gathering logs for dmesg ...
	I0819 04:34:40.011625   18442 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 04:34:40.016844   18442 logs.go:123] Gathering logs for kube-controller-manager [390cd57e246c] ...
	I0819 04:34:40.016852   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 390cd57e246c"
	I0819 04:34:40.034314   18442 logs.go:123] Gathering logs for storage-provisioner [a7ed00b2340f] ...
	I0819 04:34:40.034328   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7ed00b2340f"
	I0819 04:34:40.045593   18442 logs.go:123] Gathering logs for container status ...
	I0819 04:34:40.045604   18442 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 04:34:42.558485   18442 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 04:34:47.560749   18442 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 04:34:47.560957   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 04:34:47.577548   18442 logs.go:276] 2 containers: [c8fa750d9da6 3985c1d649a7]
	I0819 04:34:47.577637   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 04:34:47.590867   18442 logs.go:276] 2 containers: [02704fafe517 f269961c577e]
	I0819 04:34:47.590946   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 04:34:47.606573   18442 logs.go:276] 1 containers: [19b7edbe3a9f]
	I0819 04:34:47.606646   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 04:34:47.617803   18442 logs.go:276] 2 containers: [1b07c6af2b17 069748194d02]
	I0819 04:34:47.617879   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 04:34:47.629212   18442 logs.go:276] 1 containers: [140eb9e8ffdb]
	I0819 04:34:47.629275   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 04:34:47.639908   18442 logs.go:276] 2 containers: [da57f69785e6 390cd57e246c]
	I0819 04:34:47.639967   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 04:34:47.650490   18442 logs.go:276] 0 containers: []
	W0819 04:34:47.650500   18442 logs.go:278] No container was found matching "kindnet"
	I0819 04:34:47.650551   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 04:34:47.660970   18442 logs.go:276] 2 containers: [a7ed00b2340f b8b9fabe936d]
	I0819 04:34:47.660995   18442 logs.go:123] Gathering logs for describe nodes ...
	I0819 04:34:47.661001   18442 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 04:34:47.695143   18442 logs.go:123] Gathering logs for kube-proxy [140eb9e8ffdb] ...
	I0819 04:34:47.695156   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 140eb9e8ffdb"
	I0819 04:34:47.706782   18442 logs.go:123] Gathering logs for Docker ...
	I0819 04:34:47.706795   18442 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 04:34:47.730158   18442 logs.go:123] Gathering logs for dmesg ...
	I0819 04:34:47.730167   18442 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 04:34:47.734137   18442 logs.go:123] Gathering logs for etcd [02704fafe517] ...
	I0819 04:34:47.734145   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 02704fafe517"
	I0819 04:34:47.748834   18442 logs.go:123] Gathering logs for kube-scheduler [069748194d02] ...
	I0819 04:34:47.748847   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 069748194d02"
	I0819 04:34:47.764020   18442 logs.go:123] Gathering logs for storage-provisioner [b8b9fabe936d] ...
	I0819 04:34:47.764030   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8b9fabe936d"
	I0819 04:34:47.775981   18442 logs.go:123] Gathering logs for kube-apiserver [c8fa750d9da6] ...
	I0819 04:34:47.775992   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c8fa750d9da6"
	I0819 04:34:47.790136   18442 logs.go:123] Gathering logs for etcd [f269961c577e] ...
	I0819 04:34:47.790147   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f269961c577e"
	I0819 04:34:47.809138   18442 logs.go:123] Gathering logs for coredns [19b7edbe3a9f] ...
	I0819 04:34:47.809150   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 19b7edbe3a9f"
	I0819 04:34:47.820140   18442 logs.go:123] Gathering logs for kube-controller-manager [390cd57e246c] ...
	I0819 04:34:47.820151   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 390cd57e246c"
	I0819 04:34:47.833994   18442 logs.go:123] Gathering logs for storage-provisioner [a7ed00b2340f] ...
	I0819 04:34:47.834005   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7ed00b2340f"
	I0819 04:34:47.847028   18442 logs.go:123] Gathering logs for container status ...
	I0819 04:34:47.847040   18442 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 04:34:47.859394   18442 logs.go:123] Gathering logs for kubelet ...
	I0819 04:34:47.859405   18442 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 04:34:47.899042   18442 logs.go:123] Gathering logs for kube-apiserver [3985c1d649a7] ...
	I0819 04:34:47.899051   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3985c1d649a7"
	I0819 04:34:47.937815   18442 logs.go:123] Gathering logs for kube-scheduler [1b07c6af2b17] ...
	I0819 04:34:47.937829   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b07c6af2b17"
	I0819 04:34:47.949828   18442 logs.go:123] Gathering logs for kube-controller-manager [da57f69785e6] ...
	I0819 04:34:47.949839   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da57f69785e6"
	I0819 04:34:50.469275   18442 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 04:34:55.471684   18442 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 04:34:55.471815   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 04:34:55.488517   18442 logs.go:276] 2 containers: [c8fa750d9da6 3985c1d649a7]
	I0819 04:34:55.488597   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 04:34:55.501300   18442 logs.go:276] 2 containers: [02704fafe517 f269961c577e]
	I0819 04:34:55.501380   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 04:34:55.512479   18442 logs.go:276] 1 containers: [19b7edbe3a9f]
	I0819 04:34:55.512553   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 04:34:55.522950   18442 logs.go:276] 2 containers: [1b07c6af2b17 069748194d02]
	I0819 04:34:55.523018   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 04:34:55.533283   18442 logs.go:276] 1 containers: [140eb9e8ffdb]
	I0819 04:34:55.533350   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 04:34:55.543771   18442 logs.go:276] 2 containers: [da57f69785e6 390cd57e246c]
	I0819 04:34:55.543841   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 04:34:55.554479   18442 logs.go:276] 0 containers: []
	W0819 04:34:55.554493   18442 logs.go:278] No container was found matching "kindnet"
	I0819 04:34:55.554553   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 04:34:55.569630   18442 logs.go:276] 2 containers: [a7ed00b2340f b8b9fabe936d]
	I0819 04:34:55.569648   18442 logs.go:123] Gathering logs for kubelet ...
	I0819 04:34:55.569654   18442 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 04:34:55.611513   18442 logs.go:123] Gathering logs for kube-apiserver [c8fa750d9da6] ...
	I0819 04:34:55.611523   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c8fa750d9da6"
	I0819 04:34:55.625326   18442 logs.go:123] Gathering logs for storage-provisioner [b8b9fabe936d] ...
	I0819 04:34:55.625336   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8b9fabe936d"
	I0819 04:34:55.639365   18442 logs.go:123] Gathering logs for describe nodes ...
	I0819 04:34:55.639375   18442 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 04:34:55.676268   18442 logs.go:123] Gathering logs for etcd [02704fafe517] ...
	I0819 04:34:55.676279   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 02704fafe517"
	I0819 04:34:55.690254   18442 logs.go:123] Gathering logs for kube-scheduler [1b07c6af2b17] ...
	I0819 04:34:55.690264   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b07c6af2b17"
	I0819 04:34:55.703894   18442 logs.go:123] Gathering logs for kube-controller-manager [da57f69785e6] ...
	I0819 04:34:55.703904   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da57f69785e6"
	I0819 04:34:55.721541   18442 logs.go:123] Gathering logs for kube-controller-manager [390cd57e246c] ...
	I0819 04:34:55.721551   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 390cd57e246c"
	I0819 04:34:55.736561   18442 logs.go:123] Gathering logs for kube-scheduler [069748194d02] ...
	I0819 04:34:55.736582   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 069748194d02"
	I0819 04:34:55.752060   18442 logs.go:123] Gathering logs for Docker ...
	I0819 04:34:55.752070   18442 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 04:34:55.777112   18442 logs.go:123] Gathering logs for container status ...
	I0819 04:34:55.777123   18442 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 04:34:55.789008   18442 logs.go:123] Gathering logs for dmesg ...
	I0819 04:34:55.789022   18442 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 04:34:55.793032   18442 logs.go:123] Gathering logs for kube-apiserver [3985c1d649a7] ...
	I0819 04:34:55.793041   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3985c1d649a7"
	I0819 04:34:55.831185   18442 logs.go:123] Gathering logs for etcd [f269961c577e] ...
	I0819 04:34:55.831198   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f269961c577e"
	I0819 04:34:55.845460   18442 logs.go:123] Gathering logs for coredns [19b7edbe3a9f] ...
	I0819 04:34:55.845473   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 19b7edbe3a9f"
	I0819 04:34:55.856210   18442 logs.go:123] Gathering logs for kube-proxy [140eb9e8ffdb] ...
	I0819 04:34:55.856220   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 140eb9e8ffdb"
	I0819 04:34:55.871481   18442 logs.go:123] Gathering logs for storage-provisioner [a7ed00b2340f] ...
	I0819 04:34:55.871492   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7ed00b2340f"
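
Each retry re-discovers the container IDs with `docker ps -a --filter=name=k8s_<component> --format={{.ID}}`, which produces the `logs.go:276] N containers: [...]` lines above (and the `logs.go:278` warning when a component such as kindnet has no match). A rough, self-contained sketch of that lookup — component names are taken from the log; the helper itself is illustrative:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // containerIDs lists IDs of all containers, running or exited, whose
    // name matches the k8s_<component> prefix used in the log above.
    func containerIDs(component string) ([]string, error) {
    	out, err := exec.Command("docker", "ps", "-a",
    		"--filter", "name=k8s_"+component,
    		"--format", "{{.ID}}").Output()
    	if err != nil {
    		return nil, err
    	}
    	return strings.Fields(string(out)), nil
    }

    func main() {
    	for _, c := range []string{"kube-apiserver", "etcd", "coredns", "kindnet"} {
    		ids, err := containerIDs(c)
    		if err != nil {
    			fmt.Println(c, "lookup failed:", err)
    			continue
    		}
    		// Mirrors the shape of the logs.go:276 output lines.
    		fmt.Printf("%d containers: %v\n", len(ids), ids)
    	}
    }
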
	I0819 04:34:58.384436   18442 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 04:35:03.386640   18442 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 04:35:03.386783   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 04:35:03.403575   18442 logs.go:276] 2 containers: [c8fa750d9da6 3985c1d649a7]
	I0819 04:35:03.403653   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 04:35:03.419259   18442 logs.go:276] 2 containers: [02704fafe517 f269961c577e]
	I0819 04:35:03.419326   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 04:35:03.430348   18442 logs.go:276] 1 containers: [19b7edbe3a9f]
	I0819 04:35:03.430424   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 04:35:03.442142   18442 logs.go:276] 2 containers: [1b07c6af2b17 069748194d02]
	I0819 04:35:03.442224   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 04:35:03.453047   18442 logs.go:276] 1 containers: [140eb9e8ffdb]
	I0819 04:35:03.453121   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 04:35:03.463549   18442 logs.go:276] 2 containers: [da57f69785e6 390cd57e246c]
	I0819 04:35:03.463621   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 04:35:03.474093   18442 logs.go:276] 0 containers: []
	W0819 04:35:03.474105   18442 logs.go:278] No container was found matching "kindnet"
	I0819 04:35:03.474159   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 04:35:03.484690   18442 logs.go:276] 2 containers: [a7ed00b2340f b8b9fabe936d]
	I0819 04:35:03.484709   18442 logs.go:123] Gathering logs for describe nodes ...
	I0819 04:35:03.484715   18442 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 04:35:03.520753   18442 logs.go:123] Gathering logs for etcd [f269961c577e] ...
	I0819 04:35:03.520766   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f269961c577e"
	I0819 04:35:03.540239   18442 logs.go:123] Gathering logs for kube-scheduler [069748194d02] ...
	I0819 04:35:03.540251   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 069748194d02"
	I0819 04:35:03.555293   18442 logs.go:123] Gathering logs for Docker ...
	I0819 04:35:03.555303   18442 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 04:35:03.580655   18442 logs.go:123] Gathering logs for kube-apiserver [c8fa750d9da6] ...
	I0819 04:35:03.580666   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c8fa750d9da6"
	I0819 04:35:03.595111   18442 logs.go:123] Gathering logs for etcd [02704fafe517] ...
	I0819 04:35:03.595124   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 02704fafe517"
	I0819 04:35:03.609681   18442 logs.go:123] Gathering logs for coredns [19b7edbe3a9f] ...
	I0819 04:35:03.609691   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 19b7edbe3a9f"
	I0819 04:35:03.620873   18442 logs.go:123] Gathering logs for kube-proxy [140eb9e8ffdb] ...
	I0819 04:35:03.620884   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 140eb9e8ffdb"
	I0819 04:35:03.632546   18442 logs.go:123] Gathering logs for storage-provisioner [a7ed00b2340f] ...
	I0819 04:35:03.632557   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7ed00b2340f"
	I0819 04:35:03.644283   18442 logs.go:123] Gathering logs for dmesg ...
	I0819 04:35:03.644296   18442 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 04:35:03.648613   18442 logs.go:123] Gathering logs for kube-apiserver [3985c1d649a7] ...
	I0819 04:35:03.648622   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3985c1d649a7"
	I0819 04:35:03.687904   18442 logs.go:123] Gathering logs for kube-controller-manager [da57f69785e6] ...
	I0819 04:35:03.687915   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da57f69785e6"
	I0819 04:35:03.705455   18442 logs.go:123] Gathering logs for storage-provisioner [b8b9fabe936d] ...
	I0819 04:35:03.705465   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8b9fabe936d"
	I0819 04:35:03.716783   18442 logs.go:123] Gathering logs for container status ...
	I0819 04:35:03.716793   18442 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 04:35:03.728984   18442 logs.go:123] Gathering logs for kubelet ...
	I0819 04:35:03.728997   18442 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 04:35:03.769323   18442 logs.go:123] Gathering logs for kube-scheduler [1b07c6af2b17] ...
	I0819 04:35:03.769336   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b07c6af2b17"
	I0819 04:35:03.781313   18442 logs.go:123] Gathering logs for kube-controller-manager [390cd57e246c] ...
	I0819 04:35:03.781322   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 390cd57e246c"
	I0819 04:35:06.297846   18442 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 04:35:11.298240   18442 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 04:35:11.298407   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 04:35:11.315941   18442 logs.go:276] 2 containers: [c8fa750d9da6 3985c1d649a7]
	I0819 04:35:11.316037   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 04:35:11.329774   18442 logs.go:276] 2 containers: [02704fafe517 f269961c577e]
	I0819 04:35:11.329845   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 04:35:11.340745   18442 logs.go:276] 1 containers: [19b7edbe3a9f]
	I0819 04:35:11.340809   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 04:35:11.351124   18442 logs.go:276] 2 containers: [1b07c6af2b17 069748194d02]
	I0819 04:35:11.351190   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 04:35:11.361978   18442 logs.go:276] 1 containers: [140eb9e8ffdb]
	I0819 04:35:11.362050   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 04:35:11.372315   18442 logs.go:276] 2 containers: [da57f69785e6 390cd57e246c]
	I0819 04:35:11.372378   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 04:35:11.383279   18442 logs.go:276] 0 containers: []
	W0819 04:35:11.383292   18442 logs.go:278] No container was found matching "kindnet"
	I0819 04:35:11.383352   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 04:35:11.394062   18442 logs.go:276] 2 containers: [a7ed00b2340f b8b9fabe936d]
	I0819 04:35:11.394079   18442 logs.go:123] Gathering logs for kubelet ...
	I0819 04:35:11.394083   18442 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 04:35:11.431692   18442 logs.go:123] Gathering logs for dmesg ...
	I0819 04:35:11.431703   18442 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 04:35:11.436429   18442 logs.go:123] Gathering logs for kube-proxy [140eb9e8ffdb] ...
	I0819 04:35:11.436436   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 140eb9e8ffdb"
	I0819 04:35:11.449174   18442 logs.go:123] Gathering logs for Docker ...
	I0819 04:35:11.449184   18442 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 04:35:11.473333   18442 logs.go:123] Gathering logs for kube-scheduler [069748194d02] ...
	I0819 04:35:11.473342   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 069748194d02"
	I0819 04:35:11.491002   18442 logs.go:123] Gathering logs for kube-controller-manager [da57f69785e6] ...
	I0819 04:35:11.491016   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da57f69785e6"
	I0819 04:35:11.508688   18442 logs.go:123] Gathering logs for kube-controller-manager [390cd57e246c] ...
	I0819 04:35:11.508698   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 390cd57e246c"
	I0819 04:35:11.523049   18442 logs.go:123] Gathering logs for describe nodes ...
	I0819 04:35:11.523061   18442 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 04:35:11.559359   18442 logs.go:123] Gathering logs for kube-apiserver [c8fa750d9da6] ...
	I0819 04:35:11.559370   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c8fa750d9da6"
	I0819 04:35:11.581347   18442 logs.go:123] Gathering logs for kube-apiserver [3985c1d649a7] ...
	I0819 04:35:11.581357   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3985c1d649a7"
	I0819 04:35:11.619761   18442 logs.go:123] Gathering logs for etcd [f269961c577e] ...
	I0819 04:35:11.619776   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f269961c577e"
	I0819 04:35:11.634493   18442 logs.go:123] Gathering logs for kube-scheduler [1b07c6af2b17] ...
	I0819 04:35:11.634503   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b07c6af2b17"
	I0819 04:35:11.647260   18442 logs.go:123] Gathering logs for storage-provisioner [b8b9fabe936d] ...
	I0819 04:35:11.647271   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8b9fabe936d"
	I0819 04:35:11.658209   18442 logs.go:123] Gathering logs for container status ...
	I0819 04:35:11.658223   18442 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 04:35:11.670047   18442 logs.go:123] Gathering logs for etcd [02704fafe517] ...
	I0819 04:35:11.670057   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 02704fafe517"
	I0819 04:35:11.683942   18442 logs.go:123] Gathering logs for coredns [19b7edbe3a9f] ...
	I0819 04:35:11.683952   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 19b7edbe3a9f"
	I0819 04:35:11.695508   18442 logs.go:123] Gathering logs for storage-provisioner [a7ed00b2340f] ...
	I0819 04:35:11.695520   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7ed00b2340f"
	I0819 04:35:14.210357   18442 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 04:35:19.212651   18442 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 04:35:19.212929   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 04:35:19.238359   18442 logs.go:276] 2 containers: [c8fa750d9da6 3985c1d649a7]
	I0819 04:35:19.238471   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 04:35:19.260197   18442 logs.go:276] 2 containers: [02704fafe517 f269961c577e]
	I0819 04:35:19.260300   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 04:35:19.272311   18442 logs.go:276] 1 containers: [19b7edbe3a9f]
	I0819 04:35:19.272380   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 04:35:19.283833   18442 logs.go:276] 2 containers: [1b07c6af2b17 069748194d02]
	I0819 04:35:19.283903   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 04:35:19.297832   18442 logs.go:276] 1 containers: [140eb9e8ffdb]
	I0819 04:35:19.297896   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 04:35:19.308193   18442 logs.go:276] 2 containers: [da57f69785e6 390cd57e246c]
	I0819 04:35:19.308266   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 04:35:19.318281   18442 logs.go:276] 0 containers: []
	W0819 04:35:19.318293   18442 logs.go:278] No container was found matching "kindnet"
	I0819 04:35:19.318353   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 04:35:19.333629   18442 logs.go:276] 2 containers: [a7ed00b2340f b8b9fabe936d]
	I0819 04:35:19.333651   18442 logs.go:123] Gathering logs for kubelet ...
	I0819 04:35:19.333657   18442 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 04:35:19.371102   18442 logs.go:123] Gathering logs for describe nodes ...
	I0819 04:35:19.371117   18442 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 04:35:19.406698   18442 logs.go:123] Gathering logs for kube-apiserver [3985c1d649a7] ...
	I0819 04:35:19.406712   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3985c1d649a7"
	I0819 04:35:19.444716   18442 logs.go:123] Gathering logs for coredns [19b7edbe3a9f] ...
	I0819 04:35:19.444734   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 19b7edbe3a9f"
	I0819 04:35:19.456664   18442 logs.go:123] Gathering logs for kube-scheduler [1b07c6af2b17] ...
	I0819 04:35:19.456676   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b07c6af2b17"
	I0819 04:35:19.469183   18442 logs.go:123] Gathering logs for kube-scheduler [069748194d02] ...
	I0819 04:35:19.469193   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 069748194d02"
	I0819 04:35:19.486088   18442 logs.go:123] Gathering logs for dmesg ...
	I0819 04:35:19.486098   18442 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 04:35:19.490681   18442 logs.go:123] Gathering logs for etcd [f269961c577e] ...
	I0819 04:35:19.490690   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f269961c577e"
	I0819 04:35:19.505406   18442 logs.go:123] Gathering logs for kube-proxy [140eb9e8ffdb] ...
	I0819 04:35:19.505415   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 140eb9e8ffdb"
	I0819 04:35:19.516955   18442 logs.go:123] Gathering logs for kube-controller-manager [da57f69785e6] ...
	I0819 04:35:19.516967   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da57f69785e6"
	I0819 04:35:19.533991   18442 logs.go:123] Gathering logs for kube-controller-manager [390cd57e246c] ...
	I0819 04:35:19.534004   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 390cd57e246c"
	I0819 04:35:19.548532   18442 logs.go:123] Gathering logs for storage-provisioner [a7ed00b2340f] ...
	I0819 04:35:19.548545   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7ed00b2340f"
	I0819 04:35:19.560063   18442 logs.go:123] Gathering logs for storage-provisioner [b8b9fabe936d] ...
	I0819 04:35:19.560076   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8b9fabe936d"
	I0819 04:35:19.571467   18442 logs.go:123] Gathering logs for kube-apiserver [c8fa750d9da6] ...
	I0819 04:35:19.571481   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c8fa750d9da6"
	I0819 04:35:19.587714   18442 logs.go:123] Gathering logs for etcd [02704fafe517] ...
	I0819 04:35:19.587724   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 02704fafe517"
	I0819 04:35:19.601821   18442 logs.go:123] Gathering logs for Docker ...
	I0819 04:35:19.601833   18442 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 04:35:19.625957   18442 logs.go:123] Gathering logs for container status ...
	I0819 04:35:19.625969   18442 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
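
After discovery, each container's recent output is captured with `docker logs --tail 400 <id>`, alongside `journalctl` for the kubelet and Docker units, `dmesg` for the host ring buffer, and `kubectl describe nodes`. A small sketch of the per-container step only — the IDs are copied from the lines above and will differ on another run; this is not minikube's actual logs.go implementation:

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // gatherLogs tails the last 400 lines of each container, matching the
    // "Gathering logs for <name> [<id>] ..." entries in the report.
    func gatherLogs(name string, ids []string) {
    	for _, id := range ids {
    		fmt.Printf("Gathering logs for %s [%s] ...\n", name, id)
    		out, err := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
    		if err != nil {
    			fmt.Println("  failed:", err)
    			continue
    		}
    		fmt.Printf("  captured %d bytes\n", len(out))
    	}
    }

    func main() {
    	gatherLogs("kube-apiserver", []string{"c8fa750d9da6", "3985c1d649a7"})
    	gatherLogs("etcd", []string{"02704fafe517", "f269961c577e"})
    }
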
	I0819 04:35:22.141359   18442 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 04:35:27.143607   18442 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 04:35:27.143997   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 04:35:27.181552   18442 logs.go:276] 2 containers: [c8fa750d9da6 3985c1d649a7]
	I0819 04:35:27.181662   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 04:35:27.201160   18442 logs.go:276] 2 containers: [02704fafe517 f269961c577e]
	I0819 04:35:27.201247   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 04:35:27.214349   18442 logs.go:276] 1 containers: [19b7edbe3a9f]
	I0819 04:35:27.214418   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 04:35:27.226533   18442 logs.go:276] 2 containers: [1b07c6af2b17 069748194d02]
	I0819 04:35:27.226608   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 04:35:27.239328   18442 logs.go:276] 1 containers: [140eb9e8ffdb]
	I0819 04:35:27.239393   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 04:35:27.255257   18442 logs.go:276] 2 containers: [da57f69785e6 390cd57e246c]
	I0819 04:35:27.255330   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 04:35:27.265149   18442 logs.go:276] 0 containers: []
	W0819 04:35:27.265160   18442 logs.go:278] No container was found matching "kindnet"
	I0819 04:35:27.265219   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 04:35:27.276243   18442 logs.go:276] 2 containers: [a7ed00b2340f b8b9fabe936d]
	I0819 04:35:27.276260   18442 logs.go:123] Gathering logs for Docker ...
	I0819 04:35:27.276265   18442 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 04:35:27.299393   18442 logs.go:123] Gathering logs for dmesg ...
	I0819 04:35:27.299402   18442 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 04:35:27.303911   18442 logs.go:123] Gathering logs for kube-scheduler [069748194d02] ...
	I0819 04:35:27.303918   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 069748194d02"
	I0819 04:35:27.319487   18442 logs.go:123] Gathering logs for kube-controller-manager [da57f69785e6] ...
	I0819 04:35:27.319497   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da57f69785e6"
	I0819 04:35:27.336707   18442 logs.go:123] Gathering logs for storage-provisioner [a7ed00b2340f] ...
	I0819 04:35:27.336716   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7ed00b2340f"
	I0819 04:35:27.349570   18442 logs.go:123] Gathering logs for storage-provisioner [b8b9fabe936d] ...
	I0819 04:35:27.349579   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8b9fabe936d"
	I0819 04:35:27.361048   18442 logs.go:123] Gathering logs for container status ...
	I0819 04:35:27.361058   18442 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 04:35:27.373323   18442 logs.go:123] Gathering logs for coredns [19b7edbe3a9f] ...
	I0819 04:35:27.373333   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 19b7edbe3a9f"
	I0819 04:35:27.385185   18442 logs.go:123] Gathering logs for kube-proxy [140eb9e8ffdb] ...
	I0819 04:35:27.385198   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 140eb9e8ffdb"
	I0819 04:35:27.397134   18442 logs.go:123] Gathering logs for describe nodes ...
	I0819 04:35:27.397146   18442 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 04:35:27.433199   18442 logs.go:123] Gathering logs for kube-apiserver [3985c1d649a7] ...
	I0819 04:35:27.433209   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3985c1d649a7"
	I0819 04:35:27.471924   18442 logs.go:123] Gathering logs for etcd [02704fafe517] ...
	I0819 04:35:27.471935   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 02704fafe517"
	I0819 04:35:27.486484   18442 logs.go:123] Gathering logs for kube-scheduler [1b07c6af2b17] ...
	I0819 04:35:27.486493   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b07c6af2b17"
	I0819 04:35:27.497819   18442 logs.go:123] Gathering logs for kube-controller-manager [390cd57e246c] ...
	I0819 04:35:27.497831   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 390cd57e246c"
	I0819 04:35:27.512676   18442 logs.go:123] Gathering logs for kubelet ...
	I0819 04:35:27.512689   18442 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 04:35:27.551755   18442 logs.go:123] Gathering logs for kube-apiserver [c8fa750d9da6] ...
	I0819 04:35:27.551766   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c8fa750d9da6"
	I0819 04:35:27.566211   18442 logs.go:123] Gathering logs for etcd [f269961c577e] ...
	I0819 04:35:27.566223   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f269961c577e"
	I0819 04:35:30.084610   18442 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 04:35:35.086846   18442 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 04:35:35.087030   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 04:35:35.106085   18442 logs.go:276] 2 containers: [c8fa750d9da6 3985c1d649a7]
	I0819 04:35:35.106183   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 04:35:35.125365   18442 logs.go:276] 2 containers: [02704fafe517 f269961c577e]
	I0819 04:35:35.125439   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 04:35:35.136536   18442 logs.go:276] 1 containers: [19b7edbe3a9f]
	I0819 04:35:35.136610   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 04:35:35.148428   18442 logs.go:276] 2 containers: [1b07c6af2b17 069748194d02]
	I0819 04:35:35.148503   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 04:35:35.161947   18442 logs.go:276] 1 containers: [140eb9e8ffdb]
	I0819 04:35:35.162014   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 04:35:35.172461   18442 logs.go:276] 2 containers: [da57f69785e6 390cd57e246c]
	I0819 04:35:35.172540   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 04:35:35.183237   18442 logs.go:276] 0 containers: []
	W0819 04:35:35.183248   18442 logs.go:278] No container was found matching "kindnet"
	I0819 04:35:35.183307   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 04:35:35.197723   18442 logs.go:276] 2 containers: [a7ed00b2340f b8b9fabe936d]
	I0819 04:35:35.197744   18442 logs.go:123] Gathering logs for etcd [02704fafe517] ...
	I0819 04:35:35.197750   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 02704fafe517"
	I0819 04:35:35.216259   18442 logs.go:123] Gathering logs for kube-controller-manager [390cd57e246c] ...
	I0819 04:35:35.216270   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 390cd57e246c"
	I0819 04:35:35.230912   18442 logs.go:123] Gathering logs for storage-provisioner [b8b9fabe936d] ...
	I0819 04:35:35.230922   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8b9fabe936d"
	I0819 04:35:35.242515   18442 logs.go:123] Gathering logs for Docker ...
	I0819 04:35:35.242526   18442 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 04:35:35.265868   18442 logs.go:123] Gathering logs for container status ...
	I0819 04:35:35.265882   18442 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 04:35:35.278371   18442 logs.go:123] Gathering logs for kube-proxy [140eb9e8ffdb] ...
	I0819 04:35:35.278384   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 140eb9e8ffdb"
	I0819 04:35:35.290519   18442 logs.go:123] Gathering logs for kube-controller-manager [da57f69785e6] ...
	I0819 04:35:35.290531   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da57f69785e6"
	I0819 04:35:35.307948   18442 logs.go:123] Gathering logs for kubelet ...
	I0819 04:35:35.307958   18442 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 04:35:35.345850   18442 logs.go:123] Gathering logs for dmesg ...
	I0819 04:35:35.345861   18442 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 04:35:35.350077   18442 logs.go:123] Gathering logs for kube-apiserver [3985c1d649a7] ...
	I0819 04:35:35.350083   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3985c1d649a7"
	I0819 04:35:35.388014   18442 logs.go:123] Gathering logs for etcd [f269961c577e] ...
	I0819 04:35:35.388024   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f269961c577e"
	I0819 04:35:35.403028   18442 logs.go:123] Gathering logs for coredns [19b7edbe3a9f] ...
	I0819 04:35:35.403042   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 19b7edbe3a9f"
	I0819 04:35:35.416711   18442 logs.go:123] Gathering logs for kube-scheduler [1b07c6af2b17] ...
	I0819 04:35:35.416726   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b07c6af2b17"
	I0819 04:35:35.428563   18442 logs.go:123] Gathering logs for kube-scheduler [069748194d02] ...
	I0819 04:35:35.428576   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 069748194d02"
	I0819 04:35:35.454801   18442 logs.go:123] Gathering logs for describe nodes ...
	I0819 04:35:35.454814   18442 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 04:35:35.490300   18442 logs.go:123] Gathering logs for kube-apiserver [c8fa750d9da6] ...
	I0819 04:35:35.490310   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c8fa750d9da6"
	I0819 04:35:35.505361   18442 logs.go:123] Gathering logs for storage-provisioner [a7ed00b2340f] ...
	I0819 04:35:35.505372   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7ed00b2340f"
	I0819 04:35:38.021883   18442 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 04:35:43.024207   18442 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 04:35:43.024501   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 04:35:43.049151   18442 logs.go:276] 2 containers: [c8fa750d9da6 3985c1d649a7]
	I0819 04:35:43.049282   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 04:35:43.066131   18442 logs.go:276] 2 containers: [02704fafe517 f269961c577e]
	I0819 04:35:43.066224   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 04:35:43.079722   18442 logs.go:276] 1 containers: [19b7edbe3a9f]
	I0819 04:35:43.079799   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 04:35:43.091123   18442 logs.go:276] 2 containers: [1b07c6af2b17 069748194d02]
	I0819 04:35:43.091190   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 04:35:43.101649   18442 logs.go:276] 1 containers: [140eb9e8ffdb]
	I0819 04:35:43.101718   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 04:35:43.111815   18442 logs.go:276] 2 containers: [da57f69785e6 390cd57e246c]
	I0819 04:35:43.111888   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 04:35:43.122631   18442 logs.go:276] 0 containers: []
	W0819 04:35:43.122643   18442 logs.go:278] No container was found matching "kindnet"
	I0819 04:35:43.122698   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 04:35:43.144952   18442 logs.go:276] 2 containers: [a7ed00b2340f b8b9fabe936d]
	I0819 04:35:43.144969   18442 logs.go:123] Gathering logs for kube-apiserver [c8fa750d9da6] ...
	I0819 04:35:43.144975   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c8fa750d9da6"
	I0819 04:35:43.160088   18442 logs.go:123] Gathering logs for etcd [02704fafe517] ...
	I0819 04:35:43.160100   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 02704fafe517"
	I0819 04:35:43.174306   18442 logs.go:123] Gathering logs for etcd [f269961c577e] ...
	I0819 04:35:43.174317   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f269961c577e"
	I0819 04:35:43.192287   18442 logs.go:123] Gathering logs for coredns [19b7edbe3a9f] ...
	I0819 04:35:43.192301   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 19b7edbe3a9f"
	I0819 04:35:43.203453   18442 logs.go:123] Gathering logs for kube-proxy [140eb9e8ffdb] ...
	I0819 04:35:43.203467   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 140eb9e8ffdb"
	I0819 04:35:43.215532   18442 logs.go:123] Gathering logs for dmesg ...
	I0819 04:35:43.215546   18442 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 04:35:43.219894   18442 logs.go:123] Gathering logs for kube-controller-manager [da57f69785e6] ...
	I0819 04:35:43.219901   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da57f69785e6"
	I0819 04:35:43.237222   18442 logs.go:123] Gathering logs for kube-controller-manager [390cd57e246c] ...
	I0819 04:35:43.237237   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 390cd57e246c"
	I0819 04:35:43.257884   18442 logs.go:123] Gathering logs for storage-provisioner [b8b9fabe936d] ...
	I0819 04:35:43.257898   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8b9fabe936d"
	I0819 04:35:43.269363   18442 logs.go:123] Gathering logs for Docker ...
	I0819 04:35:43.269374   18442 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 04:35:43.293432   18442 logs.go:123] Gathering logs for container status ...
	I0819 04:35:43.293446   18442 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 04:35:43.305151   18442 logs.go:123] Gathering logs for kube-apiserver [3985c1d649a7] ...
	I0819 04:35:43.305162   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3985c1d649a7"
	I0819 04:35:43.345881   18442 logs.go:123] Gathering logs for kube-scheduler [1b07c6af2b17] ...
	I0819 04:35:43.345891   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b07c6af2b17"
	I0819 04:35:43.360580   18442 logs.go:123] Gathering logs for kubelet ...
	I0819 04:35:43.360590   18442 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 04:35:43.398854   18442 logs.go:123] Gathering logs for describe nodes ...
	I0819 04:35:43.398870   18442 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 04:35:43.435533   18442 logs.go:123] Gathering logs for kube-scheduler [069748194d02] ...
	I0819 04:35:43.435544   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 069748194d02"
	I0819 04:35:43.452854   18442 logs.go:123] Gathering logs for storage-provisioner [a7ed00b2340f] ...
	I0819 04:35:43.452864   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7ed00b2340f"
	I0819 04:35:45.968600   18442 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 04:35:50.970741   18442 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 04:35:50.970936   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 04:35:50.987870   18442 logs.go:276] 2 containers: [c8fa750d9da6 3985c1d649a7]
	I0819 04:35:50.987957   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 04:35:51.001054   18442 logs.go:276] 2 containers: [02704fafe517 f269961c577e]
	I0819 04:35:51.001123   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 04:35:51.011974   18442 logs.go:276] 1 containers: [19b7edbe3a9f]
	I0819 04:35:51.012041   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 04:35:51.022813   18442 logs.go:276] 2 containers: [1b07c6af2b17 069748194d02]
	I0819 04:35:51.022885   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 04:35:51.033845   18442 logs.go:276] 1 containers: [140eb9e8ffdb]
	I0819 04:35:51.033913   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 04:35:51.044901   18442 logs.go:276] 2 containers: [da57f69785e6 390cd57e246c]
	I0819 04:35:51.044968   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 04:35:51.054770   18442 logs.go:276] 0 containers: []
	W0819 04:35:51.054782   18442 logs.go:278] No container was found matching "kindnet"
	I0819 04:35:51.054838   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 04:35:51.065677   18442 logs.go:276] 2 containers: [a7ed00b2340f b8b9fabe936d]
	I0819 04:35:51.065694   18442 logs.go:123] Gathering logs for storage-provisioner [b8b9fabe936d] ...
	I0819 04:35:51.065700   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8b9fabe936d"
	I0819 04:35:51.077353   18442 logs.go:123] Gathering logs for coredns [19b7edbe3a9f] ...
	I0819 04:35:51.077364   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 19b7edbe3a9f"
	I0819 04:35:51.088848   18442 logs.go:123] Gathering logs for kube-controller-manager [da57f69785e6] ...
	I0819 04:35:51.088864   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da57f69785e6"
	I0819 04:35:51.106707   18442 logs.go:123] Gathering logs for Docker ...
	I0819 04:35:51.106718   18442 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 04:35:51.129853   18442 logs.go:123] Gathering logs for container status ...
	I0819 04:35:51.129862   18442 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 04:35:51.142072   18442 logs.go:123] Gathering logs for describe nodes ...
	I0819 04:35:51.142086   18442 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 04:35:51.178534   18442 logs.go:123] Gathering logs for etcd [f269961c577e] ...
	I0819 04:35:51.178546   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f269961c577e"
	I0819 04:35:51.193553   18442 logs.go:123] Gathering logs for etcd [02704fafe517] ...
	I0819 04:35:51.193566   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 02704fafe517"
	I0819 04:35:51.207313   18442 logs.go:123] Gathering logs for kube-proxy [140eb9e8ffdb] ...
	I0819 04:35:51.207325   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 140eb9e8ffdb"
	I0819 04:35:51.219608   18442 logs.go:123] Gathering logs for storage-provisioner [a7ed00b2340f] ...
	I0819 04:35:51.219619   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7ed00b2340f"
	I0819 04:35:51.231368   18442 logs.go:123] Gathering logs for dmesg ...
	I0819 04:35:51.231381   18442 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 04:35:51.235989   18442 logs.go:123] Gathering logs for kube-apiserver [c8fa750d9da6] ...
	I0819 04:35:51.235996   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c8fa750d9da6"
	I0819 04:35:51.250871   18442 logs.go:123] Gathering logs for kube-scheduler [1b07c6af2b17] ...
	I0819 04:35:51.250881   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b07c6af2b17"
	I0819 04:35:51.264682   18442 logs.go:123] Gathering logs for kube-scheduler [069748194d02] ...
	I0819 04:35:51.264692   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 069748194d02"
	I0819 04:35:51.279413   18442 logs.go:123] Gathering logs for kube-controller-manager [390cd57e246c] ...
	I0819 04:35:51.279424   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 390cd57e246c"
	I0819 04:35:51.293580   18442 logs.go:123] Gathering logs for kubelet ...
	I0819 04:35:51.293590   18442 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 04:35:51.331289   18442 logs.go:123] Gathering logs for kube-apiserver [3985c1d649a7] ...
	I0819 04:35:51.331298   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3985c1d649a7"
	I0819 04:35:53.870932   18442 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 04:35:58.873168   18442 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 04:35:58.873377   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 04:35:58.893945   18442 logs.go:276] 2 containers: [c8fa750d9da6 3985c1d649a7]
	I0819 04:35:58.894034   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 04:35:58.909226   18442 logs.go:276] 2 containers: [02704fafe517 f269961c577e]
	I0819 04:35:58.909308   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 04:35:58.921957   18442 logs.go:276] 1 containers: [19b7edbe3a9f]
	I0819 04:35:58.922033   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 04:35:58.932710   18442 logs.go:276] 2 containers: [1b07c6af2b17 069748194d02]
	I0819 04:35:58.932780   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 04:35:58.942808   18442 logs.go:276] 1 containers: [140eb9e8ffdb]
	I0819 04:35:58.942875   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 04:35:58.953323   18442 logs.go:276] 2 containers: [da57f69785e6 390cd57e246c]
	I0819 04:35:58.953386   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 04:35:58.963461   18442 logs.go:276] 0 containers: []
	W0819 04:35:58.963475   18442 logs.go:278] No container was found matching "kindnet"
	I0819 04:35:58.963533   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 04:35:58.974301   18442 logs.go:276] 2 containers: [a7ed00b2340f b8b9fabe936d]
	I0819 04:35:58.974320   18442 logs.go:123] Gathering logs for etcd [02704fafe517] ...
	I0819 04:35:58.974325   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 02704fafe517"
	I0819 04:35:58.988423   18442 logs.go:123] Gathering logs for etcd [f269961c577e] ...
	I0819 04:35:58.988432   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f269961c577e"
	I0819 04:35:59.002070   18442 logs.go:123] Gathering logs for storage-provisioner [b8b9fabe936d] ...
	I0819 04:35:59.002080   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8b9fabe936d"
	I0819 04:35:59.013444   18442 logs.go:123] Gathering logs for kubelet ...
	I0819 04:35:59.013454   18442 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 04:35:59.050758   18442 logs.go:123] Gathering logs for describe nodes ...
	I0819 04:35:59.050773   18442 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 04:35:59.085368   18442 logs.go:123] Gathering logs for kube-apiserver [c8fa750d9da6] ...
	I0819 04:35:59.085380   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c8fa750d9da6"
	I0819 04:35:59.099914   18442 logs.go:123] Gathering logs for container status ...
	I0819 04:35:59.099925   18442 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 04:35:59.111682   18442 logs.go:123] Gathering logs for dmesg ...
	I0819 04:35:59.111692   18442 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 04:35:59.115995   18442 logs.go:123] Gathering logs for coredns [19b7edbe3a9f] ...
	I0819 04:35:59.116006   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 19b7edbe3a9f"
	I0819 04:35:59.127422   18442 logs.go:123] Gathering logs for kube-controller-manager [da57f69785e6] ...
	I0819 04:35:59.127432   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da57f69785e6"
	I0819 04:35:59.144496   18442 logs.go:123] Gathering logs for storage-provisioner [a7ed00b2340f] ...
	I0819 04:35:59.144508   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7ed00b2340f"
	I0819 04:35:59.156128   18442 logs.go:123] Gathering logs for kube-scheduler [069748194d02] ...
	I0819 04:35:59.156142   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 069748194d02"
	I0819 04:35:59.171672   18442 logs.go:123] Gathering logs for kube-proxy [140eb9e8ffdb] ...
	I0819 04:35:59.171687   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 140eb9e8ffdb"
	I0819 04:35:59.183487   18442 logs.go:123] Gathering logs for kube-controller-manager [390cd57e246c] ...
	I0819 04:35:59.183500   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 390cd57e246c"
	I0819 04:35:59.198655   18442 logs.go:123] Gathering logs for kube-apiserver [3985c1d649a7] ...
	I0819 04:35:59.198668   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3985c1d649a7"
	I0819 04:35:59.237678   18442 logs.go:123] Gathering logs for kube-scheduler [1b07c6af2b17] ...
	I0819 04:35:59.237691   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b07c6af2b17"
	I0819 04:35:59.250894   18442 logs.go:123] Gathering logs for Docker ...
	I0819 04:35:59.250905   18442 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 04:36:01.776258   18442 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 04:36:06.778663   18442 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 04:36:06.779099   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 04:36:06.816493   18442 logs.go:276] 2 containers: [c8fa750d9da6 3985c1d649a7]
	I0819 04:36:06.816628   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 04:36:06.837682   18442 logs.go:276] 2 containers: [02704fafe517 f269961c577e]
	I0819 04:36:06.837773   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 04:36:06.852095   18442 logs.go:276] 1 containers: [19b7edbe3a9f]
	I0819 04:36:06.852166   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 04:36:06.864610   18442 logs.go:276] 2 containers: [1b07c6af2b17 069748194d02]
	I0819 04:36:06.864678   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 04:36:06.875301   18442 logs.go:276] 1 containers: [140eb9e8ffdb]
	I0819 04:36:06.875363   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 04:36:06.889501   18442 logs.go:276] 2 containers: [da57f69785e6 390cd57e246c]
	I0819 04:36:06.889572   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 04:36:06.899837   18442 logs.go:276] 0 containers: []
	W0819 04:36:06.899854   18442 logs.go:278] No container was found matching "kindnet"
	I0819 04:36:06.899909   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 04:36:06.910347   18442 logs.go:276] 2 containers: [a7ed00b2340f b8b9fabe936d]
	I0819 04:36:06.910370   18442 logs.go:123] Gathering logs for dmesg ...
	I0819 04:36:06.910376   18442 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 04:36:06.915038   18442 logs.go:123] Gathering logs for kube-apiserver [3985c1d649a7] ...
	I0819 04:36:06.915046   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3985c1d649a7"
	I0819 04:36:06.960768   18442 logs.go:123] Gathering logs for storage-provisioner [b8b9fabe936d] ...
	I0819 04:36:06.960778   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8b9fabe936d"
	I0819 04:36:06.972350   18442 logs.go:123] Gathering logs for kube-scheduler [1b07c6af2b17] ...
	I0819 04:36:06.972360   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b07c6af2b17"
	I0819 04:36:06.984381   18442 logs.go:123] Gathering logs for kube-proxy [140eb9e8ffdb] ...
	I0819 04:36:06.984395   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 140eb9e8ffdb"
	I0819 04:36:06.996787   18442 logs.go:123] Gathering logs for Docker ...
	I0819 04:36:06.996802   18442 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 04:36:07.020703   18442 logs.go:123] Gathering logs for kubelet ...
	I0819 04:36:07.020714   18442 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 04:36:07.059159   18442 logs.go:123] Gathering logs for describe nodes ...
	I0819 04:36:07.059170   18442 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 04:36:07.098814   18442 logs.go:123] Gathering logs for kube-apiserver [c8fa750d9da6] ...
	I0819 04:36:07.098825   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c8fa750d9da6"
	I0819 04:36:07.114994   18442 logs.go:123] Gathering logs for coredns [19b7edbe3a9f] ...
	I0819 04:36:07.115007   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 19b7edbe3a9f"
	I0819 04:36:07.126691   18442 logs.go:123] Gathering logs for kube-scheduler [069748194d02] ...
	I0819 04:36:07.126701   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 069748194d02"
	I0819 04:36:07.142553   18442 logs.go:123] Gathering logs for kube-controller-manager [390cd57e246c] ...
	I0819 04:36:07.142566   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 390cd57e246c"
	I0819 04:36:07.157409   18442 logs.go:123] Gathering logs for etcd [02704fafe517] ...
	I0819 04:36:07.157423   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 02704fafe517"
	I0819 04:36:07.171153   18442 logs.go:123] Gathering logs for etcd [f269961c577e] ...
	I0819 04:36:07.171164   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f269961c577e"
	I0819 04:36:07.187543   18442 logs.go:123] Gathering logs for kube-controller-manager [da57f69785e6] ...
	I0819 04:36:07.187552   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da57f69785e6"
	I0819 04:36:07.206144   18442 logs.go:123] Gathering logs for storage-provisioner [a7ed00b2340f] ...
	I0819 04:36:07.206154   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7ed00b2340f"
	I0819 04:36:07.217548   18442 logs.go:123] Gathering logs for container status ...
	I0819 04:36:07.217561   18442 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
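The cycle above repeats for every probe in this section: a healthz GET that gives up after roughly five seconds, followed by a fresh round of docker ps enumeration and per-container log gathering. A minimal Go sketch of that probe loop, assuming a 5s client timeout and a self-signed in-VM apiserver certificate (the function, interval, and attempt count are illustrative, not minikube's actual code):

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// pollHealthz mirrors the pattern in the log: probe /healthz with a short
// client timeout and, on failure, let the caller gather diagnostics before
// the next attempt. All names and values here are assumptions.
func pollHealthz(url string, interval time.Duration, attempts int) error {
	client := &http.Client{
		Timeout: 5 * time.Second, // matches the ~5s gap between "Checking" and "stopped" lines
		Transport: &http.Transport{
			// the in-VM apiserver certificate is self-signed, so the probe skips verification
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	for i := 0; i < attempts; i++ {
		if resp, err := client.Get(url); err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
		time.Sleep(interval) // the log shows ~2.5s between a gather cycle ending and the next probe
	}
	return fmt.Errorf("apiserver never reported healthy at %s", url)
}

func main() {
	if err := pollHealthz("https://10.0.2.15:8443/healthz", 2500*time.Millisecond, 3); err != nil {
		fmt.Println(err)
	}
}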
	I0819 04:36:09.734595   18442 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 04:36:14.736539   18442 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 04:36:14.736803   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 04:36:14.765430   18442 logs.go:276] 2 containers: [c8fa750d9da6 3985c1d649a7]
	I0819 04:36:14.765566   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 04:36:14.786094   18442 logs.go:276] 2 containers: [02704fafe517 f269961c577e]
	I0819 04:36:14.786185   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 04:36:14.801434   18442 logs.go:276] 1 containers: [19b7edbe3a9f]
	I0819 04:36:14.801509   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 04:36:14.812557   18442 logs.go:276] 2 containers: [1b07c6af2b17 069748194d02]
	I0819 04:36:14.812626   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 04:36:14.828034   18442 logs.go:276] 1 containers: [140eb9e8ffdb]
	I0819 04:36:14.828104   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 04:36:14.838564   18442 logs.go:276] 2 containers: [da57f69785e6 390cd57e246c]
	I0819 04:36:14.838628   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 04:36:14.848280   18442 logs.go:276] 0 containers: []
	W0819 04:36:14.848295   18442 logs.go:278] No container was found matching "kindnet"
	I0819 04:36:14.848345   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 04:36:14.859018   18442 logs.go:276] 2 containers: [a7ed00b2340f b8b9fabe936d]
	I0819 04:36:14.859035   18442 logs.go:123] Gathering logs for kube-scheduler [1b07c6af2b17] ...
	I0819 04:36:14.859041   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b07c6af2b17"
	I0819 04:36:14.878394   18442 logs.go:123] Gathering logs for kube-apiserver [c8fa750d9da6] ...
	I0819 04:36:14.878405   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c8fa750d9da6"
	I0819 04:36:14.893040   18442 logs.go:123] Gathering logs for coredns [19b7edbe3a9f] ...
	I0819 04:36:14.893050   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 19b7edbe3a9f"
	I0819 04:36:14.904677   18442 logs.go:123] Gathering logs for kube-proxy [140eb9e8ffdb] ...
	I0819 04:36:14.904687   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 140eb9e8ffdb"
	I0819 04:36:14.916395   18442 logs.go:123] Gathering logs for storage-provisioner [a7ed00b2340f] ...
	I0819 04:36:14.916404   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7ed00b2340f"
	I0819 04:36:14.927748   18442 logs.go:123] Gathering logs for container status ...
	I0819 04:36:14.927758   18442 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 04:36:14.939356   18442 logs.go:123] Gathering logs for dmesg ...
	I0819 04:36:14.939368   18442 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 04:36:14.943357   18442 logs.go:123] Gathering logs for kube-apiserver [3985c1d649a7] ...
	I0819 04:36:14.943363   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3985c1d649a7"
	I0819 04:36:14.981057   18442 logs.go:123] Gathering logs for kube-scheduler [069748194d02] ...
	I0819 04:36:14.981068   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 069748194d02"
	I0819 04:36:14.995863   18442 logs.go:123] Gathering logs for describe nodes ...
	I0819 04:36:14.995873   18442 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 04:36:15.030977   18442 logs.go:123] Gathering logs for etcd [f269961c577e] ...
	I0819 04:36:15.030989   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f269961c577e"
	I0819 04:36:15.045648   18442 logs.go:123] Gathering logs for kube-controller-manager [da57f69785e6] ...
	I0819 04:36:15.045658   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da57f69785e6"
	I0819 04:36:15.063175   18442 logs.go:123] Gathering logs for kube-controller-manager [390cd57e246c] ...
	I0819 04:36:15.063185   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 390cd57e246c"
	I0819 04:36:15.077814   18442 logs.go:123] Gathering logs for storage-provisioner [b8b9fabe936d] ...
	I0819 04:36:15.077824   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8b9fabe936d"
	I0819 04:36:15.093881   18442 logs.go:123] Gathering logs for Docker ...
	I0819 04:36:15.093891   18442 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 04:36:15.116836   18442 logs.go:123] Gathering logs for kubelet ...
	I0819 04:36:15.116849   18442 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 04:36:15.153655   18442 logs.go:123] Gathering logs for etcd [02704fafe517] ...
	I0819 04:36:15.153666   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 02704fafe517"
	I0819 04:36:17.669766   18442 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 04:36:22.670082   18442 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 04:36:22.670243   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 04:36:22.681466   18442 logs.go:276] 2 containers: [c8fa750d9da6 3985c1d649a7]
	I0819 04:36:22.681534   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 04:36:22.692029   18442 logs.go:276] 2 containers: [02704fafe517 f269961c577e]
	I0819 04:36:22.692103   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 04:36:22.702276   18442 logs.go:276] 1 containers: [19b7edbe3a9f]
	I0819 04:36:22.702345   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 04:36:22.716251   18442 logs.go:276] 2 containers: [1b07c6af2b17 069748194d02]
	I0819 04:36:22.716325   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 04:36:22.727519   18442 logs.go:276] 1 containers: [140eb9e8ffdb]
	I0819 04:36:22.727591   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 04:36:22.738461   18442 logs.go:276] 2 containers: [da57f69785e6 390cd57e246c]
	I0819 04:36:22.738531   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 04:36:22.749100   18442 logs.go:276] 0 containers: []
	W0819 04:36:22.749113   18442 logs.go:278] No container was found matching "kindnet"
	I0819 04:36:22.749172   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 04:36:22.760205   18442 logs.go:276] 2 containers: [a7ed00b2340f b8b9fabe936d]
	I0819 04:36:22.760222   18442 logs.go:123] Gathering logs for kube-scheduler [1b07c6af2b17] ...
	I0819 04:36:22.760227   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b07c6af2b17"
	I0819 04:36:22.777206   18442 logs.go:123] Gathering logs for kube-controller-manager [da57f69785e6] ...
	I0819 04:36:22.777219   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da57f69785e6"
	I0819 04:36:22.794888   18442 logs.go:123] Gathering logs for container status ...
	I0819 04:36:22.794900   18442 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 04:36:22.806566   18442 logs.go:123] Gathering logs for coredns [19b7edbe3a9f] ...
	I0819 04:36:22.806576   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 19b7edbe3a9f"
	I0819 04:36:22.818555   18442 logs.go:123] Gathering logs for kube-proxy [140eb9e8ffdb] ...
	I0819 04:36:22.818566   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 140eb9e8ffdb"
	I0819 04:36:22.830469   18442 logs.go:123] Gathering logs for dmesg ...
	I0819 04:36:22.830481   18442 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 04:36:22.834965   18442 logs.go:123] Gathering logs for kube-apiserver [c8fa750d9da6] ...
	I0819 04:36:22.834971   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c8fa750d9da6"
	I0819 04:36:22.853574   18442 logs.go:123] Gathering logs for etcd [f269961c577e] ...
	I0819 04:36:22.853585   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f269961c577e"
	I0819 04:36:22.868266   18442 logs.go:123] Gathering logs for storage-provisioner [b8b9fabe936d] ...
	I0819 04:36:22.868278   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8b9fabe936d"
	I0819 04:36:22.879425   18442 logs.go:123] Gathering logs for etcd [02704fafe517] ...
	I0819 04:36:22.879438   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 02704fafe517"
	I0819 04:36:22.893712   18442 logs.go:123] Gathering logs for kube-controller-manager [390cd57e246c] ...
	I0819 04:36:22.893720   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 390cd57e246c"
	I0819 04:36:22.907951   18442 logs.go:123] Gathering logs for storage-provisioner [a7ed00b2340f] ...
	I0819 04:36:22.907960   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7ed00b2340f"
	I0819 04:36:22.919135   18442 logs.go:123] Gathering logs for kube-scheduler [069748194d02] ...
	I0819 04:36:22.919145   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 069748194d02"
	I0819 04:36:22.935836   18442 logs.go:123] Gathering logs for Docker ...
	I0819 04:36:22.935846   18442 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 04:36:22.959864   18442 logs.go:123] Gathering logs for kubelet ...
	I0819 04:36:22.959875   18442 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 04:36:22.999069   18442 logs.go:123] Gathering logs for describe nodes ...
	I0819 04:36:22.999078   18442 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 04:36:23.034763   18442 logs.go:123] Gathering logs for kube-apiserver [3985c1d649a7] ...
	I0819 04:36:23.034776   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3985c1d649a7"
	I0819 04:36:25.576147   18442 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 04:36:30.578306   18442 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 04:36:30.578537   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 04:36:30.592779   18442 logs.go:276] 2 containers: [c8fa750d9da6 3985c1d649a7]
	I0819 04:36:30.592863   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 04:36:30.604225   18442 logs.go:276] 2 containers: [02704fafe517 f269961c577e]
	I0819 04:36:30.604290   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 04:36:30.614641   18442 logs.go:276] 1 containers: [19b7edbe3a9f]
	I0819 04:36:30.614709   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 04:36:30.625466   18442 logs.go:276] 2 containers: [1b07c6af2b17 069748194d02]
	I0819 04:36:30.625543   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 04:36:30.635989   18442 logs.go:276] 1 containers: [140eb9e8ffdb]
	I0819 04:36:30.636055   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 04:36:30.646508   18442 logs.go:276] 2 containers: [da57f69785e6 390cd57e246c]
	I0819 04:36:30.646575   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 04:36:30.656839   18442 logs.go:276] 0 containers: []
	W0819 04:36:30.656850   18442 logs.go:278] No container was found matching "kindnet"
	I0819 04:36:30.656902   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 04:36:30.667188   18442 logs.go:276] 2 containers: [a7ed00b2340f b8b9fabe936d]
	I0819 04:36:30.667207   18442 logs.go:123] Gathering logs for storage-provisioner [a7ed00b2340f] ...
	I0819 04:36:30.667213   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7ed00b2340f"
	I0819 04:36:30.679115   18442 logs.go:123] Gathering logs for storage-provisioner [b8b9fabe936d] ...
	I0819 04:36:30.679126   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8b9fabe936d"
	I0819 04:36:30.691259   18442 logs.go:123] Gathering logs for kubelet ...
	I0819 04:36:30.691270   18442 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 04:36:30.730998   18442 logs.go:123] Gathering logs for kube-apiserver [c8fa750d9da6] ...
	I0819 04:36:30.731015   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c8fa750d9da6"
	I0819 04:36:30.749134   18442 logs.go:123] Gathering logs for etcd [f269961c577e] ...
	I0819 04:36:30.749145   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f269961c577e"
	I0819 04:36:30.763878   18442 logs.go:123] Gathering logs for kube-scheduler [069748194d02] ...
	I0819 04:36:30.763887   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 069748194d02"
	I0819 04:36:30.783295   18442 logs.go:123] Gathering logs for kube-controller-manager [da57f69785e6] ...
	I0819 04:36:30.783305   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da57f69785e6"
	I0819 04:36:30.803535   18442 logs.go:123] Gathering logs for coredns [19b7edbe3a9f] ...
	I0819 04:36:30.803544   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 19b7edbe3a9f"
	I0819 04:36:30.815305   18442 logs.go:123] Gathering logs for kube-controller-manager [390cd57e246c] ...
	I0819 04:36:30.815316   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 390cd57e246c"
	I0819 04:36:30.830504   18442 logs.go:123] Gathering logs for kube-scheduler [1b07c6af2b17] ...
	I0819 04:36:30.830519   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b07c6af2b17"
	I0819 04:36:30.842638   18442 logs.go:123] Gathering logs for Docker ...
	I0819 04:36:30.842648   18442 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 04:36:30.865124   18442 logs.go:123] Gathering logs for container status ...
	I0819 04:36:30.865132   18442 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 04:36:30.877033   18442 logs.go:123] Gathering logs for dmesg ...
	I0819 04:36:30.877044   18442 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 04:36:30.881637   18442 logs.go:123] Gathering logs for describe nodes ...
	I0819 04:36:30.881646   18442 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 04:36:30.916185   18442 logs.go:123] Gathering logs for kube-apiserver [3985c1d649a7] ...
	I0819 04:36:30.916200   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3985c1d649a7"
	I0819 04:36:30.954182   18442 logs.go:123] Gathering logs for etcd [02704fafe517] ...
	I0819 04:36:30.954192   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 02704fafe517"
	I0819 04:36:30.970554   18442 logs.go:123] Gathering logs for kube-proxy [140eb9e8ffdb] ...
	I0819 04:36:30.970564   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 140eb9e8ffdb"
	I0819 04:36:33.484189   18442 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 04:36:38.486325   18442 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 04:36:38.486524   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 04:36:38.500365   18442 logs.go:276] 2 containers: [c8fa750d9da6 3985c1d649a7]
	I0819 04:36:38.500445   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 04:36:38.511240   18442 logs.go:276] 2 containers: [02704fafe517 f269961c577e]
	I0819 04:36:38.511313   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 04:36:38.522495   18442 logs.go:276] 1 containers: [19b7edbe3a9f]
	I0819 04:36:38.522561   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 04:36:38.533228   18442 logs.go:276] 2 containers: [1b07c6af2b17 069748194d02]
	I0819 04:36:38.533312   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 04:36:38.543819   18442 logs.go:276] 1 containers: [140eb9e8ffdb]
	I0819 04:36:38.543890   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 04:36:38.554271   18442 logs.go:276] 2 containers: [da57f69785e6 390cd57e246c]
	I0819 04:36:38.554344   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 04:36:38.564166   18442 logs.go:276] 0 containers: []
	W0819 04:36:38.564176   18442 logs.go:278] No container was found matching "kindnet"
	I0819 04:36:38.564240   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 04:36:38.574346   18442 logs.go:276] 2 containers: [a7ed00b2340f b8b9fabe936d]
	I0819 04:36:38.574366   18442 logs.go:123] Gathering logs for describe nodes ...
	I0819 04:36:38.574372   18442 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 04:36:38.613590   18442 logs.go:123] Gathering logs for kube-apiserver [3985c1d649a7] ...
	I0819 04:36:38.613600   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3985c1d649a7"
	I0819 04:36:38.651489   18442 logs.go:123] Gathering logs for kube-scheduler [1b07c6af2b17] ...
	I0819 04:36:38.651501   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b07c6af2b17"
	I0819 04:36:38.665927   18442 logs.go:123] Gathering logs for kube-proxy [140eb9e8ffdb] ...
	I0819 04:36:38.665940   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 140eb9e8ffdb"
	I0819 04:36:38.677819   18442 logs.go:123] Gathering logs for kubelet ...
	I0819 04:36:38.677830   18442 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 04:36:38.715936   18442 logs.go:123] Gathering logs for kube-apiserver [c8fa750d9da6] ...
	I0819 04:36:38.715947   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c8fa750d9da6"
	I0819 04:36:38.734796   18442 logs.go:123] Gathering logs for kube-scheduler [069748194d02] ...
	I0819 04:36:38.734809   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 069748194d02"
	I0819 04:36:38.750501   18442 logs.go:123] Gathering logs for Docker ...
	I0819 04:36:38.750511   18442 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 04:36:38.774033   18442 logs.go:123] Gathering logs for etcd [02704fafe517] ...
	I0819 04:36:38.774052   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 02704fafe517"
	I0819 04:36:38.789702   18442 logs.go:123] Gathering logs for coredns [19b7edbe3a9f] ...
	I0819 04:36:38.789713   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 19b7edbe3a9f"
	I0819 04:36:38.801721   18442 logs.go:123] Gathering logs for kube-controller-manager [da57f69785e6] ...
	I0819 04:36:38.801735   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da57f69785e6"
	I0819 04:36:38.820129   18442 logs.go:123] Gathering logs for storage-provisioner [b8b9fabe936d] ...
	I0819 04:36:38.820140   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8b9fabe936d"
	I0819 04:36:38.831543   18442 logs.go:123] Gathering logs for dmesg ...
	I0819 04:36:38.831555   18442 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 04:36:38.835625   18442 logs.go:123] Gathering logs for etcd [f269961c577e] ...
	I0819 04:36:38.835631   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f269961c577e"
	I0819 04:36:38.849990   18442 logs.go:123] Gathering logs for kube-controller-manager [390cd57e246c] ...
	I0819 04:36:38.850001   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 390cd57e246c"
	I0819 04:36:38.865129   18442 logs.go:123] Gathering logs for storage-provisioner [a7ed00b2340f] ...
	I0819 04:36:38.865139   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7ed00b2340f"
	I0819 04:36:38.876803   18442 logs.go:123] Gathering logs for container status ...
	I0819 04:36:38.876818   18442 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 04:36:41.389245   18442 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 04:36:46.391587   18442 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 04:36:46.391795   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 04:36:46.415636   18442 logs.go:276] 2 containers: [c8fa750d9da6 3985c1d649a7]
	I0819 04:36:46.415721   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 04:36:46.429521   18442 logs.go:276] 2 containers: [02704fafe517 f269961c577e]
	I0819 04:36:46.429598   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 04:36:46.440589   18442 logs.go:276] 1 containers: [19b7edbe3a9f]
	I0819 04:36:46.440661   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 04:36:46.450727   18442 logs.go:276] 2 containers: [1b07c6af2b17 069748194d02]
	I0819 04:36:46.450795   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 04:36:46.461592   18442 logs.go:276] 1 containers: [140eb9e8ffdb]
	I0819 04:36:46.461670   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 04:36:46.472657   18442 logs.go:276] 2 containers: [da57f69785e6 390cd57e246c]
	I0819 04:36:46.472725   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 04:36:46.484111   18442 logs.go:276] 0 containers: []
	W0819 04:36:46.484126   18442 logs.go:278] No container was found matching "kindnet"
	I0819 04:36:46.484186   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 04:36:46.494305   18442 logs.go:276] 2 containers: [a7ed00b2340f b8b9fabe936d]
	I0819 04:36:46.494322   18442 logs.go:123] Gathering logs for Docker ...
	I0819 04:36:46.494327   18442 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 04:36:46.516169   18442 logs.go:123] Gathering logs for container status ...
	I0819 04:36:46.516181   18442 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 04:36:46.528185   18442 logs.go:123] Gathering logs for dmesg ...
	I0819 04:36:46.528198   18442 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 04:36:46.532771   18442 logs.go:123] Gathering logs for etcd [02704fafe517] ...
	I0819 04:36:46.532780   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 02704fafe517"
	I0819 04:36:46.547255   18442 logs.go:123] Gathering logs for coredns [19b7edbe3a9f] ...
	I0819 04:36:46.547265   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 19b7edbe3a9f"
	I0819 04:36:46.558856   18442 logs.go:123] Gathering logs for kube-controller-manager [390cd57e246c] ...
	I0819 04:36:46.558867   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 390cd57e246c"
	I0819 04:36:46.579630   18442 logs.go:123] Gathering logs for kube-proxy [140eb9e8ffdb] ...
	I0819 04:36:46.579643   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 140eb9e8ffdb"
	I0819 04:36:46.602412   18442 logs.go:123] Gathering logs for kube-controller-manager [da57f69785e6] ...
	I0819 04:36:46.602425   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da57f69785e6"
	I0819 04:36:46.634454   18442 logs.go:123] Gathering logs for storage-provisioner [a7ed00b2340f] ...
	I0819 04:36:46.634468   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7ed00b2340f"
	I0819 04:36:46.646597   18442 logs.go:123] Gathering logs for describe nodes ...
	I0819 04:36:46.646608   18442 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 04:36:46.680551   18442 logs.go:123] Gathering logs for kube-apiserver [c8fa750d9da6] ...
	I0819 04:36:46.680563   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c8fa750d9da6"
	I0819 04:36:46.694548   18442 logs.go:123] Gathering logs for kube-apiserver [3985c1d649a7] ...
	I0819 04:36:46.694560   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3985c1d649a7"
	I0819 04:36:46.732774   18442 logs.go:123] Gathering logs for kube-scheduler [1b07c6af2b17] ...
	I0819 04:36:46.732785   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b07c6af2b17"
	I0819 04:36:46.745178   18442 logs.go:123] Gathering logs for kubelet ...
	I0819 04:36:46.745190   18442 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 04:36:46.784313   18442 logs.go:123] Gathering logs for etcd [f269961c577e] ...
	I0819 04:36:46.784326   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f269961c577e"
	I0819 04:36:46.799456   18442 logs.go:123] Gathering logs for kube-scheduler [069748194d02] ...
	I0819 04:36:46.799470   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 069748194d02"
	I0819 04:36:46.823396   18442 logs.go:123] Gathering logs for storage-provisioner [b8b9fabe936d] ...
	I0819 04:36:46.823407   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8b9fabe936d"
	I0819 04:36:49.336747   18442 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 04:36:54.338901   18442 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 04:36:54.339106   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 04:36:54.359500   18442 logs.go:276] 2 containers: [c8fa750d9da6 3985c1d649a7]
	I0819 04:36:54.359603   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 04:36:54.374475   18442 logs.go:276] 2 containers: [02704fafe517 f269961c577e]
	I0819 04:36:54.374560   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 04:36:54.386261   18442 logs.go:276] 1 containers: [19b7edbe3a9f]
	I0819 04:36:54.386324   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 04:36:54.396909   18442 logs.go:276] 2 containers: [1b07c6af2b17 069748194d02]
	I0819 04:36:54.396970   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 04:36:54.407675   18442 logs.go:276] 1 containers: [140eb9e8ffdb]
	I0819 04:36:54.407743   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 04:36:54.419057   18442 logs.go:276] 2 containers: [da57f69785e6 390cd57e246c]
	I0819 04:36:54.419129   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 04:36:54.429210   18442 logs.go:276] 0 containers: []
	W0819 04:36:54.429223   18442 logs.go:278] No container was found matching "kindnet"
	I0819 04:36:54.429280   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 04:36:54.439680   18442 logs.go:276] 2 containers: [a7ed00b2340f b8b9fabe936d]
	I0819 04:36:54.439699   18442 logs.go:123] Gathering logs for describe nodes ...
	I0819 04:36:54.439705   18442 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 04:36:54.475009   18442 logs.go:123] Gathering logs for kube-apiserver [c8fa750d9da6] ...
	I0819 04:36:54.475021   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c8fa750d9da6"
	I0819 04:36:54.489411   18442 logs.go:123] Gathering logs for kube-apiserver [3985c1d649a7] ...
	I0819 04:36:54.489425   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3985c1d649a7"
	I0819 04:36:54.528600   18442 logs.go:123] Gathering logs for coredns [19b7edbe3a9f] ...
	I0819 04:36:54.528614   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 19b7edbe3a9f"
	I0819 04:36:54.540016   18442 logs.go:123] Gathering logs for storage-provisioner [b8b9fabe936d] ...
	I0819 04:36:54.540029   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8b9fabe936d"
	I0819 04:36:54.551532   18442 logs.go:123] Gathering logs for Docker ...
	I0819 04:36:54.551544   18442 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 04:36:54.572659   18442 logs.go:123] Gathering logs for kubelet ...
	I0819 04:36:54.572667   18442 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 04:36:54.611101   18442 logs.go:123] Gathering logs for etcd [f269961c577e] ...
	I0819 04:36:54.611115   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f269961c577e"
	I0819 04:36:54.630067   18442 logs.go:123] Gathering logs for kube-scheduler [1b07c6af2b17] ...
	I0819 04:36:54.630079   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b07c6af2b17"
	I0819 04:36:54.641699   18442 logs.go:123] Gathering logs for kube-proxy [140eb9e8ffdb] ...
	I0819 04:36:54.641711   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 140eb9e8ffdb"
	I0819 04:36:54.653919   18442 logs.go:123] Gathering logs for storage-provisioner [a7ed00b2340f] ...
	I0819 04:36:54.653932   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7ed00b2340f"
	I0819 04:36:54.665417   18442 logs.go:123] Gathering logs for etcd [02704fafe517] ...
	I0819 04:36:54.665427   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 02704fafe517"
	I0819 04:36:54.682505   18442 logs.go:123] Gathering logs for kube-scheduler [069748194d02] ...
	I0819 04:36:54.682518   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 069748194d02"
	I0819 04:36:54.698612   18442 logs.go:123] Gathering logs for dmesg ...
	I0819 04:36:54.698622   18442 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 04:36:54.702583   18442 logs.go:123] Gathering logs for kube-controller-manager [da57f69785e6] ...
	I0819 04:36:54.702592   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da57f69785e6"
	I0819 04:36:54.720439   18442 logs.go:123] Gathering logs for kube-controller-manager [390cd57e246c] ...
	I0819 04:36:54.720450   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 390cd57e246c"
	I0819 04:36:54.736526   18442 logs.go:123] Gathering logs for container status ...
	I0819 04:36:54.736539   18442 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 04:36:57.249533   18442 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 04:37:02.251856   18442 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 04:37:02.251996   18442 kubeadm.go:597] duration metric: took 4m3.883317042s to restartPrimaryControlPlane
	W0819 04:37:02.252137   18442 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0819 04:37:02.252194   18442 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0819 04:37:03.328188   18442 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force": (1.076003041s)
	I0819 04:37:03.328244   18442 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 04:37:03.333386   18442 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0819 04:37:03.336263   18442 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0819 04:37:03.339005   18442 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0819 04:37:03.339011   18442 kubeadm.go:157] found existing configuration files:
	
	I0819 04:37:03.339034   18442 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:53420 /etc/kubernetes/admin.conf
	I0819 04:37:03.341808   18442 kubeadm.go:163] "https://control-plane.minikube.internal:53420" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:53420 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0819 04:37:03.341829   18442 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0819 04:37:03.344597   18442 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:53420 /etc/kubernetes/kubelet.conf
	I0819 04:37:03.347807   18442 kubeadm.go:163] "https://control-plane.minikube.internal:53420" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:53420 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0819 04:37:03.347825   18442 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0819 04:37:03.350487   18442 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:53420 /etc/kubernetes/controller-manager.conf
	I0819 04:37:03.353090   18442 kubeadm.go:163] "https://control-plane.minikube.internal:53420" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:53420 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0819 04:37:03.353116   18442 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0819 04:37:03.356399   18442 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:53420 /etc/kubernetes/scheduler.conf
	I0819 04:37:03.359853   18442 kubeadm.go:163] "https://control-plane.minikube.internal:53420" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:53420 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0819 04:37:03.359878   18442 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
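The four grep/rm pairs above implement a stale-config sweep: if a kubeconfig does not reference the expected control-plane endpoint (or does not exist, as here), it is removed so that kubeadm init can regenerate it. A compact sketch of that sweep, with run standing in for the ssh_runner seen in the log (the helper names are assumptions, not minikube's API):

package main

import "fmt"

// cleanStaleConfigs removes any kubeconfig that does not mention the expected
// control-plane endpoint; run abstracts the remote shell.
func cleanStaleConfigs(run func(cmd string) error, endpoint string) {
	confs := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, conf := range confs {
		// grep exits non-zero when the endpoint is absent or the file is missing
		if err := run(fmt.Sprintf("sudo grep %s %s", endpoint, conf)); err != nil {
			run(fmt.Sprintf("sudo rm -f %s", conf)) // force kubeadm init to rewrite it
		}
	}
}

func main() {
	// dry run: print the commands instead of executing them remotely
	cleanStaleConfigs(func(cmd string) error {
		fmt.Println("would run:", cmd)
		return fmt.Errorf("not found") // pretend every grep fails, as in the log
	}, "https://control-plane.minikube.internal:53420")
}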
	I0819 04:37:03.362770   18442 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0819 04:37:03.381476   18442 kubeadm.go:310] [init] Using Kubernetes version: v1.24.1
	I0819 04:37:03.381507   18442 kubeadm.go:310] [preflight] Running pre-flight checks
	I0819 04:37:03.432333   18442 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0819 04:37:03.432393   18442 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0819 04:37:03.432450   18442 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0819 04:37:03.485102   18442 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0819 04:37:03.493291   18442 out.go:235]   - Generating certificates and keys ...
	I0819 04:37:03.493325   18442 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0819 04:37:03.493353   18442 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0819 04:37:03.493425   18442 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0819 04:37:03.493485   18442 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0819 04:37:03.493555   18442 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0819 04:37:03.493583   18442 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0819 04:37:03.493633   18442 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0819 04:37:03.493665   18442 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0819 04:37:03.493706   18442 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0819 04:37:03.493745   18442 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0819 04:37:03.493763   18442 kubeadm.go:310] [certs] Using the existing "sa" key
	I0819 04:37:03.493795   18442 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0819 04:37:03.611579   18442 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0819 04:37:03.725338   18442 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0819 04:37:03.770146   18442 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0819 04:37:03.926758   18442 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0819 04:37:03.957772   18442 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0819 04:37:03.958139   18442 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0819 04:37:03.958178   18442 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0819 04:37:04.043932   18442 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0819 04:37:04.048131   18442 out.go:235]   - Booting up control plane ...
	I0819 04:37:04.048179   18442 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0819 04:37:04.048222   18442 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0819 04:37:04.048262   18442 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0819 04:37:04.048306   18442 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0819 04:37:04.048405   18442 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0819 04:37:09.044540   18442 kubeadm.go:310] [apiclient] All control plane components are healthy after 5.002590 seconds
	I0819 04:37:09.044656   18442 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0819 04:37:09.049172   18442 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0819 04:37:09.565382   18442 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0819 04:37:09.565714   18442 kubeadm.go:310] [mark-control-plane] Marking the node stopped-upgrade-783000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0819 04:37:10.074205   18442 kubeadm.go:310] [bootstrap-token] Using token: rv7b32.4t5lzmukqj5o3yq7
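Bootstrap tokens follow kubeadm's documented [a-z0-9]{6}.[a-z0-9]{16} format, which the token above matches; a quick check in Go:

package main

import (
	"fmt"
	"regexp"
)

func main() {
	// the kubeadm bootstrap token format documented upstream
	tokenRE := regexp.MustCompile(`^[a-z0-9]{6}\.[a-z0-9]{16}$`)
	fmt.Println(tokenRE.MatchString("rv7b32.4t5lzmukqj5o3yq7")) // true
}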
	I0819 04:37:10.080652   18442 out.go:235]   - Configuring RBAC rules ...
	I0819 04:37:10.080783   18442 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0819 04:37:10.080943   18442 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0819 04:37:10.087791   18442 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0819 04:37:10.089227   18442 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller to automatically approve CSRs from a Node Bootstrap Token
	I0819 04:37:10.090963   18442 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0819 04:37:10.092416   18442 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0819 04:37:10.097161   18442 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0819 04:37:10.277847   18442 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0819 04:37:10.480613   18442 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0819 04:37:10.481044   18442 kubeadm.go:310] 
	I0819 04:37:10.481079   18442 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0819 04:37:10.481086   18442 kubeadm.go:310] 
	I0819 04:37:10.481141   18442 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0819 04:37:10.481165   18442 kubeadm.go:310] 
	I0819 04:37:10.481184   18442 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0819 04:37:10.481264   18442 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0819 04:37:10.481305   18442 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0819 04:37:10.481325   18442 kubeadm.go:310] 
	I0819 04:37:10.481362   18442 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0819 04:37:10.481369   18442 kubeadm.go:310] 
	I0819 04:37:10.481403   18442 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0819 04:37:10.481410   18442 kubeadm.go:310] 
	I0819 04:37:10.481446   18442 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0819 04:37:10.481490   18442 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0819 04:37:10.481575   18442 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0819 04:37:10.481613   18442 kubeadm.go:310] 
	I0819 04:37:10.481710   18442 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0819 04:37:10.481773   18442 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0819 04:37:10.481777   18442 kubeadm.go:310] 
	I0819 04:37:10.481829   18442 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token rv7b32.4t5lzmukqj5o3yq7 \
	I0819 04:37:10.481881   18442 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:fdec06fb19d9977c9b3b338deaa57f7eb3ba1844358bb196808407a1fb1d5577 \
	I0819 04:37:10.481893   18442 kubeadm.go:310] 	--control-plane 
	I0819 04:37:10.481895   18442 kubeadm.go:310] 
	I0819 04:37:10.481987   18442 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0819 04:37:10.481993   18442 kubeadm.go:310] 
	I0819 04:37:10.482049   18442 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token rv7b32.4t5lzmukqj5o3yq7 \
	I0819 04:37:10.482122   18442 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:fdec06fb19d9977c9b3b338deaa57f7eb3ba1844358bb196808407a1fb1d5577 
	I0819 04:37:10.482196   18442 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0819 04:37:10.482207   18442 cni.go:84] Creating CNI manager for ""
	I0819 04:37:10.482216   18442 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0819 04:37:10.489180   18442 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0819 04:37:10.493235   18442 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0819 04:37:10.496646   18442 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
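The 496-byte file copied above is the bridge CNI config the preceding cni.go lines recommend. A plausible minimal conflist of that shape, embedded as a Go string (the contents are an assumption for illustration; the exact file minikube writes may differ):

package main

import "fmt"

// bridgeConflist is an illustrative bridge CNI config of the kind written to
// /etc/cni/net.d/1-k8s.conflist: a bridge plugin with host-local IPAM plus a
// portmap plugin for hostPort support.
const bridgeConflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "ipam": {
        "type": "host-local",
        "subnet": "10.244.0.0/16"
      }
    },
    {
      "type": "portmap",
      "capabilities": { "portMappings": true }
    }
  ]
}`

func main() { fmt.Println(bridgeConflist) }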
	I0819 04:37:10.501609   18442 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0819 04:37:10.501658   18442 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes stopped-upgrade-783000 minikube.k8s.io/updated_at=2024_08_19T04_37_10_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=c539cede7c104fd836c3af55c4ca24a6409a3ce6 minikube.k8s.io/name=stopped-upgrade-783000 minikube.k8s.io/primary=true
	I0819 04:37:10.501658   18442 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 04:37:10.509249   18442 ops.go:34] apiserver oom_adj: -16
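The -16 reported above comes from reading the apiserver's /proc/<pid>/oom_adj, as the cat command two lines earlier does. A sketch of that probe, run locally rather than over ssh (which is an assumption for simplicity):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

// apiserverOOMAdj finds the kube-apiserver pid and reads its OOM adjust
// score, mirroring the in-guest pgrep/cat pair in the log.
func apiserverOOMAdj() (string, error) {
	out, err := exec.Command("pgrep", "kube-apiserver").Output()
	if err != nil {
		return "", err
	}
	pid := strings.Fields(string(out))[0] // first matching pid
	data, err := os.ReadFile("/proc/" + pid + "/oom_adj")
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(data)), nil
}

func main() {
	if adj, err := apiserverOOMAdj(); err == nil {
		fmt.Println("apiserver oom_adj:", adj) // -16 in the log
	}
}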
	I0819 04:37:10.536193   18442 kubeadm.go:1113] duration metric: took 34.572958ms to wait for elevateKubeSystemPrivileges
	I0819 04:37:10.544660   18442 kubeadm.go:394] duration metric: took 4m12.191839417s to StartCluster
	I0819 04:37:10.544680   18442 settings.go:142] acquiring lock: {Name:mk0efade08e7fded56aa74c9b61139ee991f6648 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 04:37:10.544774   18442 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19479-15750/kubeconfig
	I0819 04:37:10.545214   18442 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19479-15750/kubeconfig: {Name:mkc1a7b531aa1d2d8dba135f7548c07a5ca371ca Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 04:37:10.545436   18442 start.go:235] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0819 04:37:10.545535   18442 config.go:182] Loaded profile config "stopped-upgrade-783000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0819 04:37:10.545480   18442 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0819 04:37:10.545551   18442 addons.go:69] Setting storage-provisioner=true in profile "stopped-upgrade-783000"
	I0819 04:37:10.545564   18442 addons.go:69] Setting default-storageclass=true in profile "stopped-upgrade-783000"
	I0819 04:37:10.545579   18442 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "stopped-upgrade-783000"
	I0819 04:37:10.545569   18442 addons.go:234] Setting addon storage-provisioner=true in "stopped-upgrade-783000"
	W0819 04:37:10.545594   18442 addons.go:243] addon storage-provisioner should already be in state true
	I0819 04:37:10.545604   18442 host.go:66] Checking if "stopped-upgrade-783000" exists ...
	I0819 04:37:10.548167   18442 out.go:177] * Verifying Kubernetes components...
	I0819 04:37:10.548860   18442 kapi.go:59] client config for stopped-upgrade-783000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19479-15750/.minikube/profiles/stopped-upgrade-783000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19479-15750/.minikube/profiles/stopped-upgrade-783000/client.key", CAFile:"/Users/jenkins/minikube-integration/19479-15750/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1021bd610), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
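The rest.Config dump above corresponds to a client-go configuration built from the profile's client certificate. A minimal equivalent using client-go's rest.Config, with the host and file paths taken from the log (the helper name is an assumption; module boilerplate omitted):

package main

import (
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

// newClient builds a clientset equivalent to the dumped config: the apiserver
// host plus the client cert/key and CA paths shown in the log line above.
func newClient() (*kubernetes.Clientset, error) {
	cfg := &rest.Config{
		Host: "https://10.0.2.15:8443",
		TLSClientConfig: rest.TLSClientConfig{
			CertFile: "/Users/jenkins/minikube-integration/19479-15750/.minikube/profiles/stopped-upgrade-783000/client.crt",
			KeyFile:  "/Users/jenkins/minikube-integration/19479-15750/.minikube/profiles/stopped-upgrade-783000/client.key",
			CAFile:   "/Users/jenkins/minikube-integration/19479-15750/.minikube/ca.crt",
		},
	}
	return kubernetes.NewForConfig(cfg)
}

func main() {
	if _, err := newClient(); err != nil {
		fmt.Println(err)
	}
}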
	I0819 04:37:10.552503   18442 addons.go:234] Setting addon default-storageclass=true in "stopped-upgrade-783000"
	W0819 04:37:10.552508   18442 addons.go:243] addon default-storageclass should already be in state true
	I0819 04:37:10.552519   18442 host.go:66] Checking if "stopped-upgrade-783000" exists ...
	I0819 04:37:10.553097   18442 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0819 04:37:10.553103   18442 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0819 04:37:10.553108   18442 sshutil.go:53] new ssh client: &{IP:localhost Port:53385 SSHKeyPath:/Users/jenkins/minikube-integration/19479-15750/.minikube/machines/stopped-upgrade-783000/id_rsa Username:docker}
	I0819 04:37:10.556191   18442 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0819 04:37:10.562340   18442 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 04:37:10.566227   18442 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0819 04:37:10.566239   18442 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0819 04:37:10.566250   18442 sshutil.go:53] new ssh client: &{IP:localhost Port:53385 SSHKeyPath:/Users/jenkins/minikube-integration/19479-15750/.minikube/machines/stopped-upgrade-783000/id_rsa Username:docker}
	I0819 04:37:10.657907   18442 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0819 04:37:10.663451   18442 api_server.go:52] waiting for apiserver process to appear ...
	I0819 04:37:10.663501   18442 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 04:37:10.667554   18442 api_server.go:72] duration metric: took 122.106459ms to wait for apiserver process to appear ...
	I0819 04:37:10.667563   18442 api_server.go:88] waiting for apiserver healthz status ...
	I0819 04:37:10.667571   18442 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 04:37:10.710813   18442 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0819 04:37:10.749912   18442 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0819 04:37:11.110601   18442 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0819 04:37:11.110615   18442 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0819 04:37:15.669479   18442 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 04:37:15.669531   18442 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 04:37:20.669828   18442 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 04:37:20.669869   18442 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 04:37:25.670230   18442 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 04:37:25.670280   18442 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 04:37:30.670588   18442 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 04:37:30.670612   18442 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 04:37:35.671081   18442 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 04:37:35.671166   18442 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 04:37:40.671780   18442 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 04:37:40.671808   18442 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W0819 04:37:41.112404   18442 out.go:270] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I0819 04:37:41.120728   18442 out.go:177] * Enabled addons: storage-provisioner
	I0819 04:37:41.126632   18442 addons.go:510] duration metric: took 30.5818435s for enable addons: enabled=[storage-provisioner]
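
The addon step above stages each manifest onto the guest over SSH, then applies it with the release-pinned kubectl under the cluster's kubeconfig. A minimal Go sketch of that apply pattern, with local exec standing in for the test's ssh_runner (an assumption for illustration); the binary and manifest paths are the ones in the log lines:

    // Sketch only: local exec replaces the ssh_runner transport the test uses.
    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func applyAddon(manifest string) error {
    	// Mirrors the logged command: sudo KUBECONFIG=... kubectl apply -f <manifest>
    	cmd := exec.Command("sudo", "env",
    		"KUBECONFIG=/var/lib/minikube/kubeconfig",
    		"/var/lib/minikube/binaries/v1.24.1/kubectl",
    		"apply", "-f", manifest)
    	out, err := cmd.CombinedOutput()
    	if err != nil {
    		return fmt.Errorf("kubectl apply %s: %v: %s", manifest, err, out)
    	}
    	return nil
    }

    func main() {
    	for _, m := range []string{
    		"/etc/kubernetes/addons/storageclass.yaml",
    		"/etc/kubernetes/addons/storage-provisioner.yaml",
    	} {
    		if err := applyAddon(m); err != nil {
    			fmt.Println(err)
    		}
    	}
    }

Note how the storage-provisioner apply succeeds before the apiserver goes dark, while the default-storageclass callback (which must list StorageClasses through the API) times out, matching the warning above.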
	I0819 04:37:45.672645   18442 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 04:37:45.672713   18442 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 04:37:50.674178   18442 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 04:37:50.674214   18442 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 04:37:55.675691   18442 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 04:37:55.675715   18442 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 04:38:00.677789   18442 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 04:38:00.677809   18442 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 04:38:05.678774   18442 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 04:38:05.678806   18442 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 04:38:10.680959   18442 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
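
The repeating "Checking ... healthz" / "stopped: ... Client.Timeout exceeded" pairs above are a health poll: each GET to /healthz is cut off by a roughly 5-second client timeout, then retried until an overall deadline. A minimal sketch of such a loop, assuming a self-signed apiserver certificate (hence the skipped TLS verification); this is the shape the log implies, not minikube's actual api_server.go:

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"net/http"
    	"time"
    )

    func waitHealthy(url string, deadline time.Duration) error {
    	client := &http.Client{
    		Timeout: 5 * time.Second, // matches the ~5s spacing of the "stopped:" lines
    		Transport: &http.Transport{
    			// Assumption: the apiserver serves a self-signed cert in-test.
    			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
    		},
    	}
    	stop := time.Now().Add(deadline)
    	for time.Now().Before(stop) {
    		resp, err := client.Get(url)
    		if err == nil {
    			resp.Body.Close()
    			if resp.StatusCode == http.StatusOK {
    				return nil
    			}
    		}
    		// A timeout here surfaces as "context deadline exceeded
    		// (Client.Timeout exceeded while awaiting headers)" in the report.
    	}
    	return fmt.Errorf("apiserver never became healthy at %s", url)
    }

    func main() {
    	fmt.Println(waitHealthy("https://10.0.2.15:8443/healthz", 2*time.Minute))
    }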
	I0819 04:38:10.681118   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 04:38:10.696821   18442 logs.go:276] 1 containers: [3532d65b8a5a]
	I0819 04:38:10.696900   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 04:38:10.708870   18442 logs.go:276] 1 containers: [0e52abab1282]
	I0819 04:38:10.708935   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 04:38:10.719464   18442 logs.go:276] 2 containers: [893df8098815 18cd291ffa7e]
	I0819 04:38:10.719540   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 04:38:10.729997   18442 logs.go:276] 1 containers: [7e19570fc95e]
	I0819 04:38:10.730069   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 04:38:10.740131   18442 logs.go:276] 1 containers: [f4ef51a77896]
	I0819 04:38:10.740194   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 04:38:10.750370   18442 logs.go:276] 1 containers: [71655a8d5baa]
	I0819 04:38:10.750428   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 04:38:10.760690   18442 logs.go:276] 0 containers: []
	W0819 04:38:10.760702   18442 logs.go:278] No container was found matching "kindnet"
	I0819 04:38:10.760757   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 04:38:10.770733   18442 logs.go:276] 1 containers: [74cf69aed6bf]
	I0819 04:38:10.770752   18442 logs.go:123] Gathering logs for dmesg ...
	I0819 04:38:10.770762   18442 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 04:38:10.775045   18442 logs.go:123] Gathering logs for describe nodes ...
	I0819 04:38:10.775051   18442 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 04:38:10.810338   18442 logs.go:123] Gathering logs for coredns [18cd291ffa7e] ...
	I0819 04:38:10.810352   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18cd291ffa7e"
	I0819 04:38:10.822065   18442 logs.go:123] Gathering logs for kube-scheduler [7e19570fc95e] ...
	I0819 04:38:10.822075   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e19570fc95e"
	I0819 04:38:10.837790   18442 logs.go:123] Gathering logs for kube-proxy [f4ef51a77896] ...
	I0819 04:38:10.837800   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f4ef51a77896"
	I0819 04:38:10.849821   18442 logs.go:123] Gathering logs for storage-provisioner [74cf69aed6bf] ...
	I0819 04:38:10.849835   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74cf69aed6bf"
	I0819 04:38:10.861216   18442 logs.go:123] Gathering logs for container status ...
	I0819 04:38:10.861230   18442 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 04:38:10.872475   18442 logs.go:123] Gathering logs for kubelet ...
	I0819 04:38:10.872486   18442 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 04:38:10.911658   18442 logs.go:123] Gathering logs for etcd [0e52abab1282] ...
	I0819 04:38:10.911668   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e52abab1282"
	I0819 04:38:10.925807   18442 logs.go:123] Gathering logs for coredns [893df8098815] ...
	I0819 04:38:10.925816   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 893df8098815"
	I0819 04:38:10.947655   18442 logs.go:123] Gathering logs for kube-controller-manager [71655a8d5baa] ...
	I0819 04:38:10.947666   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 71655a8d5baa"
	I0819 04:38:10.966333   18442 logs.go:123] Gathering logs for Docker ...
	I0819 04:38:10.966341   18442 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 04:38:10.990747   18442 logs.go:123] Gathering logs for kube-apiserver [3532d65b8a5a] ...
	I0819 04:38:10.990754   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3532d65b8a5a"
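
When the poll keeps failing, the test drops into the diagnostic sweep that fills the rest of this section: enumerate each control-plane container by its k8s_ name filter, then tail the last 400 lines of each. A sketch of that sweep, again with local exec standing in for the SSH transport (an assumption); the component names mirror the filters in the log:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // containerIDs lists container IDs whose name matches k8s_<component>,
    // as in the logged "docker ps -a --filter=name=... --format={{.ID}}" calls.
    func containerIDs(component string) ([]string, error) {
    	out, err := exec.Command("docker", "ps", "-a",
    		"--filter", "name=k8s_"+component,
    		"--format", "{{.ID}}").Output()
    	if err != nil {
    		return nil, err
    	}
    	return strings.Fields(string(out)), nil
    }

    func main() {
    	for _, c := range []string{"kube-apiserver", "etcd", "coredns",
    		"kube-scheduler", "kube-proxy", "kube-controller-manager",
    		"kindnet", "storage-provisioner"} {
    		ids, err := containerIDs(c)
    		if err != nil || len(ids) == 0 {
    			// Logged as: No container was found matching "<component>"
    			fmt.Printf("no container found matching %q\n", c)
    			continue
    		}
    		for _, id := range ids {
    			// Mirrors the logged: docker logs --tail 400 <id>
    			logs, _ := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
    			fmt.Printf("== %s [%s] ==\n%s\n", c, id, logs)
    		}
    	}
    }

One detail worth watching in the cycles below: the coredns filter starts matching two containers, then three, then four, which suggests coredns pods are being recreated while the apiserver endpoint stays unreachable.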
	I0819 04:38:13.508258   18442 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 04:38:18.511085   18442 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 04:38:18.511541   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 04:38:18.546341   18442 logs.go:276] 1 containers: [3532d65b8a5a]
	I0819 04:38:18.546469   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 04:38:18.572913   18442 logs.go:276] 1 containers: [0e52abab1282]
	I0819 04:38:18.573000   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 04:38:18.590371   18442 logs.go:276] 2 containers: [893df8098815 18cd291ffa7e]
	I0819 04:38:18.590444   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 04:38:18.602563   18442 logs.go:276] 1 containers: [7e19570fc95e]
	I0819 04:38:18.602630   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 04:38:18.612984   18442 logs.go:276] 1 containers: [f4ef51a77896]
	I0819 04:38:18.613052   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 04:38:18.624016   18442 logs.go:276] 1 containers: [71655a8d5baa]
	I0819 04:38:18.624081   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 04:38:18.634196   18442 logs.go:276] 0 containers: []
	W0819 04:38:18.634207   18442 logs.go:278] No container was found matching "kindnet"
	I0819 04:38:18.634262   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 04:38:18.644669   18442 logs.go:276] 1 containers: [74cf69aed6bf]
	I0819 04:38:18.644683   18442 logs.go:123] Gathering logs for coredns [18cd291ffa7e] ...
	I0819 04:38:18.644689   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18cd291ffa7e"
	I0819 04:38:18.656425   18442 logs.go:123] Gathering logs for kube-controller-manager [71655a8d5baa] ...
	I0819 04:38:18.656438   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 71655a8d5baa"
	I0819 04:38:18.674780   18442 logs.go:123] Gathering logs for storage-provisioner [74cf69aed6bf] ...
	I0819 04:38:18.674792   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74cf69aed6bf"
	I0819 04:38:18.686811   18442 logs.go:123] Gathering logs for container status ...
	I0819 04:38:18.686825   18442 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 04:38:18.699495   18442 logs.go:123] Gathering logs for kubelet ...
	I0819 04:38:18.699509   18442 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 04:38:18.737620   18442 logs.go:123] Gathering logs for dmesg ...
	I0819 04:38:18.737630   18442 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 04:38:18.741608   18442 logs.go:123] Gathering logs for describe nodes ...
	I0819 04:38:18.741614   18442 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 04:38:18.775741   18442 logs.go:123] Gathering logs for coredns [893df8098815] ...
	I0819 04:38:18.775755   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 893df8098815"
	I0819 04:38:18.787557   18442 logs.go:123] Gathering logs for Docker ...
	I0819 04:38:18.787567   18442 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 04:38:18.812610   18442 logs.go:123] Gathering logs for kube-apiserver [3532d65b8a5a] ...
	I0819 04:38:18.812621   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3532d65b8a5a"
	I0819 04:38:18.835168   18442 logs.go:123] Gathering logs for etcd [0e52abab1282] ...
	I0819 04:38:18.835179   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e52abab1282"
	I0819 04:38:18.851614   18442 logs.go:123] Gathering logs for kube-scheduler [7e19570fc95e] ...
	I0819 04:38:18.851631   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e19570fc95e"
	I0819 04:38:18.867026   18442 logs.go:123] Gathering logs for kube-proxy [f4ef51a77896] ...
	I0819 04:38:18.867036   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f4ef51a77896"
	I0819 04:38:21.380600   18442 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 04:38:26.383362   18442 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 04:38:26.383642   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 04:38:26.409404   18442 logs.go:276] 1 containers: [3532d65b8a5a]
	I0819 04:38:26.409523   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 04:38:26.427109   18442 logs.go:276] 1 containers: [0e52abab1282]
	I0819 04:38:26.427197   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 04:38:26.439979   18442 logs.go:276] 2 containers: [893df8098815 18cd291ffa7e]
	I0819 04:38:26.440049   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 04:38:26.451251   18442 logs.go:276] 1 containers: [7e19570fc95e]
	I0819 04:38:26.451314   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 04:38:26.461359   18442 logs.go:276] 1 containers: [f4ef51a77896]
	I0819 04:38:26.461419   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 04:38:26.471664   18442 logs.go:276] 1 containers: [71655a8d5baa]
	I0819 04:38:26.471721   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 04:38:26.481562   18442 logs.go:276] 0 containers: []
	W0819 04:38:26.481571   18442 logs.go:278] No container was found matching "kindnet"
	I0819 04:38:26.481619   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 04:38:26.492089   18442 logs.go:276] 1 containers: [74cf69aed6bf]
	I0819 04:38:26.492102   18442 logs.go:123] Gathering logs for dmesg ...
	I0819 04:38:26.492108   18442 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 04:38:26.496817   18442 logs.go:123] Gathering logs for coredns [893df8098815] ...
	I0819 04:38:26.496826   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 893df8098815"
	I0819 04:38:26.508359   18442 logs.go:123] Gathering logs for coredns [18cd291ffa7e] ...
	I0819 04:38:26.508371   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18cd291ffa7e"
	I0819 04:38:26.520561   18442 logs.go:123] Gathering logs for kube-scheduler [7e19570fc95e] ...
	I0819 04:38:26.520572   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e19570fc95e"
	I0819 04:38:26.535499   18442 logs.go:123] Gathering logs for kube-controller-manager [71655a8d5baa] ...
	I0819 04:38:26.535506   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 71655a8d5baa"
	I0819 04:38:26.552453   18442 logs.go:123] Gathering logs for storage-provisioner [74cf69aed6bf] ...
	I0819 04:38:26.552460   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74cf69aed6bf"
	I0819 04:38:26.568027   18442 logs.go:123] Gathering logs for container status ...
	I0819 04:38:26.568036   18442 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 04:38:26.582332   18442 logs.go:123] Gathering logs for kubelet ...
	I0819 04:38:26.582340   18442 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 04:38:26.621970   18442 logs.go:123] Gathering logs for kube-apiserver [3532d65b8a5a] ...
	I0819 04:38:26.621985   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3532d65b8a5a"
	I0819 04:38:26.639193   18442 logs.go:123] Gathering logs for etcd [0e52abab1282] ...
	I0819 04:38:26.639206   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e52abab1282"
	I0819 04:38:26.653809   18442 logs.go:123] Gathering logs for kube-proxy [f4ef51a77896] ...
	I0819 04:38:26.653831   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f4ef51a77896"
	I0819 04:38:26.665264   18442 logs.go:123] Gathering logs for Docker ...
	I0819 04:38:26.665273   18442 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 04:38:26.688805   18442 logs.go:123] Gathering logs for describe nodes ...
	I0819 04:38:26.688813   18442 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 04:38:29.226281   18442 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 04:38:34.228905   18442 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 04:38:34.229267   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 04:38:34.263928   18442 logs.go:276] 1 containers: [3532d65b8a5a]
	I0819 04:38:34.264041   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 04:38:34.283592   18442 logs.go:276] 1 containers: [0e52abab1282]
	I0819 04:38:34.283681   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 04:38:34.297853   18442 logs.go:276] 2 containers: [893df8098815 18cd291ffa7e]
	I0819 04:38:34.297915   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 04:38:34.310094   18442 logs.go:276] 1 containers: [7e19570fc95e]
	I0819 04:38:34.310158   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 04:38:34.320549   18442 logs.go:276] 1 containers: [f4ef51a77896]
	I0819 04:38:34.320620   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 04:38:34.336766   18442 logs.go:276] 1 containers: [71655a8d5baa]
	I0819 04:38:34.336837   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 04:38:34.347090   18442 logs.go:276] 0 containers: []
	W0819 04:38:34.347104   18442 logs.go:278] No container was found matching "kindnet"
	I0819 04:38:34.347162   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 04:38:34.357453   18442 logs.go:276] 1 containers: [74cf69aed6bf]
	I0819 04:38:34.357469   18442 logs.go:123] Gathering logs for etcd [0e52abab1282] ...
	I0819 04:38:34.357474   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e52abab1282"
	I0819 04:38:34.375283   18442 logs.go:123] Gathering logs for coredns [18cd291ffa7e] ...
	I0819 04:38:34.375293   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18cd291ffa7e"
	I0819 04:38:34.386156   18442 logs.go:123] Gathering logs for kube-scheduler [7e19570fc95e] ...
	I0819 04:38:34.386167   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e19570fc95e"
	I0819 04:38:34.401844   18442 logs.go:123] Gathering logs for kube-proxy [f4ef51a77896] ...
	I0819 04:38:34.401855   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f4ef51a77896"
	I0819 04:38:34.413393   18442 logs.go:123] Gathering logs for container status ...
	I0819 04:38:34.413403   18442 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 04:38:34.424611   18442 logs.go:123] Gathering logs for dmesg ...
	I0819 04:38:34.424625   18442 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 04:38:34.429015   18442 logs.go:123] Gathering logs for describe nodes ...
	I0819 04:38:34.429022   18442 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 04:38:34.463700   18442 logs.go:123] Gathering logs for kube-apiserver [3532d65b8a5a] ...
	I0819 04:38:34.463711   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3532d65b8a5a"
	I0819 04:38:34.478420   18442 logs.go:123] Gathering logs for storage-provisioner [74cf69aed6bf] ...
	I0819 04:38:34.478431   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74cf69aed6bf"
	I0819 04:38:34.489953   18442 logs.go:123] Gathering logs for Docker ...
	I0819 04:38:34.489963   18442 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 04:38:34.512963   18442 logs.go:123] Gathering logs for kubelet ...
	I0819 04:38:34.512977   18442 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 04:38:34.549381   18442 logs.go:123] Gathering logs for coredns [893df8098815] ...
	I0819 04:38:34.549391   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 893df8098815"
	I0819 04:38:34.560866   18442 logs.go:123] Gathering logs for kube-controller-manager [71655a8d5baa] ...
	I0819 04:38:34.560878   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 71655a8d5baa"
	I0819 04:38:37.087534   18442 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 04:38:42.090214   18442 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 04:38:42.090607   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 04:38:42.130474   18442 logs.go:276] 1 containers: [3532d65b8a5a]
	I0819 04:38:42.130631   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 04:38:42.151171   18442 logs.go:276] 1 containers: [0e52abab1282]
	I0819 04:38:42.151276   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 04:38:42.167159   18442 logs.go:276] 2 containers: [893df8098815 18cd291ffa7e]
	I0819 04:38:42.167223   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 04:38:42.179579   18442 logs.go:276] 1 containers: [7e19570fc95e]
	I0819 04:38:42.179649   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 04:38:42.190403   18442 logs.go:276] 1 containers: [f4ef51a77896]
	I0819 04:38:42.190474   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 04:38:42.200539   18442 logs.go:276] 1 containers: [71655a8d5baa]
	I0819 04:38:42.200597   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 04:38:42.214701   18442 logs.go:276] 0 containers: []
	W0819 04:38:42.214713   18442 logs.go:278] No container was found matching "kindnet"
	I0819 04:38:42.214768   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 04:38:42.228885   18442 logs.go:276] 1 containers: [74cf69aed6bf]
	I0819 04:38:42.228901   18442 logs.go:123] Gathering logs for Docker ...
	I0819 04:38:42.228907   18442 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 04:38:42.252198   18442 logs.go:123] Gathering logs for kubelet ...
	I0819 04:38:42.252207   18442 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 04:38:42.289347   18442 logs.go:123] Gathering logs for dmesg ...
	I0819 04:38:42.289358   18442 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 04:38:42.293409   18442 logs.go:123] Gathering logs for describe nodes ...
	I0819 04:38:42.293419   18442 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 04:38:42.326838   18442 logs.go:123] Gathering logs for etcd [0e52abab1282] ...
	I0819 04:38:42.326847   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e52abab1282"
	I0819 04:38:42.340824   18442 logs.go:123] Gathering logs for kube-proxy [f4ef51a77896] ...
	I0819 04:38:42.340833   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f4ef51a77896"
	I0819 04:38:42.352406   18442 logs.go:123] Gathering logs for kube-controller-manager [71655a8d5baa] ...
	I0819 04:38:42.352418   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 71655a8d5baa"
	I0819 04:38:42.371284   18442 logs.go:123] Gathering logs for storage-provisioner [74cf69aed6bf] ...
	I0819 04:38:42.371296   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74cf69aed6bf"
	I0819 04:38:42.383060   18442 logs.go:123] Gathering logs for container status ...
	I0819 04:38:42.383072   18442 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 04:38:42.399374   18442 logs.go:123] Gathering logs for kube-apiserver [3532d65b8a5a] ...
	I0819 04:38:42.399384   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3532d65b8a5a"
	I0819 04:38:42.414167   18442 logs.go:123] Gathering logs for coredns [893df8098815] ...
	I0819 04:38:42.414179   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 893df8098815"
	I0819 04:38:42.425735   18442 logs.go:123] Gathering logs for coredns [18cd291ffa7e] ...
	I0819 04:38:42.425748   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18cd291ffa7e"
	I0819 04:38:42.437615   18442 logs.go:123] Gathering logs for kube-scheduler [7e19570fc95e] ...
	I0819 04:38:42.437628   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e19570fc95e"
	I0819 04:38:44.954864   18442 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 04:38:49.957605   18442 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 04:38:49.957978   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 04:38:50.006020   18442 logs.go:276] 1 containers: [3532d65b8a5a]
	I0819 04:38:50.006134   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 04:38:50.023892   18442 logs.go:276] 1 containers: [0e52abab1282]
	I0819 04:38:50.023966   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 04:38:50.037979   18442 logs.go:276] 2 containers: [893df8098815 18cd291ffa7e]
	I0819 04:38:50.038055   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 04:38:50.049386   18442 logs.go:276] 1 containers: [7e19570fc95e]
	I0819 04:38:50.049450   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 04:38:50.060328   18442 logs.go:276] 1 containers: [f4ef51a77896]
	I0819 04:38:50.060406   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 04:38:50.070844   18442 logs.go:276] 1 containers: [71655a8d5baa]
	I0819 04:38:50.070912   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 04:38:50.080655   18442 logs.go:276] 0 containers: []
	W0819 04:38:50.080664   18442 logs.go:278] No container was found matching "kindnet"
	I0819 04:38:50.080713   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 04:38:50.090843   18442 logs.go:276] 1 containers: [74cf69aed6bf]
	I0819 04:38:50.090858   18442 logs.go:123] Gathering logs for kube-controller-manager [71655a8d5baa] ...
	I0819 04:38:50.090865   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 71655a8d5baa"
	I0819 04:38:50.112542   18442 logs.go:123] Gathering logs for storage-provisioner [74cf69aed6bf] ...
	I0819 04:38:50.112556   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74cf69aed6bf"
	I0819 04:38:50.124635   18442 logs.go:123] Gathering logs for kube-apiserver [3532d65b8a5a] ...
	I0819 04:38:50.124648   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3532d65b8a5a"
	I0819 04:38:50.143045   18442 logs.go:123] Gathering logs for etcd [0e52abab1282] ...
	I0819 04:38:50.143059   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e52abab1282"
	I0819 04:38:50.156652   18442 logs.go:123] Gathering logs for kube-scheduler [7e19570fc95e] ...
	I0819 04:38:50.156665   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e19570fc95e"
	I0819 04:38:50.172137   18442 logs.go:123] Gathering logs for kube-proxy [f4ef51a77896] ...
	I0819 04:38:50.172147   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f4ef51a77896"
	I0819 04:38:50.183986   18442 logs.go:123] Gathering logs for coredns [18cd291ffa7e] ...
	I0819 04:38:50.183999   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18cd291ffa7e"
	I0819 04:38:50.195552   18442 logs.go:123] Gathering logs for Docker ...
	I0819 04:38:50.195570   18442 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 04:38:50.218985   18442 logs.go:123] Gathering logs for container status ...
	I0819 04:38:50.218993   18442 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 04:38:50.230201   18442 logs.go:123] Gathering logs for kubelet ...
	I0819 04:38:50.230214   18442 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 04:38:50.266912   18442 logs.go:123] Gathering logs for dmesg ...
	I0819 04:38:50.266920   18442 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 04:38:50.271237   18442 logs.go:123] Gathering logs for describe nodes ...
	I0819 04:38:50.271245   18442 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 04:38:50.307626   18442 logs.go:123] Gathering logs for coredns [893df8098815] ...
	I0819 04:38:50.307638   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 893df8098815"
	I0819 04:38:52.821752   18442 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 04:38:57.824511   18442 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 04:38:57.824929   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 04:38:57.863018   18442 logs.go:276] 1 containers: [3532d65b8a5a]
	I0819 04:38:57.863146   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 04:38:57.885234   18442 logs.go:276] 1 containers: [0e52abab1282]
	I0819 04:38:57.885344   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 04:38:57.901103   18442 logs.go:276] 2 containers: [893df8098815 18cd291ffa7e]
	I0819 04:38:57.901181   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 04:38:57.913395   18442 logs.go:276] 1 containers: [7e19570fc95e]
	I0819 04:38:57.913455   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 04:38:57.924403   18442 logs.go:276] 1 containers: [f4ef51a77896]
	I0819 04:38:57.924461   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 04:38:57.934705   18442 logs.go:276] 1 containers: [71655a8d5baa]
	I0819 04:38:57.934773   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 04:38:57.944765   18442 logs.go:276] 0 containers: []
	W0819 04:38:57.944779   18442 logs.go:278] No container was found matching "kindnet"
	I0819 04:38:57.944830   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 04:38:57.955092   18442 logs.go:276] 1 containers: [74cf69aed6bf]
	I0819 04:38:57.955110   18442 logs.go:123] Gathering logs for describe nodes ...
	I0819 04:38:57.955115   18442 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 04:38:57.992353   18442 logs.go:123] Gathering logs for coredns [893df8098815] ...
	I0819 04:38:57.992364   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 893df8098815"
	I0819 04:38:58.004216   18442 logs.go:123] Gathering logs for kube-scheduler [7e19570fc95e] ...
	I0819 04:38:58.004228   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e19570fc95e"
	I0819 04:38:58.019165   18442 logs.go:123] Gathering logs for kube-controller-manager [71655a8d5baa] ...
	I0819 04:38:58.019176   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 71655a8d5baa"
	I0819 04:38:58.036985   18442 logs.go:123] Gathering logs for storage-provisioner [74cf69aed6bf] ...
	I0819 04:38:58.036995   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74cf69aed6bf"
	I0819 04:38:58.048187   18442 logs.go:123] Gathering logs for container status ...
	I0819 04:38:58.048198   18442 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 04:38:58.059348   18442 logs.go:123] Gathering logs for Docker ...
	I0819 04:38:58.059359   18442 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 04:38:58.082760   18442 logs.go:123] Gathering logs for kubelet ...
	I0819 04:38:58.082769   18442 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 04:38:58.119503   18442 logs.go:123] Gathering logs for dmesg ...
	I0819 04:38:58.119514   18442 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 04:38:58.124032   18442 logs.go:123] Gathering logs for kube-apiserver [3532d65b8a5a] ...
	I0819 04:38:58.124041   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3532d65b8a5a"
	I0819 04:38:58.138189   18442 logs.go:123] Gathering logs for etcd [0e52abab1282] ...
	I0819 04:38:58.138202   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e52abab1282"
	I0819 04:38:58.152072   18442 logs.go:123] Gathering logs for coredns [18cd291ffa7e] ...
	I0819 04:38:58.152084   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18cd291ffa7e"
	I0819 04:38:58.166740   18442 logs.go:123] Gathering logs for kube-proxy [f4ef51a77896] ...
	I0819 04:38:58.166753   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f4ef51a77896"
	I0819 04:39:00.678827   18442 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 04:39:05.681109   18442 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 04:39:05.681336   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 04:39:05.708181   18442 logs.go:276] 1 containers: [3532d65b8a5a]
	I0819 04:39:05.708283   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 04:39:05.726700   18442 logs.go:276] 1 containers: [0e52abab1282]
	I0819 04:39:05.726777   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 04:39:05.739269   18442 logs.go:276] 2 containers: [893df8098815 18cd291ffa7e]
	I0819 04:39:05.739343   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 04:39:05.751044   18442 logs.go:276] 1 containers: [7e19570fc95e]
	I0819 04:39:05.751103   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 04:39:05.761699   18442 logs.go:276] 1 containers: [f4ef51a77896]
	I0819 04:39:05.761763   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 04:39:05.772333   18442 logs.go:276] 1 containers: [71655a8d5baa]
	I0819 04:39:05.772388   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 04:39:05.782945   18442 logs.go:276] 0 containers: []
	W0819 04:39:05.782955   18442 logs.go:278] No container was found matching "kindnet"
	I0819 04:39:05.783012   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 04:39:05.793479   18442 logs.go:276] 1 containers: [74cf69aed6bf]
	I0819 04:39:05.793495   18442 logs.go:123] Gathering logs for coredns [893df8098815] ...
	I0819 04:39:05.793501   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 893df8098815"
	I0819 04:39:05.805839   18442 logs.go:123] Gathering logs for coredns [18cd291ffa7e] ...
	I0819 04:39:05.805850   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18cd291ffa7e"
	I0819 04:39:05.817594   18442 logs.go:123] Gathering logs for kube-proxy [f4ef51a77896] ...
	I0819 04:39:05.817604   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f4ef51a77896"
	I0819 04:39:05.829454   18442 logs.go:123] Gathering logs for container status ...
	I0819 04:39:05.829466   18442 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 04:39:05.840809   18442 logs.go:123] Gathering logs for kube-apiserver [3532d65b8a5a] ...
	I0819 04:39:05.840818   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3532d65b8a5a"
	I0819 04:39:05.865306   18442 logs.go:123] Gathering logs for etcd [0e52abab1282] ...
	I0819 04:39:05.865314   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e52abab1282"
	I0819 04:39:05.879591   18442 logs.go:123] Gathering logs for describe nodes ...
	I0819 04:39:05.879601   18442 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 04:39:05.913635   18442 logs.go:123] Gathering logs for kube-scheduler [7e19570fc95e] ...
	I0819 04:39:05.913644   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e19570fc95e"
	I0819 04:39:05.929310   18442 logs.go:123] Gathering logs for kube-controller-manager [71655a8d5baa] ...
	I0819 04:39:05.929320   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 71655a8d5baa"
	I0819 04:39:05.946551   18442 logs.go:123] Gathering logs for storage-provisioner [74cf69aed6bf] ...
	I0819 04:39:05.946559   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74cf69aed6bf"
	I0819 04:39:05.961638   18442 logs.go:123] Gathering logs for Docker ...
	I0819 04:39:05.961649   18442 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 04:39:05.985671   18442 logs.go:123] Gathering logs for kubelet ...
	I0819 04:39:05.985681   18442 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 04:39:06.022470   18442 logs.go:123] Gathering logs for dmesg ...
	I0819 04:39:06.022478   18442 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 04:39:08.528612   18442 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 04:39:13.530798   18442 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 04:39:13.531046   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 04:39:13.556366   18442 logs.go:276] 1 containers: [3532d65b8a5a]
	I0819 04:39:13.556492   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 04:39:13.576026   18442 logs.go:276] 1 containers: [0e52abab1282]
	I0819 04:39:13.576103   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 04:39:13.590542   18442 logs.go:276] 2 containers: [893df8098815 18cd291ffa7e]
	I0819 04:39:13.590614   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 04:39:13.601520   18442 logs.go:276] 1 containers: [7e19570fc95e]
	I0819 04:39:13.601589   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 04:39:13.612012   18442 logs.go:276] 1 containers: [f4ef51a77896]
	I0819 04:39:13.612080   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 04:39:13.622113   18442 logs.go:276] 1 containers: [71655a8d5baa]
	I0819 04:39:13.622172   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 04:39:13.631924   18442 logs.go:276] 0 containers: []
	W0819 04:39:13.631936   18442 logs.go:278] No container was found matching "kindnet"
	I0819 04:39:13.631990   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 04:39:13.642630   18442 logs.go:276] 1 containers: [74cf69aed6bf]
	I0819 04:39:13.642644   18442 logs.go:123] Gathering logs for kubelet ...
	I0819 04:39:13.642648   18442 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 04:39:13.681769   18442 logs.go:123] Gathering logs for describe nodes ...
	I0819 04:39:13.681776   18442 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 04:39:13.725318   18442 logs.go:123] Gathering logs for kube-apiserver [3532d65b8a5a] ...
	I0819 04:39:13.725333   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3532d65b8a5a"
	I0819 04:39:13.740165   18442 logs.go:123] Gathering logs for etcd [0e52abab1282] ...
	I0819 04:39:13.740177   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e52abab1282"
	I0819 04:39:13.754104   18442 logs.go:123] Gathering logs for kube-scheduler [7e19570fc95e] ...
	I0819 04:39:13.754113   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e19570fc95e"
	I0819 04:39:13.773140   18442 logs.go:123] Gathering logs for kube-controller-manager [71655a8d5baa] ...
	I0819 04:39:13.773153   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 71655a8d5baa"
	I0819 04:39:13.790139   18442 logs.go:123] Gathering logs for Docker ...
	I0819 04:39:13.790147   18442 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 04:39:13.813766   18442 logs.go:123] Gathering logs for dmesg ...
	I0819 04:39:13.813774   18442 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 04:39:13.817651   18442 logs.go:123] Gathering logs for coredns [893df8098815] ...
	I0819 04:39:13.817657   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 893df8098815"
	I0819 04:39:13.829169   18442 logs.go:123] Gathering logs for coredns [18cd291ffa7e] ...
	I0819 04:39:13.829179   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18cd291ffa7e"
	I0819 04:39:13.841387   18442 logs.go:123] Gathering logs for kube-proxy [f4ef51a77896] ...
	I0819 04:39:13.841397   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f4ef51a77896"
	I0819 04:39:13.852619   18442 logs.go:123] Gathering logs for storage-provisioner [74cf69aed6bf] ...
	I0819 04:39:13.852630   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74cf69aed6bf"
	I0819 04:39:13.864183   18442 logs.go:123] Gathering logs for container status ...
	I0819 04:39:13.864197   18442 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 04:39:16.376054   18442 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 04:39:21.378503   18442 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 04:39:21.378675   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 04:39:21.392790   18442 logs.go:276] 1 containers: [3532d65b8a5a]
	I0819 04:39:21.392870   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 04:39:21.404868   18442 logs.go:276] 1 containers: [0e52abab1282]
	I0819 04:39:21.404933   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 04:39:21.414907   18442 logs.go:276] 3 containers: [c75fef96db8a 893df8098815 18cd291ffa7e]
	I0819 04:39:21.414974   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 04:39:21.424630   18442 logs.go:276] 1 containers: [7e19570fc95e]
	I0819 04:39:21.424696   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 04:39:21.446333   18442 logs.go:276] 1 containers: [f4ef51a77896]
	I0819 04:39:21.446403   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 04:39:21.456761   18442 logs.go:276] 1 containers: [71655a8d5baa]
	I0819 04:39:21.456827   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 04:39:21.467842   18442 logs.go:276] 0 containers: []
	W0819 04:39:21.467855   18442 logs.go:278] No container was found matching "kindnet"
	I0819 04:39:21.467914   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 04:39:21.478627   18442 logs.go:276] 1 containers: [74cf69aed6bf]
	I0819 04:39:21.478645   18442 logs.go:123] Gathering logs for kubelet ...
	I0819 04:39:21.478651   18442 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 04:39:21.515549   18442 logs.go:123] Gathering logs for coredns [c75fef96db8a] ...
	I0819 04:39:21.515559   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c75fef96db8a"
	I0819 04:39:21.526896   18442 logs.go:123] Gathering logs for Docker ...
	I0819 04:39:21.526910   18442 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 04:39:21.551812   18442 logs.go:123] Gathering logs for container status ...
	I0819 04:39:21.551820   18442 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 04:39:21.564732   18442 logs.go:123] Gathering logs for kube-apiserver [3532d65b8a5a] ...
	I0819 04:39:21.564746   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3532d65b8a5a"
	I0819 04:39:21.578641   18442 logs.go:123] Gathering logs for etcd [0e52abab1282] ...
	I0819 04:39:21.578653   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e52abab1282"
	I0819 04:39:21.592432   18442 logs.go:123] Gathering logs for dmesg ...
	I0819 04:39:21.592445   18442 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 04:39:21.596693   18442 logs.go:123] Gathering logs for kube-scheduler [7e19570fc95e] ...
	I0819 04:39:21.596701   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e19570fc95e"
	I0819 04:39:21.612081   18442 logs.go:123] Gathering logs for kube-proxy [f4ef51a77896] ...
	I0819 04:39:21.612094   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f4ef51a77896"
	I0819 04:39:21.623407   18442 logs.go:123] Gathering logs for kube-controller-manager [71655a8d5baa] ...
	I0819 04:39:21.623419   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 71655a8d5baa"
	I0819 04:39:21.641078   18442 logs.go:123] Gathering logs for describe nodes ...
	I0819 04:39:21.641088   18442 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 04:39:21.674756   18442 logs.go:123] Gathering logs for coredns [893df8098815] ...
	I0819 04:39:21.674770   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 893df8098815"
	I0819 04:39:21.686449   18442 logs.go:123] Gathering logs for coredns [18cd291ffa7e] ...
	I0819 04:39:21.686462   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18cd291ffa7e"
	I0819 04:39:21.698079   18442 logs.go:123] Gathering logs for storage-provisioner [74cf69aed6bf] ...
	I0819 04:39:21.698091   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74cf69aed6bf"
	I0819 04:39:24.211428   18442 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 04:39:29.213928   18442 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 04:39:29.213998   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 04:39:29.225093   18442 logs.go:276] 1 containers: [3532d65b8a5a]
	I0819 04:39:29.225161   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 04:39:29.235973   18442 logs.go:276] 1 containers: [0e52abab1282]
	I0819 04:39:29.236038   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 04:39:29.253109   18442 logs.go:276] 4 containers: [006414e2b5ad c75fef96db8a 893df8098815 18cd291ffa7e]
	I0819 04:39:29.253175   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 04:39:29.263920   18442 logs.go:276] 1 containers: [7e19570fc95e]
	I0819 04:39:29.263974   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 04:39:29.275338   18442 logs.go:276] 1 containers: [f4ef51a77896]
	I0819 04:39:29.275417   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 04:39:29.286219   18442 logs.go:276] 1 containers: [71655a8d5baa]
	I0819 04:39:29.286288   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 04:39:29.295974   18442 logs.go:276] 0 containers: []
	W0819 04:39:29.295987   18442 logs.go:278] No container was found matching "kindnet"
	I0819 04:39:29.296033   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 04:39:29.306414   18442 logs.go:276] 1 containers: [74cf69aed6bf]
	I0819 04:39:29.306430   18442 logs.go:123] Gathering logs for kubelet ...
	I0819 04:39:29.306435   18442 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 04:39:29.345604   18442 logs.go:123] Gathering logs for coredns [18cd291ffa7e] ...
	I0819 04:39:29.345613   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18cd291ffa7e"
	I0819 04:39:29.356998   18442 logs.go:123] Gathering logs for kube-scheduler [7e19570fc95e] ...
	I0819 04:39:29.357008   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e19570fc95e"
	I0819 04:39:29.373560   18442 logs.go:123] Gathering logs for describe nodes ...
	I0819 04:39:29.373574   18442 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 04:39:29.408743   18442 logs.go:123] Gathering logs for etcd [0e52abab1282] ...
	I0819 04:39:29.408753   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e52abab1282"
	I0819 04:39:29.422857   18442 logs.go:123] Gathering logs for kube-proxy [f4ef51a77896] ...
	I0819 04:39:29.422870   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f4ef51a77896"
	I0819 04:39:29.435052   18442 logs.go:123] Gathering logs for storage-provisioner [74cf69aed6bf] ...
	I0819 04:39:29.435063   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74cf69aed6bf"
	I0819 04:39:29.446909   18442 logs.go:123] Gathering logs for coredns [893df8098815] ...
	I0819 04:39:29.446923   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 893df8098815"
	I0819 04:39:29.460063   18442 logs.go:123] Gathering logs for kube-controller-manager [71655a8d5baa] ...
	I0819 04:39:29.460077   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 71655a8d5baa"
	I0819 04:39:29.478816   18442 logs.go:123] Gathering logs for Docker ...
	I0819 04:39:29.478827   18442 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 04:39:29.504821   18442 logs.go:123] Gathering logs for container status ...
	I0819 04:39:29.504831   18442 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 04:39:29.517050   18442 logs.go:123] Gathering logs for dmesg ...
	I0819 04:39:29.517062   18442 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 04:39:29.521361   18442 logs.go:123] Gathering logs for kube-apiserver [3532d65b8a5a] ...
	I0819 04:39:29.521367   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3532d65b8a5a"
	I0819 04:39:29.535912   18442 logs.go:123] Gathering logs for coredns [006414e2b5ad] ...
	I0819 04:39:29.535924   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 006414e2b5ad"
	I0819 04:39:29.547216   18442 logs.go:123] Gathering logs for coredns [c75fef96db8a] ...
	I0819 04:39:29.547227   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c75fef96db8a"
	I0819 04:39:32.060403   18442 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 04:39:37.063211   18442 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 04:39:37.063614   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 04:39:37.102472   18442 logs.go:276] 1 containers: [3532d65b8a5a]
	I0819 04:39:37.102603   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 04:39:37.131051   18442 logs.go:276] 1 containers: [0e52abab1282]
	I0819 04:39:37.131157   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 04:39:37.152913   18442 logs.go:276] 4 containers: [006414e2b5ad c75fef96db8a 893df8098815 18cd291ffa7e]
	I0819 04:39:37.152984   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 04:39:37.164194   18442 logs.go:276] 1 containers: [7e19570fc95e]
	I0819 04:39:37.164264   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 04:39:37.174943   18442 logs.go:276] 1 containers: [f4ef51a77896]
	I0819 04:39:37.175015   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 04:39:37.185631   18442 logs.go:276] 1 containers: [71655a8d5baa]
	I0819 04:39:37.185690   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 04:39:37.196437   18442 logs.go:276] 0 containers: []
	W0819 04:39:37.196449   18442 logs.go:278] No container was found matching "kindnet"
	I0819 04:39:37.196513   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 04:39:37.206979   18442 logs.go:276] 1 containers: [74cf69aed6bf]
	I0819 04:39:37.206994   18442 logs.go:123] Gathering logs for kube-proxy [f4ef51a77896] ...
	I0819 04:39:37.206998   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f4ef51a77896"
	I0819 04:39:37.226689   18442 logs.go:123] Gathering logs for coredns [c75fef96db8a] ...
	I0819 04:39:37.226700   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c75fef96db8a"
	I0819 04:39:37.238212   18442 logs.go:123] Gathering logs for coredns [18cd291ffa7e] ...
	I0819 04:39:37.238226   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18cd291ffa7e"
	I0819 04:39:37.249483   18442 logs.go:123] Gathering logs for storage-provisioner [74cf69aed6bf] ...
	I0819 04:39:37.249495   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74cf69aed6bf"
	I0819 04:39:37.261075   18442 logs.go:123] Gathering logs for container status ...
	I0819 04:39:37.261089   18442 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 04:39:37.273353   18442 logs.go:123] Gathering logs for describe nodes ...
	I0819 04:39:37.273366   18442 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 04:39:37.307610   18442 logs.go:123] Gathering logs for coredns [893df8098815] ...
	I0819 04:39:37.307622   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 893df8098815"
	I0819 04:39:37.319240   18442 logs.go:123] Gathering logs for etcd [0e52abab1282] ...
	I0819 04:39:37.319253   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e52abab1282"
	I0819 04:39:37.332957   18442 logs.go:123] Gathering logs for coredns [006414e2b5ad] ...
	I0819 04:39:37.332968   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 006414e2b5ad"
	I0819 04:39:37.344241   18442 logs.go:123] Gathering logs for kube-scheduler [7e19570fc95e] ...
	I0819 04:39:37.344252   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e19570fc95e"
	I0819 04:39:37.359377   18442 logs.go:123] Gathering logs for kube-controller-manager [71655a8d5baa] ...
	I0819 04:39:37.359389   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 71655a8d5baa"
	I0819 04:39:37.377650   18442 logs.go:123] Gathering logs for dmesg ...
	I0819 04:39:37.377663   18442 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 04:39:37.382025   18442 logs.go:123] Gathering logs for kube-apiserver [3532d65b8a5a] ...
	I0819 04:39:37.382034   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3532d65b8a5a"
	I0819 04:39:37.400354   18442 logs.go:123] Gathering logs for kubelet ...
	I0819 04:39:37.400366   18442 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 04:39:37.436773   18442 logs.go:123] Gathering logs for Docker ...
	I0819 04:39:37.436785   18442 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 04:39:39.961757   18442 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 04:39:44.971328   18442 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 04:39:44.971668   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 04:39:45.000954   18442 logs.go:276] 1 containers: [3532d65b8a5a]
	I0819 04:39:45.001076   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 04:39:45.018168   18442 logs.go:276] 1 containers: [0e52abab1282]
	I0819 04:39:45.018256   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 04:39:45.032123   18442 logs.go:276] 4 containers: [006414e2b5ad c75fef96db8a 893df8098815 18cd291ffa7e]
	I0819 04:39:45.032198   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 04:39:45.043804   18442 logs.go:276] 1 containers: [7e19570fc95e]
	I0819 04:39:45.043871   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 04:39:45.054003   18442 logs.go:276] 1 containers: [f4ef51a77896]
	I0819 04:39:45.054069   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 04:39:45.065347   18442 logs.go:276] 1 containers: [71655a8d5baa]
	I0819 04:39:45.065410   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 04:39:45.076058   18442 logs.go:276] 0 containers: []
	W0819 04:39:45.076071   18442 logs.go:278] No container was found matching "kindnet"
	I0819 04:39:45.076122   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 04:39:45.086413   18442 logs.go:276] 1 containers: [74cf69aed6bf]
	I0819 04:39:45.086430   18442 logs.go:123] Gathering logs for dmesg ...
	I0819 04:39:45.086435   18442 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 04:39:45.091118   18442 logs.go:123] Gathering logs for describe nodes ...
	I0819 04:39:45.091126   18442 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 04:39:45.130419   18442 logs.go:123] Gathering logs for etcd [0e52abab1282] ...
	I0819 04:39:45.130431   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e52abab1282"
	I0819 04:39:45.144485   18442 logs.go:123] Gathering logs for kubelet ...
	I0819 04:39:45.144499   18442 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 04:39:45.181711   18442 logs.go:123] Gathering logs for kube-apiserver [3532d65b8a5a] ...
	I0819 04:39:45.181722   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3532d65b8a5a"
	I0819 04:39:45.195816   18442 logs.go:123] Gathering logs for coredns [893df8098815] ...
	I0819 04:39:45.195830   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 893df8098815"
	I0819 04:39:45.212235   18442 logs.go:123] Gathering logs for kube-proxy [f4ef51a77896] ...
	I0819 04:39:45.212249   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f4ef51a77896"
	I0819 04:39:45.224056   18442 logs.go:123] Gathering logs for coredns [c75fef96db8a] ...
	I0819 04:39:45.224069   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c75fef96db8a"
	I0819 04:39:45.235384   18442 logs.go:123] Gathering logs for kube-scheduler [7e19570fc95e] ...
	I0819 04:39:45.235396   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e19570fc95e"
	I0819 04:39:45.253185   18442 logs.go:123] Gathering logs for storage-provisioner [74cf69aed6bf] ...
	I0819 04:39:45.253198   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74cf69aed6bf"
	I0819 04:39:45.268621   18442 logs.go:123] Gathering logs for Docker ...
	I0819 04:39:45.268632   18442 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 04:39:45.292134   18442 logs.go:123] Gathering logs for coredns [006414e2b5ad] ...
	I0819 04:39:45.292142   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 006414e2b5ad"
	I0819 04:39:45.304512   18442 logs.go:123] Gathering logs for coredns [18cd291ffa7e] ...
	I0819 04:39:45.304524   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18cd291ffa7e"
	I0819 04:39:45.315682   18442 logs.go:123] Gathering logs for kube-controller-manager [71655a8d5baa] ...
	I0819 04:39:45.315695   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 71655a8d5baa"
	I0819 04:39:45.333026   18442 logs.go:123] Gathering logs for container status ...
	I0819 04:39:45.333037   18442 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 04:39:47.849896   18442 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 04:39:52.855004   18442 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 04:39:52.855060   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 04:39:52.867614   18442 logs.go:276] 1 containers: [3532d65b8a5a]
	I0819 04:39:52.867678   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 04:39:52.881527   18442 logs.go:276] 1 containers: [0e52abab1282]
	I0819 04:39:52.881588   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 04:39:52.892977   18442 logs.go:276] 4 containers: [006414e2b5ad c75fef96db8a 893df8098815 18cd291ffa7e]
	I0819 04:39:52.893038   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 04:39:52.903611   18442 logs.go:276] 1 containers: [7e19570fc95e]
	I0819 04:39:52.903672   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 04:39:52.915042   18442 logs.go:276] 1 containers: [f4ef51a77896]
	I0819 04:39:52.915108   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 04:39:52.927400   18442 logs.go:276] 1 containers: [71655a8d5baa]
	I0819 04:39:52.927450   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 04:39:52.937302   18442 logs.go:276] 0 containers: []
	W0819 04:39:52.937312   18442 logs.go:278] No container was found matching "kindnet"
	I0819 04:39:52.937363   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 04:39:52.948233   18442 logs.go:276] 1 containers: [74cf69aed6bf]
	I0819 04:39:52.948248   18442 logs.go:123] Gathering logs for kube-controller-manager [71655a8d5baa] ...
	I0819 04:39:52.948253   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 71655a8d5baa"
	I0819 04:39:52.975047   18442 logs.go:123] Gathering logs for Docker ...
	I0819 04:39:52.975055   18442 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 04:39:52.999731   18442 logs.go:123] Gathering logs for dmesg ...
	I0819 04:39:52.999746   18442 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 04:39:53.005497   18442 logs.go:123] Gathering logs for coredns [893df8098815] ...
	I0819 04:39:53.005511   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 893df8098815"
	I0819 04:39:53.017811   18442 logs.go:123] Gathering logs for coredns [18cd291ffa7e] ...
	I0819 04:39:53.017996   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18cd291ffa7e"
	I0819 04:39:53.031838   18442 logs.go:123] Gathering logs for kube-scheduler [7e19570fc95e] ...
	I0819 04:39:53.031851   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e19570fc95e"
	I0819 04:39:53.048984   18442 logs.go:123] Gathering logs for kubelet ...
	I0819 04:39:53.049000   18442 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 04:39:53.088482   18442 logs.go:123] Gathering logs for etcd [0e52abab1282] ...
	I0819 04:39:53.088499   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e52abab1282"
	I0819 04:39:53.103300   18442 logs.go:123] Gathering logs for coredns [c75fef96db8a] ...
	I0819 04:39:53.103312   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c75fef96db8a"
	I0819 04:39:53.116695   18442 logs.go:123] Gathering logs for kube-proxy [f4ef51a77896] ...
	I0819 04:39:53.116705   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f4ef51a77896"
	I0819 04:39:53.128454   18442 logs.go:123] Gathering logs for storage-provisioner [74cf69aed6bf] ...
	I0819 04:39:53.128463   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74cf69aed6bf"
	I0819 04:39:53.140191   18442 logs.go:123] Gathering logs for container status ...
	I0819 04:39:53.140204   18442 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 04:39:53.152686   18442 logs.go:123] Gathering logs for describe nodes ...
	I0819 04:39:53.152697   18442 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 04:39:53.195676   18442 logs.go:123] Gathering logs for coredns [006414e2b5ad] ...
	I0819 04:39:53.195686   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 006414e2b5ad"
	I0819 04:39:53.207544   18442 logs.go:123] Gathering logs for kube-apiserver [3532d65b8a5a] ...
	I0819 04:39:53.207552   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3532d65b8a5a"
	I0819 04:39:55.726100   18442 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 04:40:00.731243   18442 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 04:40:00.731352   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 04:40:00.744679   18442 logs.go:276] 1 containers: [3532d65b8a5a]
	I0819 04:40:00.744753   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 04:40:00.756954   18442 logs.go:276] 1 containers: [0e52abab1282]
	I0819 04:40:00.757030   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 04:40:00.769409   18442 logs.go:276] 4 containers: [006414e2b5ad c75fef96db8a 893df8098815 18cd291ffa7e]
	I0819 04:40:00.769482   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 04:40:00.782694   18442 logs.go:276] 1 containers: [7e19570fc95e]
	I0819 04:40:00.782770   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 04:40:00.794727   18442 logs.go:276] 1 containers: [f4ef51a77896]
	I0819 04:40:00.794806   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 04:40:00.807041   18442 logs.go:276] 1 containers: [71655a8d5baa]
	I0819 04:40:00.807116   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 04:40:00.826264   18442 logs.go:276] 0 containers: []
	W0819 04:40:00.826276   18442 logs.go:278] No container was found matching "kindnet"
	I0819 04:40:00.826337   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 04:40:00.838667   18442 logs.go:276] 1 containers: [74cf69aed6bf]
	I0819 04:40:00.838690   18442 logs.go:123] Gathering logs for dmesg ...
	I0819 04:40:00.838697   18442 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 04:40:00.843729   18442 logs.go:123] Gathering logs for coredns [006414e2b5ad] ...
	I0819 04:40:00.843742   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 006414e2b5ad"
	I0819 04:40:00.859766   18442 logs.go:123] Gathering logs for etcd [0e52abab1282] ...
	I0819 04:40:00.859778   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e52abab1282"
	I0819 04:40:00.874111   18442 logs.go:123] Gathering logs for coredns [893df8098815] ...
	I0819 04:40:00.874121   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 893df8098815"
	I0819 04:40:00.886107   18442 logs.go:123] Gathering logs for coredns [18cd291ffa7e] ...
	I0819 04:40:00.886121   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18cd291ffa7e"
	I0819 04:40:00.897962   18442 logs.go:123] Gathering logs for kube-controller-manager [71655a8d5baa] ...
	I0819 04:40:00.897972   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 71655a8d5baa"
	I0819 04:40:00.921450   18442 logs.go:123] Gathering logs for describe nodes ...
	I0819 04:40:00.921459   18442 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 04:40:00.964579   18442 logs.go:123] Gathering logs for kube-apiserver [3532d65b8a5a] ...
	I0819 04:40:00.964596   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3532d65b8a5a"
	I0819 04:40:00.980155   18442 logs.go:123] Gathering logs for kube-proxy [f4ef51a77896] ...
	I0819 04:40:00.980164   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f4ef51a77896"
	I0819 04:40:00.992714   18442 logs.go:123] Gathering logs for storage-provisioner [74cf69aed6bf] ...
	I0819 04:40:00.992727   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74cf69aed6bf"
	I0819 04:40:01.004066   18442 logs.go:123] Gathering logs for Docker ...
	I0819 04:40:01.004076   18442 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 04:40:01.027620   18442 logs.go:123] Gathering logs for container status ...
	I0819 04:40:01.027628   18442 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 04:40:01.041962   18442 logs.go:123] Gathering logs for kubelet ...
	I0819 04:40:01.041974   18442 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 04:40:01.079218   18442 logs.go:123] Gathering logs for coredns [c75fef96db8a] ...
	I0819 04:40:01.079229   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c75fef96db8a"
	I0819 04:40:01.090566   18442 logs.go:123] Gathering logs for kube-scheduler [7e19570fc95e] ...
	I0819 04:40:01.090579   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e19570fc95e"
	I0819 04:40:03.607952   18442 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 04:40:08.611928   18442 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 04:40:08.612342   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 04:40:08.648791   18442 logs.go:276] 1 containers: [3532d65b8a5a]
	I0819 04:40:08.648915   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 04:40:08.672635   18442 logs.go:276] 1 containers: [0e52abab1282]
	I0819 04:40:08.672715   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 04:40:08.686637   18442 logs.go:276] 4 containers: [006414e2b5ad c75fef96db8a 893df8098815 18cd291ffa7e]
	I0819 04:40:08.686704   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 04:40:08.698027   18442 logs.go:276] 1 containers: [7e19570fc95e]
	I0819 04:40:08.698093   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 04:40:08.709402   18442 logs.go:276] 1 containers: [f4ef51a77896]
	I0819 04:40:08.709460   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 04:40:08.720505   18442 logs.go:276] 1 containers: [71655a8d5baa]
	I0819 04:40:08.720573   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 04:40:08.732903   18442 logs.go:276] 0 containers: []
	W0819 04:40:08.732915   18442 logs.go:278] No container was found matching "kindnet"
	I0819 04:40:08.732971   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 04:40:08.760260   18442 logs.go:276] 1 containers: [74cf69aed6bf]
	I0819 04:40:08.760279   18442 logs.go:123] Gathering logs for kubelet ...
	I0819 04:40:08.760284   18442 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 04:40:08.797883   18442 logs.go:123] Gathering logs for kube-apiserver [3532d65b8a5a] ...
	I0819 04:40:08.797894   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3532d65b8a5a"
	I0819 04:40:08.813083   18442 logs.go:123] Gathering logs for coredns [c75fef96db8a] ...
	I0819 04:40:08.813094   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c75fef96db8a"
	I0819 04:40:08.825138   18442 logs.go:123] Gathering logs for etcd [0e52abab1282] ...
	I0819 04:40:08.825150   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e52abab1282"
	I0819 04:40:08.840078   18442 logs.go:123] Gathering logs for coredns [18cd291ffa7e] ...
	I0819 04:40:08.840091   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18cd291ffa7e"
	I0819 04:40:08.851938   18442 logs.go:123] Gathering logs for kube-scheduler [7e19570fc95e] ...
	I0819 04:40:08.851952   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e19570fc95e"
	I0819 04:40:08.868416   18442 logs.go:123] Gathering logs for kube-controller-manager [71655a8d5baa] ...
	I0819 04:40:08.868428   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 71655a8d5baa"
	I0819 04:40:08.890742   18442 logs.go:123] Gathering logs for Docker ...
	I0819 04:40:08.890752   18442 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 04:40:08.915325   18442 logs.go:123] Gathering logs for container status ...
	I0819 04:40:08.915335   18442 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 04:40:08.927224   18442 logs.go:123] Gathering logs for describe nodes ...
	I0819 04:40:08.927235   18442 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 04:40:08.966360   18442 logs.go:123] Gathering logs for coredns [893df8098815] ...
	I0819 04:40:08.966372   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 893df8098815"
	I0819 04:40:08.978346   18442 logs.go:123] Gathering logs for kube-proxy [f4ef51a77896] ...
	I0819 04:40:08.978359   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f4ef51a77896"
	I0819 04:40:08.990728   18442 logs.go:123] Gathering logs for storage-provisioner [74cf69aed6bf] ...
	I0819 04:40:08.990741   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74cf69aed6bf"
	I0819 04:40:09.001815   18442 logs.go:123] Gathering logs for dmesg ...
	I0819 04:40:09.001828   18442 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 04:40:09.006740   18442 logs.go:123] Gathering logs for coredns [006414e2b5ad] ...
	I0819 04:40:09.006748   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 006414e2b5ad"
	I0819 04:40:11.525413   18442 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 04:40:16.529147   18442 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 04:40:16.529213   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 04:40:16.542051   18442 logs.go:276] 1 containers: [3532d65b8a5a]
	I0819 04:40:16.542128   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 04:40:16.553814   18442 logs.go:276] 1 containers: [0e52abab1282]
	I0819 04:40:16.553865   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 04:40:16.564571   18442 logs.go:276] 4 containers: [006414e2b5ad c75fef96db8a 893df8098815 18cd291ffa7e]
	I0819 04:40:16.564650   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 04:40:16.576475   18442 logs.go:276] 1 containers: [7e19570fc95e]
	I0819 04:40:16.576539   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 04:40:16.592441   18442 logs.go:276] 1 containers: [f4ef51a77896]
	I0819 04:40:16.592497   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 04:40:16.604024   18442 logs.go:276] 1 containers: [71655a8d5baa]
	I0819 04:40:16.604079   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 04:40:16.615378   18442 logs.go:276] 0 containers: []
	W0819 04:40:16.615391   18442 logs.go:278] No container was found matching "kindnet"
	I0819 04:40:16.615449   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 04:40:16.626781   18442 logs.go:276] 1 containers: [74cf69aed6bf]
	I0819 04:40:16.626798   18442 logs.go:123] Gathering logs for kube-apiserver [3532d65b8a5a] ...
	I0819 04:40:16.626803   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3532d65b8a5a"
	I0819 04:40:16.643735   18442 logs.go:123] Gathering logs for etcd [0e52abab1282] ...
	I0819 04:40:16.643744   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e52abab1282"
	I0819 04:40:16.658179   18442 logs.go:123] Gathering logs for coredns [18cd291ffa7e] ...
	I0819 04:40:16.658193   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18cd291ffa7e"
	I0819 04:40:16.675652   18442 logs.go:123] Gathering logs for storage-provisioner [74cf69aed6bf] ...
	I0819 04:40:16.675664   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74cf69aed6bf"
	I0819 04:40:16.688765   18442 logs.go:123] Gathering logs for Docker ...
	I0819 04:40:16.688776   18442 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 04:40:16.712910   18442 logs.go:123] Gathering logs for coredns [893df8098815] ...
	I0819 04:40:16.712923   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 893df8098815"
	I0819 04:40:16.726164   18442 logs.go:123] Gathering logs for kube-scheduler [7e19570fc95e] ...
	I0819 04:40:16.726176   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e19570fc95e"
	I0819 04:40:16.749467   18442 logs.go:123] Gathering logs for kube-controller-manager [71655a8d5baa] ...
	I0819 04:40:16.749480   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 71655a8d5baa"
	I0819 04:40:16.769005   18442 logs.go:123] Gathering logs for kubelet ...
	I0819 04:40:16.769021   18442 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 04:40:16.809288   18442 logs.go:123] Gathering logs for dmesg ...
	I0819 04:40:16.809303   18442 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 04:40:16.814350   18442 logs.go:123] Gathering logs for coredns [006414e2b5ad] ...
	I0819 04:40:16.814360   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 006414e2b5ad"
	I0819 04:40:16.826337   18442 logs.go:123] Gathering logs for describe nodes ...
	I0819 04:40:16.826346   18442 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 04:40:16.863542   18442 logs.go:123] Gathering logs for coredns [c75fef96db8a] ...
	I0819 04:40:16.863551   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c75fef96db8a"
	I0819 04:40:16.877211   18442 logs.go:123] Gathering logs for kube-proxy [f4ef51a77896] ...
	I0819 04:40:16.877223   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f4ef51a77896"
	I0819 04:40:16.890842   18442 logs.go:123] Gathering logs for container status ...
	I0819 04:40:16.890852   18442 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 04:40:19.408425   18442 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 04:40:24.411006   18442 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 04:40:24.411248   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 04:40:24.432751   18442 logs.go:276] 1 containers: [3532d65b8a5a]
	I0819 04:40:24.432857   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 04:40:24.448528   18442 logs.go:276] 1 containers: [0e52abab1282]
	I0819 04:40:24.448595   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 04:40:24.461159   18442 logs.go:276] 4 containers: [006414e2b5ad c75fef96db8a 893df8098815 18cd291ffa7e]
	I0819 04:40:24.461239   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 04:40:24.472761   18442 logs.go:276] 1 containers: [7e19570fc95e]
	I0819 04:40:24.472822   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 04:40:24.483961   18442 logs.go:276] 1 containers: [f4ef51a77896]
	I0819 04:40:24.484030   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 04:40:24.494584   18442 logs.go:276] 1 containers: [71655a8d5baa]
	I0819 04:40:24.494659   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 04:40:24.504921   18442 logs.go:276] 0 containers: []
	W0819 04:40:24.504932   18442 logs.go:278] No container was found matching "kindnet"
	I0819 04:40:24.504987   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 04:40:24.515645   18442 logs.go:276] 1 containers: [74cf69aed6bf]
	I0819 04:40:24.515661   18442 logs.go:123] Gathering logs for kube-apiserver [3532d65b8a5a] ...
	I0819 04:40:24.515666   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3532d65b8a5a"
	I0819 04:40:24.530162   18442 logs.go:123] Gathering logs for storage-provisioner [74cf69aed6bf] ...
	I0819 04:40:24.530174   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74cf69aed6bf"
	I0819 04:40:24.541443   18442 logs.go:123] Gathering logs for kube-controller-manager [71655a8d5baa] ...
	I0819 04:40:24.541453   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 71655a8d5baa"
	I0819 04:40:24.559538   18442 logs.go:123] Gathering logs for Docker ...
	I0819 04:40:24.559552   18442 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 04:40:24.584578   18442 logs.go:123] Gathering logs for kubelet ...
	I0819 04:40:24.584589   18442 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 04:40:24.621409   18442 logs.go:123] Gathering logs for coredns [893df8098815] ...
	I0819 04:40:24.621420   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 893df8098815"
	I0819 04:40:24.633094   18442 logs.go:123] Gathering logs for kube-proxy [f4ef51a77896] ...
	I0819 04:40:24.633102   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f4ef51a77896"
	I0819 04:40:24.644528   18442 logs.go:123] Gathering logs for etcd [0e52abab1282] ...
	I0819 04:40:24.644540   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e52abab1282"
	I0819 04:40:24.658386   18442 logs.go:123] Gathering logs for coredns [18cd291ffa7e] ...
	I0819 04:40:24.658399   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18cd291ffa7e"
	I0819 04:40:24.670335   18442 logs.go:123] Gathering logs for coredns [006414e2b5ad] ...
	I0819 04:40:24.670345   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 006414e2b5ad"
	I0819 04:40:24.681829   18442 logs.go:123] Gathering logs for coredns [c75fef96db8a] ...
	I0819 04:40:24.681839   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c75fef96db8a"
	I0819 04:40:24.693606   18442 logs.go:123] Gathering logs for kube-scheduler [7e19570fc95e] ...
	I0819 04:40:24.693617   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e19570fc95e"
	I0819 04:40:24.708964   18442 logs.go:123] Gathering logs for container status ...
	I0819 04:40:24.708974   18442 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 04:40:24.720233   18442 logs.go:123] Gathering logs for dmesg ...
	I0819 04:40:24.720248   18442 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 04:40:24.724895   18442 logs.go:123] Gathering logs for describe nodes ...
	I0819 04:40:24.724904   18442 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 04:40:27.260530   18442 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 04:40:32.263078   18442 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 04:40:32.263480   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 04:40:32.301743   18442 logs.go:276] 1 containers: [3532d65b8a5a]
	I0819 04:40:32.301877   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 04:40:32.322258   18442 logs.go:276] 1 containers: [0e52abab1282]
	I0819 04:40:32.322360   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 04:40:32.337313   18442 logs.go:276] 4 containers: [006414e2b5ad c75fef96db8a 893df8098815 18cd291ffa7e]
	I0819 04:40:32.337386   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 04:40:32.349442   18442 logs.go:276] 1 containers: [7e19570fc95e]
	I0819 04:40:32.349501   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 04:40:32.360242   18442 logs.go:276] 1 containers: [f4ef51a77896]
	I0819 04:40:32.360311   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 04:40:32.370573   18442 logs.go:276] 1 containers: [71655a8d5baa]
	I0819 04:40:32.370635   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 04:40:32.380664   18442 logs.go:276] 0 containers: []
	W0819 04:40:32.380675   18442 logs.go:278] No container was found matching "kindnet"
	I0819 04:40:32.380730   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 04:40:32.392876   18442 logs.go:276] 1 containers: [74cf69aed6bf]
	I0819 04:40:32.392891   18442 logs.go:123] Gathering logs for kubelet ...
	I0819 04:40:32.392896   18442 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 04:40:32.432045   18442 logs.go:123] Gathering logs for dmesg ...
	I0819 04:40:32.432056   18442 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 04:40:32.436115   18442 logs.go:123] Gathering logs for coredns [18cd291ffa7e] ...
	I0819 04:40:32.436126   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18cd291ffa7e"
	I0819 04:40:32.447348   18442 logs.go:123] Gathering logs for Docker ...
	I0819 04:40:32.447361   18442 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 04:40:32.470665   18442 logs.go:123] Gathering logs for container status ...
	I0819 04:40:32.470674   18442 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 04:40:32.481820   18442 logs.go:123] Gathering logs for kube-apiserver [3532d65b8a5a] ...
	I0819 04:40:32.481829   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3532d65b8a5a"
	I0819 04:40:32.496342   18442 logs.go:123] Gathering logs for etcd [0e52abab1282] ...
	I0819 04:40:32.496354   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e52abab1282"
	I0819 04:40:32.510075   18442 logs.go:123] Gathering logs for coredns [893df8098815] ...
	I0819 04:40:32.510088   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 893df8098815"
	I0819 04:40:32.522087   18442 logs.go:123] Gathering logs for kube-scheduler [7e19570fc95e] ...
	I0819 04:40:32.522100   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e19570fc95e"
	I0819 04:40:32.537019   18442 logs.go:123] Gathering logs for coredns [006414e2b5ad] ...
	I0819 04:40:32.537028   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 006414e2b5ad"
	I0819 04:40:32.549183   18442 logs.go:123] Gathering logs for coredns [c75fef96db8a] ...
	I0819 04:40:32.549193   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c75fef96db8a"
	I0819 04:40:32.560603   18442 logs.go:123] Gathering logs for kube-proxy [f4ef51a77896] ...
	I0819 04:40:32.560615   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f4ef51a77896"
	I0819 04:40:32.572515   18442 logs.go:123] Gathering logs for kube-controller-manager [71655a8d5baa] ...
	I0819 04:40:32.572528   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 71655a8d5baa"
	I0819 04:40:32.590081   18442 logs.go:123] Gathering logs for describe nodes ...
	I0819 04:40:32.590092   18442 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 04:40:32.626412   18442 logs.go:123] Gathering logs for storage-provisioner [74cf69aed6bf] ...
	I0819 04:40:32.626425   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74cf69aed6bf"
	I0819 04:40:35.140237   18442 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 04:40:40.142278   18442 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 04:40:40.142352   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 04:40:40.154595   18442 logs.go:276] 1 containers: [3532d65b8a5a]
	I0819 04:40:40.154664   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 04:40:40.166778   18442 logs.go:276] 1 containers: [0e52abab1282]
	I0819 04:40:40.166827   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 04:40:40.178081   18442 logs.go:276] 4 containers: [006414e2b5ad c75fef96db8a 893df8098815 18cd291ffa7e]
	I0819 04:40:40.178141   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 04:40:40.189680   18442 logs.go:276] 1 containers: [7e19570fc95e]
	I0819 04:40:40.189740   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 04:40:40.200634   18442 logs.go:276] 1 containers: [f4ef51a77896]
	I0819 04:40:40.200704   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 04:40:40.212195   18442 logs.go:276] 1 containers: [71655a8d5baa]
	I0819 04:40:40.212249   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 04:40:40.223086   18442 logs.go:276] 0 containers: []
	W0819 04:40:40.223099   18442 logs.go:278] No container was found matching "kindnet"
	I0819 04:40:40.223171   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 04:40:40.234065   18442 logs.go:276] 1 containers: [74cf69aed6bf]
	I0819 04:40:40.234085   18442 logs.go:123] Gathering logs for etcd [0e52abab1282] ...
	I0819 04:40:40.234090   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e52abab1282"
	I0819 04:40:40.251456   18442 logs.go:123] Gathering logs for kube-proxy [f4ef51a77896] ...
	I0819 04:40:40.251475   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f4ef51a77896"
	I0819 04:40:40.266073   18442 logs.go:123] Gathering logs for Docker ...
	I0819 04:40:40.266085   18442 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 04:40:40.292130   18442 logs.go:123] Gathering logs for container status ...
	I0819 04:40:40.292146   18442 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 04:40:40.305272   18442 logs.go:123] Gathering logs for kubelet ...
	I0819 04:40:40.305284   18442 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 04:40:40.345713   18442 logs.go:123] Gathering logs for coredns [006414e2b5ad] ...
	I0819 04:40:40.345725   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 006414e2b5ad"
	I0819 04:40:40.358669   18442 logs.go:123] Gathering logs for coredns [c75fef96db8a] ...
	I0819 04:40:40.358679   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c75fef96db8a"
	I0819 04:40:40.375452   18442 logs.go:123] Gathering logs for coredns [893df8098815] ...
	I0819 04:40:40.375462   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 893df8098815"
	I0819 04:40:40.391789   18442 logs.go:123] Gathering logs for coredns [18cd291ffa7e] ...
	I0819 04:40:40.391800   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18cd291ffa7e"
	I0819 04:40:40.405644   18442 logs.go:123] Gathering logs for kube-controller-manager [71655a8d5baa] ...
	I0819 04:40:40.405656   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 71655a8d5baa"
	I0819 04:40:40.424564   18442 logs.go:123] Gathering logs for storage-provisioner [74cf69aed6bf] ...
	I0819 04:40:40.424581   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74cf69aed6bf"
	I0819 04:40:40.438016   18442 logs.go:123] Gathering logs for dmesg ...
	I0819 04:40:40.438027   18442 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 04:40:40.443073   18442 logs.go:123] Gathering logs for describe nodes ...
	I0819 04:40:40.443088   18442 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 04:40:40.480649   18442 logs.go:123] Gathering logs for kube-apiserver [3532d65b8a5a] ...
	I0819 04:40:40.480661   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3532d65b8a5a"
	I0819 04:40:40.496431   18442 logs.go:123] Gathering logs for kube-scheduler [7e19570fc95e] ...
	I0819 04:40:40.496442   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e19570fc95e"
	I0819 04:40:43.015278   18442 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 04:40:48.018233   18442 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 04:40:48.018715   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 04:40:48.057001   18442 logs.go:276] 1 containers: [3532d65b8a5a]
	I0819 04:40:48.057140   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 04:40:48.078546   18442 logs.go:276] 1 containers: [0e52abab1282]
	I0819 04:40:48.078662   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 04:40:48.093959   18442 logs.go:276] 4 containers: [006414e2b5ad c75fef96db8a 893df8098815 18cd291ffa7e]
	I0819 04:40:48.094035   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 04:40:48.106125   18442 logs.go:276] 1 containers: [7e19570fc95e]
	I0819 04:40:48.106187   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 04:40:48.117297   18442 logs.go:276] 1 containers: [f4ef51a77896]
	I0819 04:40:48.117362   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 04:40:48.127697   18442 logs.go:276] 1 containers: [71655a8d5baa]
	I0819 04:40:48.127759   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 04:40:48.137841   18442 logs.go:276] 0 containers: []
	W0819 04:40:48.137851   18442 logs.go:278] No container was found matching "kindnet"
	I0819 04:40:48.137898   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 04:40:48.148157   18442 logs.go:276] 1 containers: [74cf69aed6bf]
	I0819 04:40:48.148178   18442 logs.go:123] Gathering logs for describe nodes ...
	I0819 04:40:48.148184   18442 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 04:40:48.181937   18442 logs.go:123] Gathering logs for coredns [893df8098815] ...
	I0819 04:40:48.181949   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 893df8098815"
	I0819 04:40:48.193844   18442 logs.go:123] Gathering logs for kube-scheduler [7e19570fc95e] ...
	I0819 04:40:48.193855   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e19570fc95e"
	I0819 04:40:48.209104   18442 logs.go:123] Gathering logs for kube-controller-manager [71655a8d5baa] ...
	I0819 04:40:48.209117   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 71655a8d5baa"
	I0819 04:40:48.227195   18442 logs.go:123] Gathering logs for dmesg ...
	I0819 04:40:48.227209   18442 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 04:40:48.233386   18442 logs.go:123] Gathering logs for etcd [0e52abab1282] ...
	I0819 04:40:48.233398   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e52abab1282"
	I0819 04:40:48.261754   18442 logs.go:123] Gathering logs for coredns [006414e2b5ad] ...
	I0819 04:40:48.261766   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 006414e2b5ad"
	I0819 04:40:48.275705   18442 logs.go:123] Gathering logs for kube-apiserver [3532d65b8a5a] ...
	I0819 04:40:48.275715   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3532d65b8a5a"
	I0819 04:40:48.289923   18442 logs.go:123] Gathering logs for coredns [c75fef96db8a] ...
	I0819 04:40:48.289935   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c75fef96db8a"
	I0819 04:40:48.302044   18442 logs.go:123] Gathering logs for coredns [18cd291ffa7e] ...
	I0819 04:40:48.302058   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18cd291ffa7e"
	I0819 04:40:48.313702   18442 logs.go:123] Gathering logs for storage-provisioner [74cf69aed6bf] ...
	I0819 04:40:48.313716   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74cf69aed6bf"
	I0819 04:40:48.325267   18442 logs.go:123] Gathering logs for Docker ...
	I0819 04:40:48.325278   18442 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 04:40:48.350567   18442 logs.go:123] Gathering logs for kubelet ...
	I0819 04:40:48.350574   18442 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 04:40:48.388668   18442 logs.go:123] Gathering logs for kube-proxy [f4ef51a77896] ...
	I0819 04:40:48.388680   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f4ef51a77896"
	I0819 04:40:48.404678   18442 logs.go:123] Gathering logs for container status ...
	I0819 04:40:48.404692   18442 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 04:40:50.918859   18442 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 04:40:55.920093   18442 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 04:40:55.920371   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 04:40:55.958234   18442 logs.go:276] 1 containers: [3532d65b8a5a]
	I0819 04:40:55.958384   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 04:40:55.979565   18442 logs.go:276] 1 containers: [0e52abab1282]
	I0819 04:40:55.979675   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 04:40:55.994463   18442 logs.go:276] 4 containers: [006414e2b5ad c75fef96db8a 893df8098815 18cd291ffa7e]
	I0819 04:40:55.994536   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 04:40:56.007861   18442 logs.go:276] 1 containers: [7e19570fc95e]
	I0819 04:40:56.007926   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 04:40:56.018521   18442 logs.go:276] 1 containers: [f4ef51a77896]
	I0819 04:40:56.018589   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 04:40:56.028942   18442 logs.go:276] 1 containers: [71655a8d5baa]
	I0819 04:40:56.029011   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 04:40:56.039083   18442 logs.go:276] 0 containers: []
	W0819 04:40:56.039094   18442 logs.go:278] No container was found matching "kindnet"
	I0819 04:40:56.039151   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 04:40:56.049989   18442 logs.go:276] 1 containers: [74cf69aed6bf]
	I0819 04:40:56.050006   18442 logs.go:123] Gathering logs for coredns [18cd291ffa7e] ...
	I0819 04:40:56.050012   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18cd291ffa7e"
	I0819 04:40:56.061733   18442 logs.go:123] Gathering logs for kube-controller-manager [71655a8d5baa] ...
	I0819 04:40:56.061747   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 71655a8d5baa"
	I0819 04:40:56.086424   18442 logs.go:123] Gathering logs for container status ...
	I0819 04:40:56.086437   18442 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 04:40:56.097882   18442 logs.go:123] Gathering logs for kubelet ...
	I0819 04:40:56.097894   18442 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 04:40:56.136429   18442 logs.go:123] Gathering logs for coredns [006414e2b5ad] ...
	I0819 04:40:56.136439   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 006414e2b5ad"
	I0819 04:40:56.148711   18442 logs.go:123] Gathering logs for coredns [c75fef96db8a] ...
	I0819 04:40:56.148725   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c75fef96db8a"
	I0819 04:40:56.160057   18442 logs.go:123] Gathering logs for coredns [893df8098815] ...
	I0819 04:40:56.160070   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 893df8098815"
	I0819 04:40:56.171448   18442 logs.go:123] Gathering logs for Docker ...
	I0819 04:40:56.171462   18442 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 04:40:56.194562   18442 logs.go:123] Gathering logs for dmesg ...
	I0819 04:40:56.194571   18442 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 04:40:56.199022   18442 logs.go:123] Gathering logs for describe nodes ...
	I0819 04:40:56.199031   18442 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 04:40:56.233505   18442 logs.go:123] Gathering logs for kube-apiserver [3532d65b8a5a] ...
	I0819 04:40:56.233519   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3532d65b8a5a"
	I0819 04:40:56.248315   18442 logs.go:123] Gathering logs for etcd [0e52abab1282] ...
	I0819 04:40:56.248326   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e52abab1282"
	I0819 04:40:56.261688   18442 logs.go:123] Gathering logs for kube-scheduler [7e19570fc95e] ...
	I0819 04:40:56.261697   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e19570fc95e"
	I0819 04:40:56.276691   18442 logs.go:123] Gathering logs for kube-proxy [f4ef51a77896] ...
	I0819 04:40:56.276704   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f4ef51a77896"
	I0819 04:40:56.289112   18442 logs.go:123] Gathering logs for storage-provisioner [74cf69aed6bf] ...
	I0819 04:40:56.289124   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74cf69aed6bf"
	I0819 04:40:58.802615   18442 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 04:41:03.805318   18442 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 04:41:03.805380   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 04:41:03.817879   18442 logs.go:276] 1 containers: [3532d65b8a5a]
	I0819 04:41:03.817929   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 04:41:03.828963   18442 logs.go:276] 1 containers: [0e52abab1282]
	I0819 04:41:03.829022   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 04:41:03.840275   18442 logs.go:276] 4 containers: [006414e2b5ad c75fef96db8a 893df8098815 18cd291ffa7e]
	I0819 04:41:03.840335   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 04:41:03.856241   18442 logs.go:276] 1 containers: [7e19570fc95e]
	I0819 04:41:03.856294   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 04:41:03.867163   18442 logs.go:276] 1 containers: [f4ef51a77896]
	I0819 04:41:03.867217   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 04:41:03.878063   18442 logs.go:276] 1 containers: [71655a8d5baa]
	I0819 04:41:03.878128   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 04:41:03.889397   18442 logs.go:276] 0 containers: []
	W0819 04:41:03.889409   18442 logs.go:278] No container was found matching "kindnet"
	I0819 04:41:03.889476   18442 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 04:41:03.908386   18442 logs.go:276] 1 containers: [74cf69aed6bf]
	I0819 04:41:03.908406   18442 logs.go:123] Gathering logs for coredns [c75fef96db8a] ...
	I0819 04:41:03.908411   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c75fef96db8a"
	I0819 04:41:03.921240   18442 logs.go:123] Gathering logs for kube-proxy [f4ef51a77896] ...
	I0819 04:41:03.921251   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f4ef51a77896"
	I0819 04:41:03.933567   18442 logs.go:123] Gathering logs for container status ...
	I0819 04:41:03.933578   18442 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 04:41:03.946223   18442 logs.go:123] Gathering logs for dmesg ...
	I0819 04:41:03.946235   18442 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 04:41:03.951148   18442 logs.go:123] Gathering logs for coredns [893df8098815] ...
	I0819 04:41:03.951158   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 893df8098815"
	I0819 04:41:03.963746   18442 logs.go:123] Gathering logs for etcd [0e52abab1282] ...
	I0819 04:41:03.963757   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e52abab1282"
	I0819 04:41:03.979116   18442 logs.go:123] Gathering logs for coredns [006414e2b5ad] ...
	I0819 04:41:03.979127   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 006414e2b5ad"
	I0819 04:41:03.992209   18442 logs.go:123] Gathering logs for coredns [18cd291ffa7e] ...
	I0819 04:41:03.992222   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18cd291ffa7e"
	I0819 04:41:04.004499   18442 logs.go:123] Gathering logs for kube-scheduler [7e19570fc95e] ...
	I0819 04:41:04.004515   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e19570fc95e"
	I0819 04:41:04.019931   18442 logs.go:123] Gathering logs for Docker ...
	I0819 04:41:04.019944   18442 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 04:41:04.045047   18442 logs.go:123] Gathering logs for kubelet ...
	I0819 04:41:04.045062   18442 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 04:41:04.085796   18442 logs.go:123] Gathering logs for kube-apiserver [3532d65b8a5a] ...
	I0819 04:41:04.085818   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3532d65b8a5a"
	I0819 04:41:04.100937   18442 logs.go:123] Gathering logs for storage-provisioner [74cf69aed6bf] ...
	I0819 04:41:04.100948   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74cf69aed6bf"
	I0819 04:41:04.114235   18442 logs.go:123] Gathering logs for describe nodes ...
	I0819 04:41:04.114248   18442 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 04:41:04.151408   18442 logs.go:123] Gathering logs for kube-controller-manager [71655a8d5baa] ...
	I0819 04:41:04.151420   18442 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 71655a8d5baa"
	I0819 04:41:06.672562   18442 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 04:41:11.675432   18442 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 04:41:11.681215   18442 out.go:201] 
	W0819 04:41:11.685273   18442 out.go:270] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	W0819 04:41:11.685299   18442 out.go:270] * 
	* 
	W0819 04:41:11.688142   18442 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0819 04:41:11.702130   18442 out.go:201] 

** /stderr **
version_upgrade_test.go:200: upgrade from v1.26.0 to HEAD failed: out/minikube-darwin-arm64 start -p stopped-upgrade-783000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80
--- FAIL: TestStoppedBinaryUpgrade/Upgrade (575.38s)
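
Note that this failure mode differs from the socket_vmnet failures that dominate the rest of this run: here the VM booted, but the apiserver never answered https://10.0.2.15:8443/healthz inside the 6m0s node-wait window. A minimal manual triage sketch, assuming the stopped-upgrade-783000 profile were still up; the command forms mirror the ssh_runner calls captured in the log above, and curl being available in the guest is an assumption:

    # Probe the healthz endpoint from inside the guest (assumes guest has curl):
    out/minikube-darwin-arm64 ssh -p stopped-upgrade-783000 -- curl -k https://localhost:8443/healthz
    # Same journalctl form the log gathering above already uses:
    out/minikube-darwin-arm64 ssh -p stopped-upgrade-783000 -- sudo journalctl -u kubelet -n 100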

TestPause/serial/Start (10.03s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-darwin-arm64 start -p pause-792000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 
pause_test.go:80: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p pause-792000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 : exit status 80 (9.969449417s)

-- stdout --
	* [pause-792000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19479
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19479-15750/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19479-15750/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "pause-792000" primary control-plane node in "pause-792000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "pause-792000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p pause-792000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
pause_test.go:82: failed to start minikube with args: "out/minikube-darwin-arm64 start -p pause-792000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p pause-792000 -n pause-792000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p pause-792000 -n pause-792000: exit status 7 (55.481458ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "pause-792000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestPause/serial/Start (10.03s)
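
From this point on, every qemu2 start fails before a VM even exists: socket_vmnet_client cannot reach the /var/run/socket_vmnet unix socket on the host, so QEMU never gets its network file descriptor. A host-side check sketch; the paths come from the log output itself, but the launchd label is an assumption (it depends on how socket_vmnet was installed on this agent):

    # Is the socket present, and is the daemon running?
    ls -l /var/run/socket_vmnet
    pgrep -fl socket_vmnet
    # Restart the daemon; the label io.github.lima-vm.socket_vmnet is an assumption:
    sudo launchctl kickstart -k system/io.github.lima-vm.socket_vmnet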

TestNoKubernetes/serial/StartWithK8s (9.76s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-227000 --driver=qemu2 
no_kubernetes_test.go:95: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-227000 --driver=qemu2 : exit status 80 (9.729005375s)

-- stdout --
	* [NoKubernetes-227000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19479
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19479-15750/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19479-15750/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "NoKubernetes-227000" primary control-plane node in "NoKubernetes-227000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "NoKubernetes-227000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-227000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
no_kubernetes_test.go:97: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-227000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-227000 -n NoKubernetes-227000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-227000 -n NoKubernetes-227000: exit status 7 (31.229417ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-227000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartWithK8s (9.76s)

TestNoKubernetes/serial/StartWithStopK8s (5.32s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-227000 --no-kubernetes --driver=qemu2 
no_kubernetes_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-227000 --no-kubernetes --driver=qemu2 : exit status 80 (5.255380167s)

-- stdout --
	* [NoKubernetes-227000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19479
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19479-15750/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19479-15750/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-227000
	* Restarting existing qemu2 VM for "NoKubernetes-227000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-227000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-227000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
no_kubernetes_test.go:114: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-227000 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-227000 -n NoKubernetes-227000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-227000 -n NoKubernetes-227000: exit status 7 (63.419667ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-227000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartWithStopK8s (5.32s)

TestNoKubernetes/serial/Start (5.31s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-227000 --no-kubernetes --driver=qemu2 
no_kubernetes_test.go:136: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-227000 --no-kubernetes --driver=qemu2 : exit status 80 (5.239922833s)

-- stdout --
	* [NoKubernetes-227000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19479
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19479-15750/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19479-15750/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-227000
	* Restarting existing qemu2 VM for "NoKubernetes-227000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-227000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-227000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
no_kubernetes_test.go:138: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-227000 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-227000 -n NoKubernetes-227000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-227000 -n NoKubernetes-227000: exit status 7 (64.563709ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-227000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/Start (5.31s)

TestNoKubernetes/serial/StartNoArgs (5.3s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-227000 --driver=qemu2 
no_kubernetes_test.go:191: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-227000 --driver=qemu2 : exit status 80 (5.2493845s)

-- stdout --
	* [NoKubernetes-227000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19479
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19479-15750/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19479-15750/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-227000
	* Restarting existing qemu2 VM for "NoKubernetes-227000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-227000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-227000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
no_kubernetes_test.go:193: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-227000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-227000 -n NoKubernetes-227000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-227000 -n NoKubernetes-227000: exit status 7 (54.995667ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-227000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartNoArgs (5.30s)

TestNetworkPlugins/group/auto/Start (9.92s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p auto-714000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p auto-714000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=qemu2 : exit status 80 (9.918024166s)

-- stdout --
	* [auto-714000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19479
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19479-15750/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19479-15750/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "auto-714000" primary control-plane node in "auto-714000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "auto-714000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0819 04:39:16.876824   18642 out.go:345] Setting OutFile to fd 1 ...
	I0819 04:39:16.876949   18642 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 04:39:16.876952   18642 out.go:358] Setting ErrFile to fd 2...
	I0819 04:39:16.876955   18642 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 04:39:16.877075   18642 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19479-15750/.minikube/bin
	I0819 04:39:16.878147   18642 out.go:352] Setting JSON to false
	I0819 04:39:16.894213   18642 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":9524,"bootTime":1724058032,"procs":465,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0819 04:39:16.894280   18642 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0819 04:39:16.901256   18642 out.go:177] * [auto-714000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0819 04:39:16.908234   18642 out.go:177]   - MINIKUBE_LOCATION=19479
	I0819 04:39:16.908313   18642 notify.go:220] Checking for updates...
	I0819 04:39:16.916140   18642 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19479-15750/kubeconfig
	I0819 04:39:16.919186   18642 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0819 04:39:16.922257   18642 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0819 04:39:16.925161   18642 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19479-15750/.minikube
	I0819 04:39:16.928234   18642 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0819 04:39:16.931607   18642 config.go:182] Loaded profile config "multinode-746000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0819 04:39:16.931672   18642 config.go:182] Loaded profile config "stopped-upgrade-783000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0819 04:39:16.931715   18642 driver.go:392] Setting default libvirt URI to qemu:///system
	I0819 04:39:16.935136   18642 out.go:177] * Using the qemu2 driver based on user configuration
	I0819 04:39:16.942231   18642 start.go:297] selected driver: qemu2
	I0819 04:39:16.942239   18642 start.go:901] validating driver "qemu2" against <nil>
	I0819 04:39:16.942249   18642 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0819 04:39:16.944372   18642 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0819 04:39:16.948178   18642 out.go:177] * Automatically selected the socket_vmnet network
	I0819 04:39:16.951278   18642 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0819 04:39:16.951297   18642 cni.go:84] Creating CNI manager for ""
	I0819 04:39:16.951303   18642 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0819 04:39:16.951307   18642 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0819 04:39:16.951329   18642 start.go:340] cluster config:
	{Name:auto-714000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:auto-714000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 04:39:16.954665   18642 iso.go:125] acquiring lock: {Name:mk82d926f4bb778c7e93ba3bc4244459b219e238 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 04:39:16.963213   18642 out.go:177] * Starting "auto-714000" primary control-plane node in "auto-714000" cluster
	I0819 04:39:16.967204   18642 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0819 04:39:16.967217   18642 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19479-15750/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0819 04:39:16.967227   18642 cache.go:56] Caching tarball of preloaded images
	I0819 04:39:16.967276   18642 preload.go:172] Found /Users/jenkins/minikube-integration/19479-15750/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0819 04:39:16.967281   18642 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0819 04:39:16.967334   18642 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19479-15750/.minikube/profiles/auto-714000/config.json ...
	I0819 04:39:16.967349   18642 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19479-15750/.minikube/profiles/auto-714000/config.json: {Name:mkb322aa5e4648029e21bf1a965ab577ad9b8134 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 04:39:16.967658   18642 start.go:360] acquireMachinesLock for auto-714000: {Name:mkd71805c0324a991576936a19be749d2702d472 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 04:39:16.967689   18642 start.go:364] duration metric: took 25.458µs to acquireMachinesLock for "auto-714000"
	I0819 04:39:16.967702   18642 start.go:93] Provisioning new machine with config: &{Name:auto-714000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:auto-714000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0819 04:39:16.967729   18642 start.go:125] createHost starting for "" (driver="qemu2")
	I0819 04:39:16.972267   18642 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0819 04:39:16.987392   18642 start.go:159] libmachine.API.Create for "auto-714000" (driver="qemu2")
	I0819 04:39:16.987418   18642 client.go:168] LocalClient.Create starting
	I0819 04:39:16.987478   18642 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19479-15750/.minikube/certs/ca.pem
	I0819 04:39:16.987514   18642 main.go:141] libmachine: Decoding PEM data...
	I0819 04:39:16.987525   18642 main.go:141] libmachine: Parsing certificate...
	I0819 04:39:16.987563   18642 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19479-15750/.minikube/certs/cert.pem
	I0819 04:39:16.987586   18642 main.go:141] libmachine: Decoding PEM data...
	I0819 04:39:16.987594   18642 main.go:141] libmachine: Parsing certificate...
	I0819 04:39:16.987998   18642 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19479-15750/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19479-15750/.minikube/cache/iso/arm64/minikube-v1.33.1-1723740674-19452-arm64.iso...
	I0819 04:39:17.138672   18642 main.go:141] libmachine: Creating SSH key...
	I0819 04:39:17.242999   18642 main.go:141] libmachine: Creating Disk image...
	I0819 04:39:17.243009   18642 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0819 04:39:17.243238   18642 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/auto-714000/disk.qcow2.raw /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/auto-714000/disk.qcow2
	I0819 04:39:17.252798   18642 main.go:141] libmachine: STDOUT: 
	I0819 04:39:17.252832   18642 main.go:141] libmachine: STDERR: 
	I0819 04:39:17.252891   18642 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/auto-714000/disk.qcow2 +20000M
	I0819 04:39:17.261024   18642 main.go:141] libmachine: STDOUT: Image resized.
	
	I0819 04:39:17.261039   18642 main.go:141] libmachine: STDERR: 
	I0819 04:39:17.261064   18642 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/auto-714000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/auto-714000/disk.qcow2
	I0819 04:39:17.261069   18642 main.go:141] libmachine: Starting QEMU VM...
	I0819 04:39:17.261083   18642 qemu.go:418] Using hvf for hardware acceleration
	I0819 04:39:17.261107   18642 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/auto-714000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19479-15750/.minikube/machines/auto-714000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/auto-714000/qemu.pid -device virtio-net-pci,netdev=net0,mac=4e:cf:5b:8d:d8:22 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/auto-714000/disk.qcow2
	I0819 04:39:17.262756   18642 main.go:141] libmachine: STDOUT: 
	I0819 04:39:17.262773   18642 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0819 04:39:17.262791   18642 client.go:171] duration metric: took 275.374916ms to LocalClient.Create
	I0819 04:39:19.264933   18642 start.go:128] duration metric: took 2.297230708s to createHost
	I0819 04:39:19.265008   18642 start.go:83] releasing machines lock for "auto-714000", held for 2.297362333s
	W0819 04:39:19.265117   18642 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0819 04:39:19.272146   18642 out.go:177] * Deleting "auto-714000" in qemu2 ...
	W0819 04:39:19.303911   18642 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0819 04:39:19.303942   18642 start.go:729] Will try again in 5 seconds ...
	I0819 04:39:24.306118   18642 start.go:360] acquireMachinesLock for auto-714000: {Name:mkd71805c0324a991576936a19be749d2702d472 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 04:39:24.306706   18642 start.go:364] duration metric: took 476.75µs to acquireMachinesLock for "auto-714000"
	I0819 04:39:24.306779   18642 start.go:93] Provisioning new machine with config: &{Name:auto-714000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:auto-714000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0819 04:39:24.307084   18642 start.go:125] createHost starting for "" (driver="qemu2")
	I0819 04:39:24.310707   18642 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0819 04:39:24.359803   18642 start.go:159] libmachine.API.Create for "auto-714000" (driver="qemu2")
	I0819 04:39:24.359859   18642 client.go:168] LocalClient.Create starting
	I0819 04:39:24.359970   18642 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19479-15750/.minikube/certs/ca.pem
	I0819 04:39:24.360035   18642 main.go:141] libmachine: Decoding PEM data...
	I0819 04:39:24.360052   18642 main.go:141] libmachine: Parsing certificate...
	I0819 04:39:24.360112   18642 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19479-15750/.minikube/certs/cert.pem
	I0819 04:39:24.360158   18642 main.go:141] libmachine: Decoding PEM data...
	I0819 04:39:24.360172   18642 main.go:141] libmachine: Parsing certificate...
	I0819 04:39:24.360735   18642 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19479-15750/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19479-15750/.minikube/cache/iso/arm64/minikube-v1.33.1-1723740674-19452-arm64.iso...
	I0819 04:39:24.518792   18642 main.go:141] libmachine: Creating SSH key...
	I0819 04:39:24.711570   18642 main.go:141] libmachine: Creating Disk image...
	I0819 04:39:24.711582   18642 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0819 04:39:24.711849   18642 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/auto-714000/disk.qcow2.raw /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/auto-714000/disk.qcow2
	I0819 04:39:24.721691   18642 main.go:141] libmachine: STDOUT: 
	I0819 04:39:24.721712   18642 main.go:141] libmachine: STDERR: 
	I0819 04:39:24.721782   18642 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/auto-714000/disk.qcow2 +20000M
	I0819 04:39:24.729826   18642 main.go:141] libmachine: STDOUT: Image resized.
	
	I0819 04:39:24.729843   18642 main.go:141] libmachine: STDERR: 
	I0819 04:39:24.729854   18642 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/auto-714000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/auto-714000/disk.qcow2
	I0819 04:39:24.729860   18642 main.go:141] libmachine: Starting QEMU VM...
	I0819 04:39:24.729870   18642 qemu.go:418] Using hvf for hardware acceleration
	I0819 04:39:24.729901   18642 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/auto-714000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19479-15750/.minikube/machines/auto-714000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/auto-714000/qemu.pid -device virtio-net-pci,netdev=net0,mac=06:45:c5:77:0b:f5 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/auto-714000/disk.qcow2
	I0819 04:39:24.731635   18642 main.go:141] libmachine: STDOUT: 
	I0819 04:39:24.731653   18642 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0819 04:39:24.731665   18642 client.go:171] duration metric: took 371.807875ms to LocalClient.Create
	I0819 04:39:26.733830   18642 start.go:128] duration metric: took 2.426762083s to createHost
	I0819 04:39:26.733911   18642 start.go:83] releasing machines lock for "auto-714000", held for 2.427235417s
	W0819 04:39:26.734240   18642 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p auto-714000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p auto-714000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0819 04:39:26.747006   18642 out.go:201] 
	W0819 04:39:26.750012   18642 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0819 04:39:26.750029   18642 out.go:270] * 
	* 
	W0819 04:39:26.751485   18642 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0819 04:39:26.758955   18642 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/auto/Start (9.92s)
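
The full command line in the stderr above shows how minikube wires QEMU into the network: qemu-system-aarch64 is launched under /opt/socket_vmnet/bin/socket_vmnet_client, which connects to /var/run/socket_vmnet and hands the connected socket to QEMU (hence -netdev socket,id=net0,fd=3). That makes the failure reproducible in isolation, assuming the client simply execs whatever argv follows the socket path once connected; `echo ok` here is a hypothetical stand-in for the real QEMU invocation:

    # Expected on this host: the same 'Failed to connect to "/var/run/socket_vmnet": Connection refused'
    /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet echo ok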

TestNetworkPlugins/group/kindnet/Start (9.83s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p kindnet-714000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kindnet-714000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=qemu2 : exit status 80 (9.833048625s)

-- stdout --
	* [kindnet-714000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19479
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19479-15750/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19479-15750/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "kindnet-714000" primary control-plane node in "kindnet-714000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kindnet-714000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0819 04:39:28.887933   18753 out.go:345] Setting OutFile to fd 1 ...
	I0819 04:39:28.888054   18753 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 04:39:28.888057   18753 out.go:358] Setting ErrFile to fd 2...
	I0819 04:39:28.888060   18753 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 04:39:28.888205   18753 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19479-15750/.minikube/bin
	I0819 04:39:28.889263   18753 out.go:352] Setting JSON to false
	I0819 04:39:28.905784   18753 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":9536,"bootTime":1724058032,"procs":467,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0819 04:39:28.905883   18753 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0819 04:39:28.912726   18753 out.go:177] * [kindnet-714000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0819 04:39:28.920707   18753 out.go:177]   - MINIKUBE_LOCATION=19479
	I0819 04:39:28.920779   18753 notify.go:220] Checking for updates...
	I0819 04:39:28.927695   18753 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19479-15750/kubeconfig
	I0819 04:39:28.930687   18753 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0819 04:39:28.933663   18753 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0819 04:39:28.936697   18753 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19479-15750/.minikube
	I0819 04:39:28.939685   18753 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0819 04:39:28.941523   18753 config.go:182] Loaded profile config "multinode-746000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0819 04:39:28.941590   18753 config.go:182] Loaded profile config "stopped-upgrade-783000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0819 04:39:28.941632   18753 driver.go:392] Setting default libvirt URI to qemu:///system
	I0819 04:39:28.945681   18753 out.go:177] * Using the qemu2 driver based on user configuration
	I0819 04:39:28.952507   18753 start.go:297] selected driver: qemu2
	I0819 04:39:28.952514   18753 start.go:901] validating driver "qemu2" against <nil>
	I0819 04:39:28.952521   18753 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0819 04:39:28.954816   18753 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0819 04:39:28.958681   18753 out.go:177] * Automatically selected the socket_vmnet network
	I0819 04:39:28.962623   18753 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0819 04:39:28.962655   18753 cni.go:84] Creating CNI manager for "kindnet"
	I0819 04:39:28.962659   18753 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0819 04:39:28.962681   18753 start.go:340] cluster config:
	{Name:kindnet-714000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:kindnet-714000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 04:39:28.966396   18753 iso.go:125] acquiring lock: {Name:mk82d926f4bb778c7e93ba3bc4244459b219e238 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 04:39:28.974718   18753 out.go:177] * Starting "kindnet-714000" primary control-plane node in "kindnet-714000" cluster
	I0819 04:39:28.978720   18753 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0819 04:39:28.978742   18753 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19479-15750/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0819 04:39:28.978754   18753 cache.go:56] Caching tarball of preloaded images
	I0819 04:39:28.978823   18753 preload.go:172] Found /Users/jenkins/minikube-integration/19479-15750/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0819 04:39:28.978829   18753 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0819 04:39:28.978895   18753 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19479-15750/.minikube/profiles/kindnet-714000/config.json ...
	I0819 04:39:28.978906   18753 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19479-15750/.minikube/profiles/kindnet-714000/config.json: {Name:mk06fdc876a1f1bc53d7d6f9b2a1350b37eccb98 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 04:39:28.979250   18753 start.go:360] acquireMachinesLock for kindnet-714000: {Name:mkd71805c0324a991576936a19be749d2702d472 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 04:39:28.979283   18753 start.go:364] duration metric: took 27.542µs to acquireMachinesLock for "kindnet-714000"
	I0819 04:39:28.979295   18753 start.go:93] Provisioning new machine with config: &{Name:kindnet-714000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kub
ernetesVersion:v1.31.0 ClusterName:kindnet-714000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOpti
ons:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0819 04:39:28.979322   18753 start.go:125] createHost starting for "" (driver="qemu2")
	I0819 04:39:28.988634   18753 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0819 04:39:29.005881   18753 start.go:159] libmachine.API.Create for "kindnet-714000" (driver="qemu2")
	I0819 04:39:29.005908   18753 client.go:168] LocalClient.Create starting
	I0819 04:39:29.005971   18753 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19479-15750/.minikube/certs/ca.pem
	I0819 04:39:29.006005   18753 main.go:141] libmachine: Decoding PEM data...
	I0819 04:39:29.006016   18753 main.go:141] libmachine: Parsing certificate...
	I0819 04:39:29.006056   18753 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19479-15750/.minikube/certs/cert.pem
	I0819 04:39:29.006082   18753 main.go:141] libmachine: Decoding PEM data...
	I0819 04:39:29.006090   18753 main.go:141] libmachine: Parsing certificate...
	I0819 04:39:29.006486   18753 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19479-15750/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19479-15750/.minikube/cache/iso/arm64/minikube-v1.33.1-1723740674-19452-arm64.iso...
	I0819 04:39:29.156538   18753 main.go:141] libmachine: Creating SSH key...
	I0819 04:39:29.257494   18753 main.go:141] libmachine: Creating Disk image...
	I0819 04:39:29.257507   18753 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0819 04:39:29.257778   18753 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/kindnet-714000/disk.qcow2.raw /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/kindnet-714000/disk.qcow2
	I0819 04:39:29.268290   18753 main.go:141] libmachine: STDOUT: 
	I0819 04:39:29.268314   18753 main.go:141] libmachine: STDERR: 
	I0819 04:39:29.268375   18753 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/kindnet-714000/disk.qcow2 +20000M
	I0819 04:39:29.277459   18753 main.go:141] libmachine: STDOUT: Image resized.
	
	I0819 04:39:29.277486   18753 main.go:141] libmachine: STDERR: 
	I0819 04:39:29.277499   18753 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/kindnet-714000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/kindnet-714000/disk.qcow2
	I0819 04:39:29.277505   18753 main.go:141] libmachine: Starting QEMU VM...
	I0819 04:39:29.277514   18753 qemu.go:418] Using hvf for hardware acceleration
	I0819 04:39:29.277548   18753 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/kindnet-714000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19479-15750/.minikube/machines/kindnet-714000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/kindnet-714000/qemu.pid -device virtio-net-pci,netdev=net0,mac=16:af:04:03:27:ad -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/kindnet-714000/disk.qcow2
	I0819 04:39:29.279466   18753 main.go:141] libmachine: STDOUT: 
	I0819 04:39:29.279482   18753 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0819 04:39:29.279501   18753 client.go:171] duration metric: took 273.593458ms to LocalClient.Create
	I0819 04:39:31.281743   18753 start.go:128] duration metric: took 2.302439417s to createHost
	I0819 04:39:31.281830   18753 start.go:83] releasing machines lock for "kindnet-714000", held for 2.302589833s
	W0819 04:39:31.281883   18753 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0819 04:39:31.298088   18753 out.go:177] * Deleting "kindnet-714000" in qemu2 ...
	W0819 04:39:31.326797   18753 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0819 04:39:31.326828   18753 start.go:729] Will try again in 5 seconds ...
	I0819 04:39:36.329082   18753 start.go:360] acquireMachinesLock for kindnet-714000: {Name:mkd71805c0324a991576936a19be749d2702d472 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 04:39:36.329680   18753 start.go:364] duration metric: took 478.792µs to acquireMachinesLock for "kindnet-714000"
	I0819 04:39:36.329836   18753 start.go:93] Provisioning new machine with config: &{Name:kindnet-714000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kub
ernetesVersion:v1.31.0 ClusterName:kindnet-714000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOpti
ons:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0819 04:39:36.330117   18753 start.go:125] createHost starting for "" (driver="qemu2")
	I0819 04:39:36.339759   18753 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0819 04:39:36.390064   18753 start.go:159] libmachine.API.Create for "kindnet-714000" (driver="qemu2")
	I0819 04:39:36.390114   18753 client.go:168] LocalClient.Create starting
	I0819 04:39:36.390245   18753 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19479-15750/.minikube/certs/ca.pem
	I0819 04:39:36.390310   18753 main.go:141] libmachine: Decoding PEM data...
	I0819 04:39:36.390328   18753 main.go:141] libmachine: Parsing certificate...
	I0819 04:39:36.390388   18753 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19479-15750/.minikube/certs/cert.pem
	I0819 04:39:36.390433   18753 main.go:141] libmachine: Decoding PEM data...
	I0819 04:39:36.390447   18753 main.go:141] libmachine: Parsing certificate...
	I0819 04:39:36.391135   18753 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19479-15750/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19479-15750/.minikube/cache/iso/arm64/minikube-v1.33.1-1723740674-19452-arm64.iso...
	I0819 04:39:36.553146   18753 main.go:141] libmachine: Creating SSH key...
	I0819 04:39:36.636771   18753 main.go:141] libmachine: Creating Disk image...
	I0819 04:39:36.636780   18753 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0819 04:39:36.637028   18753 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/kindnet-714000/disk.qcow2.raw /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/kindnet-714000/disk.qcow2
	I0819 04:39:36.646907   18753 main.go:141] libmachine: STDOUT: 
	I0819 04:39:36.646924   18753 main.go:141] libmachine: STDERR: 
	I0819 04:39:36.646973   18753 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/kindnet-714000/disk.qcow2 +20000M
	I0819 04:39:36.655104   18753 main.go:141] libmachine: STDOUT: Image resized.
	
	I0819 04:39:36.655120   18753 main.go:141] libmachine: STDERR: 
	I0819 04:39:36.655130   18753 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/kindnet-714000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/kindnet-714000/disk.qcow2
	I0819 04:39:36.655136   18753 main.go:141] libmachine: Starting QEMU VM...
	I0819 04:39:36.655145   18753 qemu.go:418] Using hvf for hardware acceleration
	I0819 04:39:36.655171   18753 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/kindnet-714000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19479-15750/.minikube/machines/kindnet-714000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/kindnet-714000/qemu.pid -device virtio-net-pci,netdev=net0,mac=1e:25:b4:6c:23:ef -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/kindnet-714000/disk.qcow2
	I0819 04:39:36.656817   18753 main.go:141] libmachine: STDOUT: 
	I0819 04:39:36.656832   18753 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0819 04:39:36.656848   18753 client.go:171] duration metric: took 266.733417ms to LocalClient.Create
	I0819 04:39:38.658942   18753 start.go:128] duration metric: took 2.328848666s to createHost
	I0819 04:39:38.658985   18753 start.go:83] releasing machines lock for "kindnet-714000", held for 2.329337042s
	W0819 04:39:38.659095   18753 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p kindnet-714000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kindnet-714000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0819 04:39:38.667330   18753 out.go:201] 
	W0819 04:39:38.675352   18753 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0819 04:39:38.675357   18753 out.go:270] * 
	* 
	W0819 04:39:38.675878   18753 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0819 04:39:38.684359   18753 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/kindnet/Start (9.83s)
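The failure above is host-side: the qemu-system-aarch64 launch is wrapped in socket_vmnet_client, which could not reach the socket_vmnet daemon's unix socket at /var/run/socket_vmnet. Below is a minimal Go sketch of that connectivity probe (a hypothetical standalone helper, not part of net_test.go; the socket path is the SocketVMnetPath from the cluster config logged above):

	// probe_socket_vmnet.go - minimal sketch. If this dial fails with
	// "connection refused", the socket_vmnet daemon is not running on the
	// host, and every qemu2 VM create in this report fails the same way.
	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
		if err != nil {
			fmt.Println("socket_vmnet unreachable:", err)
			return
		}
		defer conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}

If the probe fails, starting the daemon on the host (typically the Homebrew socket_vmnet service, per minikube's qemu2 driver documentation) before rerunning should clear this class of failure; the calico and custom-flannel starts that follow fail on the identical connection-refused error.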
TestNetworkPlugins/group/calico/Start (9.93s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p calico-714000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p calico-714000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=qemu2 : exit status 80 (9.927323917s)

-- stdout --
	* [calico-714000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19479
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19479-15750/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19479-15750/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "calico-714000" primary control-plane node in "calico-714000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "calico-714000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0819 04:39:40.882501   18866 out.go:345] Setting OutFile to fd 1 ...
	I0819 04:39:40.882637   18866 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 04:39:40.882640   18866 out.go:358] Setting ErrFile to fd 2...
	I0819 04:39:40.882642   18866 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 04:39:40.882775   18866 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19479-15750/.minikube/bin
	I0819 04:39:40.883825   18866 out.go:352] Setting JSON to false
	I0819 04:39:40.899985   18866 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":9548,"bootTime":1724058032,"procs":465,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0819 04:39:40.900068   18866 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0819 04:39:40.906933   18866 out.go:177] * [calico-714000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0819 04:39:40.916993   18866 notify.go:220] Checking for updates...
	I0819 04:39:40.921056   18866 out.go:177]   - MINIKUBE_LOCATION=19479
	I0819 04:39:40.925112   18866 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19479-15750/kubeconfig
	I0819 04:39:40.927998   18866 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0819 04:39:40.932072   18866 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0819 04:39:40.935117   18866 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19479-15750/.minikube
	I0819 04:39:40.938055   18866 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0819 04:39:40.941493   18866 config.go:182] Loaded profile config "multinode-746000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0819 04:39:40.941555   18866 config.go:182] Loaded profile config "stopped-upgrade-783000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0819 04:39:40.941597   18866 driver.go:392] Setting default libvirt URI to qemu:///system
	I0819 04:39:40.946086   18866 out.go:177] * Using the qemu2 driver based on user configuration
	I0819 04:39:40.953099   18866 start.go:297] selected driver: qemu2
	I0819 04:39:40.953106   18866 start.go:901] validating driver "qemu2" against <nil>
	I0819 04:39:40.953113   18866 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0819 04:39:40.955342   18866 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0819 04:39:40.959114   18866 out.go:177] * Automatically selected the socket_vmnet network
	I0819 04:39:40.961878   18866 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0819 04:39:40.961910   18866 cni.go:84] Creating CNI manager for "calico"
	I0819 04:39:40.961915   18866 start_flags.go:319] Found "Calico" CNI - setting NetworkPlugin=cni
	I0819 04:39:40.961947   18866 start.go:340] cluster config:
	{Name:calico-714000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:calico-714000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:
docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_
vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 04:39:40.965638   18866 iso.go:125] acquiring lock: {Name:mk82d926f4bb778c7e93ba3bc4244459b219e238 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 04:39:40.974144   18866 out.go:177] * Starting "calico-714000" primary control-plane node in "calico-714000" cluster
	I0819 04:39:40.978086   18866 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0819 04:39:40.978099   18866 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19479-15750/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0819 04:39:40.978107   18866 cache.go:56] Caching tarball of preloaded images
	I0819 04:39:40.978159   18866 preload.go:172] Found /Users/jenkins/minikube-integration/19479-15750/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0819 04:39:40.978164   18866 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0819 04:39:40.978217   18866 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19479-15750/.minikube/profiles/calico-714000/config.json ...
	I0819 04:39:40.978227   18866 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19479-15750/.minikube/profiles/calico-714000/config.json: {Name:mk5135ef4c9d33d0c64b967bf57794a77388b23b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 04:39:40.978572   18866 start.go:360] acquireMachinesLock for calico-714000: {Name:mkd71805c0324a991576936a19be749d2702d472 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 04:39:40.978604   18866 start.go:364] duration metric: took 26.084µs to acquireMachinesLock for "calico-714000"
	I0819 04:39:40.978616   18866 start.go:93] Provisioning new machine with config: &{Name:calico-714000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kube
rnetesVersion:v1.31.0 ClusterName:calico-714000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions
:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0819 04:39:40.978640   18866 start.go:125] createHost starting for "" (driver="qemu2")
	I0819 04:39:40.987174   18866 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0819 04:39:41.004078   18866 start.go:159] libmachine.API.Create for "calico-714000" (driver="qemu2")
	I0819 04:39:41.004107   18866 client.go:168] LocalClient.Create starting
	I0819 04:39:41.004175   18866 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19479-15750/.minikube/certs/ca.pem
	I0819 04:39:41.004207   18866 main.go:141] libmachine: Decoding PEM data...
	I0819 04:39:41.004217   18866 main.go:141] libmachine: Parsing certificate...
	I0819 04:39:41.004260   18866 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19479-15750/.minikube/certs/cert.pem
	I0819 04:39:41.004283   18866 main.go:141] libmachine: Decoding PEM data...
	I0819 04:39:41.004291   18866 main.go:141] libmachine: Parsing certificate...
	I0819 04:39:41.004692   18866 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19479-15750/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19479-15750/.minikube/cache/iso/arm64/minikube-v1.33.1-1723740674-19452-arm64.iso...
	I0819 04:39:41.157593   18866 main.go:141] libmachine: Creating SSH key...
	I0819 04:39:41.214363   18866 main.go:141] libmachine: Creating Disk image...
	I0819 04:39:41.214370   18866 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0819 04:39:41.214620   18866 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/calico-714000/disk.qcow2.raw /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/calico-714000/disk.qcow2
	I0819 04:39:41.224278   18866 main.go:141] libmachine: STDOUT: 
	I0819 04:39:41.224302   18866 main.go:141] libmachine: STDERR: 
	I0819 04:39:41.224349   18866 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/calico-714000/disk.qcow2 +20000M
	I0819 04:39:41.232575   18866 main.go:141] libmachine: STDOUT: Image resized.
	
	I0819 04:39:41.232588   18866 main.go:141] libmachine: STDERR: 
	I0819 04:39:41.232603   18866 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/calico-714000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/calico-714000/disk.qcow2
	I0819 04:39:41.232608   18866 main.go:141] libmachine: Starting QEMU VM...
	I0819 04:39:41.232621   18866 qemu.go:418] Using hvf for hardware acceleration
	I0819 04:39:41.232644   18866 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/calico-714000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19479-15750/.minikube/machines/calico-714000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/calico-714000/qemu.pid -device virtio-net-pci,netdev=net0,mac=4e:22:32:3c:a2:e3 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/calico-714000/disk.qcow2
	I0819 04:39:41.234410   18866 main.go:141] libmachine: STDOUT: 
	I0819 04:39:41.234424   18866 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0819 04:39:41.234449   18866 client.go:171] duration metric: took 229.933292ms to LocalClient.Create
	I0819 04:39:43.239762   18866 start.go:128] duration metric: took 2.257461625s to createHost
	I0819 04:39:43.239802   18866 start.go:83] releasing machines lock for "calico-714000", held for 2.257552167s
	W0819 04:39:43.239837   18866 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0819 04:39:43.247146   18866 out.go:177] * Deleting "calico-714000" in qemu2 ...
	W0819 04:39:43.272274   18866 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0819 04:39:43.272307   18866 start.go:729] Will try again in 5 seconds ...
	I0819 04:39:48.278949   18866 start.go:360] acquireMachinesLock for calico-714000: {Name:mkd71805c0324a991576936a19be749d2702d472 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 04:39:48.279458   18866 start.go:364] duration metric: took 387.875µs to acquireMachinesLock for "calico-714000"
	I0819 04:39:48.279533   18866 start.go:93] Provisioning new machine with config: &{Name:calico-714000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kube
rnetesVersion:v1.31.0 ClusterName:calico-714000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions
:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0819 04:39:48.279869   18866 start.go:125] createHost starting for "" (driver="qemu2")
	I0819 04:39:48.290538   18866 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0819 04:39:48.338501   18866 start.go:159] libmachine.API.Create for "calico-714000" (driver="qemu2")
	I0819 04:39:48.338547   18866 client.go:168] LocalClient.Create starting
	I0819 04:39:48.338675   18866 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19479-15750/.minikube/certs/ca.pem
	I0819 04:39:48.338743   18866 main.go:141] libmachine: Decoding PEM data...
	I0819 04:39:48.338761   18866 main.go:141] libmachine: Parsing certificate...
	I0819 04:39:48.338852   18866 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19479-15750/.minikube/certs/cert.pem
	I0819 04:39:48.338898   18866 main.go:141] libmachine: Decoding PEM data...
	I0819 04:39:48.338913   18866 main.go:141] libmachine: Parsing certificate...
	I0819 04:39:48.339452   18866 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19479-15750/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19479-15750/.minikube/cache/iso/arm64/minikube-v1.33.1-1723740674-19452-arm64.iso...
	I0819 04:39:48.519182   18866 main.go:141] libmachine: Creating SSH key...
	I0819 04:39:48.727183   18866 main.go:141] libmachine: Creating Disk image...
	I0819 04:39:48.727194   18866 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0819 04:39:48.727461   18866 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/calico-714000/disk.qcow2.raw /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/calico-714000/disk.qcow2
	I0819 04:39:48.737377   18866 main.go:141] libmachine: STDOUT: 
	I0819 04:39:48.737398   18866 main.go:141] libmachine: STDERR: 
	I0819 04:39:48.737448   18866 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/calico-714000/disk.qcow2 +20000M
	I0819 04:39:48.745719   18866 main.go:141] libmachine: STDOUT: Image resized.
	
	I0819 04:39:48.745748   18866 main.go:141] libmachine: STDERR: 
	I0819 04:39:48.745759   18866 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/calico-714000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/calico-714000/disk.qcow2
	I0819 04:39:48.745763   18866 main.go:141] libmachine: Starting QEMU VM...
	I0819 04:39:48.745776   18866 qemu.go:418] Using hvf for hardware acceleration
	I0819 04:39:48.745803   18866 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/calico-714000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19479-15750/.minikube/machines/calico-714000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/calico-714000/qemu.pid -device virtio-net-pci,netdev=net0,mac=96:2a:43:00:01:4c -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/calico-714000/disk.qcow2
	I0819 04:39:48.747488   18866 main.go:141] libmachine: STDOUT: 
	I0819 04:39:48.747503   18866 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0819 04:39:48.747515   18866 client.go:171] duration metric: took 408.537583ms to LocalClient.Create
	I0819 04:39:50.751571   18866 start.go:128] duration metric: took 2.469238583s to createHost
	I0819 04:39:50.751597   18866 start.go:83] releasing machines lock for "calico-714000", held for 2.469670417s
	W0819 04:39:50.751757   18866 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p calico-714000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p calico-714000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0819 04:39:50.758060   18866 out.go:201] 
	W0819 04:39:50.762124   18866 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0819 04:39:50.762141   18866 out.go:270] * 
	* 
	W0819 04:39:50.762852   18866 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0819 04:39:50.786172   18866 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/calico/Start (9.93s)
TestNetworkPlugins/group/custom-flannel/Start (9.89s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p custom-flannel-714000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p custom-flannel-714000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=qemu2 : exit status 80 (9.888113375s)

-- stdout --
	* [custom-flannel-714000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19479
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19479-15750/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19479-15750/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "custom-flannel-714000" primary control-plane node in "custom-flannel-714000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "custom-flannel-714000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0819 04:39:53.165401   18983 out.go:345] Setting OutFile to fd 1 ...
	I0819 04:39:53.165550   18983 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 04:39:53.165554   18983 out.go:358] Setting ErrFile to fd 2...
	I0819 04:39:53.165556   18983 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 04:39:53.165722   18983 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19479-15750/.minikube/bin
	I0819 04:39:53.167050   18983 out.go:352] Setting JSON to false
	I0819 04:39:53.185284   18983 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":9561,"bootTime":1724058032,"procs":465,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0819 04:39:53.185386   18983 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0819 04:39:53.190697   18983 out.go:177] * [custom-flannel-714000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0819 04:39:53.198693   18983 out.go:177]   - MINIKUBE_LOCATION=19479
	I0819 04:39:53.198777   18983 notify.go:220] Checking for updates...
	I0819 04:39:53.204589   18983 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19479-15750/kubeconfig
	I0819 04:39:53.207688   18983 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0819 04:39:53.210710   18983 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0819 04:39:53.213705   18983 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19479-15750/.minikube
	I0819 04:39:53.216646   18983 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0819 04:39:53.220085   18983 config.go:182] Loaded profile config "multinode-746000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0819 04:39:53.220152   18983 config.go:182] Loaded profile config "stopped-upgrade-783000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0819 04:39:53.220210   18983 driver.go:392] Setting default libvirt URI to qemu:///system
	I0819 04:39:53.224661   18983 out.go:177] * Using the qemu2 driver based on user configuration
	I0819 04:39:53.231730   18983 start.go:297] selected driver: qemu2
	I0819 04:39:53.231740   18983 start.go:901] validating driver "qemu2" against <nil>
	I0819 04:39:53.231748   18983 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0819 04:39:53.234046   18983 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0819 04:39:53.237635   18983 out.go:177] * Automatically selected the socket_vmnet network
	I0819 04:39:53.240809   18983 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0819 04:39:53.240860   18983 cni.go:84] Creating CNI manager for "testdata/kube-flannel.yaml"
	I0819 04:39:53.240868   18983 start_flags.go:319] Found "testdata/kube-flannel.yaml" CNI - setting NetworkPlugin=cni
	I0819 04:39:53.240907   18983 start.go:340] cluster config:
	{Name:custom-flannel-714000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:custom-flannel-714000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local C
ontainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClie
ntPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 04:39:53.244416   18983 iso.go:125] acquiring lock: {Name:mk82d926f4bb778c7e93ba3bc4244459b219e238 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 04:39:53.251656   18983 out.go:177] * Starting "custom-flannel-714000" primary control-plane node in "custom-flannel-714000" cluster
	I0819 04:39:53.255666   18983 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0819 04:39:53.255683   18983 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19479-15750/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0819 04:39:53.255692   18983 cache.go:56] Caching tarball of preloaded images
	I0819 04:39:53.255746   18983 preload.go:172] Found /Users/jenkins/minikube-integration/19479-15750/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0819 04:39:53.255751   18983 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0819 04:39:53.255814   18983 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19479-15750/.minikube/profiles/custom-flannel-714000/config.json ...
	I0819 04:39:53.255826   18983 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19479-15750/.minikube/profiles/custom-flannel-714000/config.json: {Name:mk2fd5322dc2d69fab148e205e2abb2b91c598f4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 04:39:53.256156   18983 start.go:360] acquireMachinesLock for custom-flannel-714000: {Name:mkd71805c0324a991576936a19be749d2702d472 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 04:39:53.256187   18983 start.go:364] duration metric: took 25.25µs to acquireMachinesLock for "custom-flannel-714000"
	I0819 04:39:53.256199   18983 start.go:93] Provisioning new machine with config: &{Name:custom-flannel-714000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConf
ig:{KubernetesVersion:v1.31.0 ClusterName:custom-flannel-714000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker Mou
ntIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0819 04:39:53.256233   18983 start.go:125] createHost starting for "" (driver="qemu2")
	I0819 04:39:53.263723   18983 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0819 04:39:53.278633   18983 start.go:159] libmachine.API.Create for "custom-flannel-714000" (driver="qemu2")
	I0819 04:39:53.278664   18983 client.go:168] LocalClient.Create starting
	I0819 04:39:53.278745   18983 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19479-15750/.minikube/certs/ca.pem
	I0819 04:39:53.278787   18983 main.go:141] libmachine: Decoding PEM data...
	I0819 04:39:53.278797   18983 main.go:141] libmachine: Parsing certificate...
	I0819 04:39:53.278835   18983 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19479-15750/.minikube/certs/cert.pem
	I0819 04:39:53.278859   18983 main.go:141] libmachine: Decoding PEM data...
	I0819 04:39:53.278866   18983 main.go:141] libmachine: Parsing certificate...
	I0819 04:39:53.279345   18983 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19479-15750/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19479-15750/.minikube/cache/iso/arm64/minikube-v1.33.1-1723740674-19452-arm64.iso...
	I0819 04:39:53.428210   18983 main.go:141] libmachine: Creating SSH key...
	I0819 04:39:53.569677   18983 main.go:141] libmachine: Creating Disk image...
	I0819 04:39:53.569684   18983 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0819 04:39:53.570319   18983 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/custom-flannel-714000/disk.qcow2.raw /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/custom-flannel-714000/disk.qcow2
	I0819 04:39:53.580032   18983 main.go:141] libmachine: STDOUT: 
	I0819 04:39:53.580057   18983 main.go:141] libmachine: STDERR: 
	I0819 04:39:53.580116   18983 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/custom-flannel-714000/disk.qcow2 +20000M
	I0819 04:39:53.588204   18983 main.go:141] libmachine: STDOUT: Image resized.
	
	I0819 04:39:53.588221   18983 main.go:141] libmachine: STDERR: 
	I0819 04:39:53.588242   18983 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/custom-flannel-714000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/custom-flannel-714000/disk.qcow2
	I0819 04:39:53.588260   18983 main.go:141] libmachine: Starting QEMU VM...
	I0819 04:39:53.588273   18983 qemu.go:418] Using hvf for hardware acceleration
	I0819 04:39:53.588298   18983 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/custom-flannel-714000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19479-15750/.minikube/machines/custom-flannel-714000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/custom-flannel-714000/qemu.pid -device virtio-net-pci,netdev=net0,mac=22:8c:7d:5d:2d:f8 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/custom-flannel-714000/disk.qcow2
	I0819 04:39:53.589907   18983 main.go:141] libmachine: STDOUT: 
	I0819 04:39:53.589921   18983 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0819 04:39:53.589944   18983 client.go:171] duration metric: took 311.0425ms to LocalClient.Create
	I0819 04:39:55.593628   18983 start.go:128] duration metric: took 2.335695292s to createHost
	I0819 04:39:55.593715   18983 start.go:83] releasing machines lock for "custom-flannel-714000", held for 2.33584975s
	W0819 04:39:55.593760   18983 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0819 04:39:55.606281   18983 out.go:177] * Deleting "custom-flannel-714000" in qemu2 ...
	W0819 04:39:55.631743   18983 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0819 04:39:55.631772   18983 start.go:729] Will try again in 5 seconds ...
	I0819 04:40:00.636838   18983 start.go:360] acquireMachinesLock for custom-flannel-714000: {Name:mkd71805c0324a991576936a19be749d2702d472 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 04:40:00.637348   18983 start.go:364] duration metric: took 427.709µs to acquireMachinesLock for "custom-flannel-714000"
	I0819 04:40:00.637497   18983 start.go:93] Provisioning new machine with config: &{Name:custom-flannel-714000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:custom-flannel-714000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0819 04:40:00.637824   18983 start.go:125] createHost starting for "" (driver="qemu2")
	I0819 04:40:00.647510   18983 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0819 04:40:00.698911   18983 start.go:159] libmachine.API.Create for "custom-flannel-714000" (driver="qemu2")
	I0819 04:40:00.698968   18983 client.go:168] LocalClient.Create starting
	I0819 04:40:00.699078   18983 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19479-15750/.minikube/certs/ca.pem
	I0819 04:40:00.699140   18983 main.go:141] libmachine: Decoding PEM data...
	I0819 04:40:00.699154   18983 main.go:141] libmachine: Parsing certificate...
	I0819 04:40:00.699224   18983 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19479-15750/.minikube/certs/cert.pem
	I0819 04:40:00.699270   18983 main.go:141] libmachine: Decoding PEM data...
	I0819 04:40:00.699281   18983 main.go:141] libmachine: Parsing certificate...
	I0819 04:40:00.699855   18983 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19479-15750/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19479-15750/.minikube/cache/iso/arm64/minikube-v1.33.1-1723740674-19452-arm64.iso...
	I0819 04:40:00.860054   18983 main.go:141] libmachine: Creating SSH key...
	I0819 04:40:00.957352   18983 main.go:141] libmachine: Creating Disk image...
	I0819 04:40:00.957362   18983 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0819 04:40:00.957643   18983 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/custom-flannel-714000/disk.qcow2.raw /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/custom-flannel-714000/disk.qcow2
	I0819 04:40:00.968110   18983 main.go:141] libmachine: STDOUT: 
	I0819 04:40:00.968132   18983 main.go:141] libmachine: STDERR: 
	I0819 04:40:00.968198   18983 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/custom-flannel-714000/disk.qcow2 +20000M
	I0819 04:40:00.977551   18983 main.go:141] libmachine: STDOUT: Image resized.
	
	I0819 04:40:00.977575   18983 main.go:141] libmachine: STDERR: 
	I0819 04:40:00.977600   18983 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/custom-flannel-714000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/custom-flannel-714000/disk.qcow2
	I0819 04:40:00.977603   18983 main.go:141] libmachine: Starting QEMU VM...
	I0819 04:40:00.977618   18983 qemu.go:418] Using hvf for hardware acceleration
	I0819 04:40:00.977647   18983 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/custom-flannel-714000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19479-15750/.minikube/machines/custom-flannel-714000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/custom-flannel-714000/qemu.pid -device virtio-net-pci,netdev=net0,mac=52:69:3d:bb:35:3e -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/custom-flannel-714000/disk.qcow2
	I0819 04:40:00.979663   18983 main.go:141] libmachine: STDOUT: 
	I0819 04:40:00.979681   18983 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0819 04:40:00.979694   18983 client.go:171] duration metric: took 280.589375ms to LocalClient.Create
	I0819 04:40:02.982700   18983 start.go:128] duration metric: took 2.343820917s to createHost
	I0819 04:40:02.982753   18983 start.go:83] releasing machines lock for "custom-flannel-714000", held for 2.34434925s
	W0819 04:40:02.983029   18983 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p custom-flannel-714000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p custom-flannel-714000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0819 04:40:03.000500   18983 out.go:201] 
	W0819 04:40:03.003503   18983 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0819 04:40:03.003559   18983 out.go:270] * 
	* 
	W0819 04:40:03.005915   18983 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0819 04:40:03.013386   18983 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/custom-flannel/Start (9.89s)
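
Every start in this group aborts at the same step: /opt/socket_vmnet/bin/socket_vmnet_client cannot reach the daemon's unix socket at /var/run/socket_vmnet, so QEMU is never launched and minikube exits with GUEST_PROVISION after one retry. A minimal triage sketch for the build agent, assuming socket_vmnet is installed at the paths the log shows (the relaunch flags and gateway address below follow the socket_vmnet README and are assumptions, not taken from this report):

	# Is anything listening at the socket path the client is dialing?
	ls -l /var/run/socket_vmnet
	pgrep -fl socket_vmnet
	# If the daemon is down, relaunching it should clear the refusals
	# (vmnet requires root; the gateway address is an assumption):
	sudo /opt/socket_vmnet/bin/socket_vmnet --vmnet-gateway=192.168.105.1 /var/run/socket_vmnet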

TestNetworkPlugins/group/false/Start (9.78s)

=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p false-714000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p false-714000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=qemu2 : exit status 80 (9.781081875s)

-- stdout --
	* [false-714000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19479
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19479-15750/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19479-15750/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "false-714000" primary control-plane node in "false-714000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "false-714000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0819 04:40:05.376979   19102 out.go:345] Setting OutFile to fd 1 ...
	I0819 04:40:05.377134   19102 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 04:40:05.377137   19102 out.go:358] Setting ErrFile to fd 2...
	I0819 04:40:05.377140   19102 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 04:40:05.377263   19102 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19479-15750/.minikube/bin
	I0819 04:40:05.378431   19102 out.go:352] Setting JSON to false
	I0819 04:40:05.395377   19102 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":9573,"bootTime":1724058032,"procs":467,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0819 04:40:05.395468   19102 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0819 04:40:05.401411   19102 out.go:177] * [false-714000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0819 04:40:05.409596   19102 out.go:177]   - MINIKUBE_LOCATION=19479
	I0819 04:40:05.409637   19102 notify.go:220] Checking for updates...
	I0819 04:40:05.418505   19102 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19479-15750/kubeconfig
	I0819 04:40:05.421548   19102 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0819 04:40:05.425478   19102 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0819 04:40:05.428509   19102 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19479-15750/.minikube
	I0819 04:40:05.431544   19102 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0819 04:40:05.434793   19102 config.go:182] Loaded profile config "multinode-746000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0819 04:40:05.434868   19102 config.go:182] Loaded profile config "stopped-upgrade-783000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0819 04:40:05.434915   19102 driver.go:392] Setting default libvirt URI to qemu:///system
	I0819 04:40:05.438669   19102 out.go:177] * Using the qemu2 driver based on user configuration
	I0819 04:40:05.445485   19102 start.go:297] selected driver: qemu2
	I0819 04:40:05.445492   19102 start.go:901] validating driver "qemu2" against <nil>
	I0819 04:40:05.445498   19102 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0819 04:40:05.447788   19102 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0819 04:40:05.451556   19102 out.go:177] * Automatically selected the socket_vmnet network
	I0819 04:40:05.454664   19102 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0819 04:40:05.454720   19102 cni.go:84] Creating CNI manager for "false"
	I0819 04:40:05.454759   19102 start.go:340] cluster config:
	{Name:false-714000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:false-714000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 04:40:05.458216   19102 iso.go:125] acquiring lock: {Name:mk82d926f4bb778c7e93ba3bc4244459b219e238 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 04:40:05.466506   19102 out.go:177] * Starting "false-714000" primary control-plane node in "false-714000" cluster
	I0819 04:40:05.470486   19102 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0819 04:40:05.470501   19102 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19479-15750/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0819 04:40:05.470509   19102 cache.go:56] Caching tarball of preloaded images
	I0819 04:40:05.470571   19102 preload.go:172] Found /Users/jenkins/minikube-integration/19479-15750/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0819 04:40:05.470576   19102 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0819 04:40:05.470646   19102 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19479-15750/.minikube/profiles/false-714000/config.json ...
	I0819 04:40:05.470657   19102 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19479-15750/.minikube/profiles/false-714000/config.json: {Name:mk8a02d00fde1a736f0baaa629661b61395d0a98 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 04:40:05.471029   19102 start.go:360] acquireMachinesLock for false-714000: {Name:mkd71805c0324a991576936a19be749d2702d472 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 04:40:05.471080   19102 start.go:364] duration metric: took 43.042µs to acquireMachinesLock for "false-714000"
	I0819 04:40:05.471096   19102 start.go:93] Provisioning new machine with config: &{Name:false-714000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:false-714000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0819 04:40:05.471126   19102 start.go:125] createHost starting for "" (driver="qemu2")
	I0819 04:40:05.477547   19102 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0819 04:40:05.494274   19102 start.go:159] libmachine.API.Create for "false-714000" (driver="qemu2")
	I0819 04:40:05.494301   19102 client.go:168] LocalClient.Create starting
	I0819 04:40:05.494357   19102 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19479-15750/.minikube/certs/ca.pem
	I0819 04:40:05.494386   19102 main.go:141] libmachine: Decoding PEM data...
	I0819 04:40:05.494399   19102 main.go:141] libmachine: Parsing certificate...
	I0819 04:40:05.494440   19102 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19479-15750/.minikube/certs/cert.pem
	I0819 04:40:05.494462   19102 main.go:141] libmachine: Decoding PEM data...
	I0819 04:40:05.494468   19102 main.go:141] libmachine: Parsing certificate...
	I0819 04:40:05.494904   19102 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19479-15750/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19479-15750/.minikube/cache/iso/arm64/minikube-v1.33.1-1723740674-19452-arm64.iso...
	I0819 04:40:05.647234   19102 main.go:141] libmachine: Creating SSH key...
	I0819 04:40:05.693456   19102 main.go:141] libmachine: Creating Disk image...
	I0819 04:40:05.693461   19102 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0819 04:40:05.693679   19102 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/false-714000/disk.qcow2.raw /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/false-714000/disk.qcow2
	I0819 04:40:05.703086   19102 main.go:141] libmachine: STDOUT: 
	I0819 04:40:05.703105   19102 main.go:141] libmachine: STDERR: 
	I0819 04:40:05.703154   19102 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/false-714000/disk.qcow2 +20000M
	I0819 04:40:05.711137   19102 main.go:141] libmachine: STDOUT: Image resized.
	
	I0819 04:40:05.711150   19102 main.go:141] libmachine: STDERR: 
	I0819 04:40:05.711162   19102 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/false-714000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/false-714000/disk.qcow2
	I0819 04:40:05.711167   19102 main.go:141] libmachine: Starting QEMU VM...
	I0819 04:40:05.711178   19102 qemu.go:418] Using hvf for hardware acceleration
	I0819 04:40:05.711205   19102 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/false-714000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19479-15750/.minikube/machines/false-714000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/false-714000/qemu.pid -device virtio-net-pci,netdev=net0,mac=66:a3:80:64:e3:00 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/false-714000/disk.qcow2
	I0819 04:40:05.713092   19102 main.go:141] libmachine: STDOUT: 
	I0819 04:40:05.713125   19102 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0819 04:40:05.713144   19102 client.go:171] duration metric: took 218.764333ms to LocalClient.Create
	I0819 04:40:07.716030   19102 start.go:128] duration metric: took 2.244146917s to createHost
	I0819 04:40:07.716129   19102 start.go:83] releasing machines lock for "false-714000", held for 2.24432775s
	W0819 04:40:07.716182   19102 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0819 04:40:07.729138   19102 out.go:177] * Deleting "false-714000" in qemu2 ...
	W0819 04:40:07.754338   19102 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0819 04:40:07.754368   19102 start.go:729] Will try again in 5 seconds ...
	I0819 04:40:12.757797   19102 start.go:360] acquireMachinesLock for false-714000: {Name:mkd71805c0324a991576936a19be749d2702d472 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 04:40:12.758387   19102 start.go:364] duration metric: took 484.833µs to acquireMachinesLock for "false-714000"
	I0819 04:40:12.758544   19102 start.go:93] Provisioning new machine with config: &{Name:false-714000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:false-714000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0819 04:40:12.758913   19102 start.go:125] createHost starting for "" (driver="qemu2")
	I0819 04:40:12.768703   19102 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0819 04:40:12.810449   19102 start.go:159] libmachine.API.Create for "false-714000" (driver="qemu2")
	I0819 04:40:12.810511   19102 client.go:168] LocalClient.Create starting
	I0819 04:40:12.810623   19102 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19479-15750/.minikube/certs/ca.pem
	I0819 04:40:12.810684   19102 main.go:141] libmachine: Decoding PEM data...
	I0819 04:40:12.810697   19102 main.go:141] libmachine: Parsing certificate...
	I0819 04:40:12.810757   19102 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19479-15750/.minikube/certs/cert.pem
	I0819 04:40:12.810795   19102 main.go:141] libmachine: Decoding PEM data...
	I0819 04:40:12.810813   19102 main.go:141] libmachine: Parsing certificate...
	I0819 04:40:12.811483   19102 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19479-15750/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19479-15750/.minikube/cache/iso/arm64/minikube-v1.33.1-1723740674-19452-arm64.iso...
	I0819 04:40:12.968817   19102 main.go:141] libmachine: Creating SSH key...
	I0819 04:40:13.067284   19102 main.go:141] libmachine: Creating Disk image...
	I0819 04:40:13.067290   19102 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0819 04:40:13.067494   19102 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/false-714000/disk.qcow2.raw /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/false-714000/disk.qcow2
	I0819 04:40:13.076775   19102 main.go:141] libmachine: STDOUT: 
	I0819 04:40:13.076804   19102 main.go:141] libmachine: STDERR: 
	I0819 04:40:13.076861   19102 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/false-714000/disk.qcow2 +20000M
	I0819 04:40:13.084900   19102 main.go:141] libmachine: STDOUT: Image resized.
	
	I0819 04:40:13.084923   19102 main.go:141] libmachine: STDERR: 
	I0819 04:40:13.084937   19102 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/false-714000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/false-714000/disk.qcow2
	I0819 04:40:13.084940   19102 main.go:141] libmachine: Starting QEMU VM...
	I0819 04:40:13.084952   19102 qemu.go:418] Using hvf for hardware acceleration
	I0819 04:40:13.084982   19102 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/false-714000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19479-15750/.minikube/machines/false-714000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/false-714000/qemu.pid -device virtio-net-pci,netdev=net0,mac=02:76:8f:56:f0:aa -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/false-714000/disk.qcow2
	I0819 04:40:13.086689   19102 main.go:141] libmachine: STDOUT: 
	I0819 04:40:13.086715   19102 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0819 04:40:13.086728   19102 client.go:171] duration metric: took 276.154042ms to LocalClient.Create
	I0819 04:40:15.089317   19102 start.go:128] duration metric: took 2.329916625s to createHost
	I0819 04:40:15.089390   19102 start.go:83] releasing machines lock for "false-714000", held for 2.330530917s
	W0819 04:40:15.089809   19102 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p false-714000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p false-714000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0819 04:40:15.101523   19102 out.go:201] 
	W0819 04:40:15.106567   19102 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0819 04:40:15.106601   19102 out.go:270] * 
	* 
	W0819 04:40:15.109062   19102 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0819 04:40:15.117462   19102 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/false/Start (9.78s)
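
The refusal can be reproduced without minikube by dialing the socket directly, which separates a dead daemon from a misconfigured client; a quick probe, assuming the BSD nc bundled with macOS (-U selects unix-domain sockets):

	# "Connection refused" from this probe means no process is accepting on
	# /var/run/socket_vmnet, i.e. the daemon itself is down, matching the
	# failures logged above.
	nc -U /var/run/socket_vmnet </dev/null && echo "daemon is listening"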

TestNetworkPlugins/group/enable-default-cni/Start (9.77s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p enable-default-cni-714000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p enable-default-cni-714000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=qemu2 : exit status 80 (9.768203292s)

-- stdout --
	* [enable-default-cni-714000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19479
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19479-15750/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19479-15750/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "enable-default-cni-714000" primary control-plane node in "enable-default-cni-714000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "enable-default-cni-714000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0819 04:40:17.336062   19213 out.go:345] Setting OutFile to fd 1 ...
	I0819 04:40:17.336184   19213 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 04:40:17.336187   19213 out.go:358] Setting ErrFile to fd 2...
	I0819 04:40:17.336189   19213 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 04:40:17.336349   19213 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19479-15750/.minikube/bin
	I0819 04:40:17.337434   19213 out.go:352] Setting JSON to false
	I0819 04:40:17.353942   19213 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":9585,"bootTime":1724058032,"procs":469,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0819 04:40:17.354015   19213 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0819 04:40:17.359913   19213 out.go:177] * [enable-default-cni-714000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0819 04:40:17.367108   19213 out.go:177]   - MINIKUBE_LOCATION=19479
	I0819 04:40:17.367132   19213 notify.go:220] Checking for updates...
	I0819 04:40:17.374025   19213 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19479-15750/kubeconfig
	I0819 04:40:17.377025   19213 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0819 04:40:17.381011   19213 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0819 04:40:17.384010   19213 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19479-15750/.minikube
	I0819 04:40:17.387042   19213 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0819 04:40:17.390352   19213 config.go:182] Loaded profile config "multinode-746000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0819 04:40:17.390421   19213 config.go:182] Loaded profile config "stopped-upgrade-783000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0819 04:40:17.390471   19213 driver.go:392] Setting default libvirt URI to qemu:///system
	I0819 04:40:17.394041   19213 out.go:177] * Using the qemu2 driver based on user configuration
	I0819 04:40:17.401023   19213 start.go:297] selected driver: qemu2
	I0819 04:40:17.401031   19213 start.go:901] validating driver "qemu2" against <nil>
	I0819 04:40:17.401040   19213 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0819 04:40:17.403452   19213 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0819 04:40:17.408046   19213 out.go:177] * Automatically selected the socket_vmnet network
	E0819 04:40:17.411109   19213 start_flags.go:464] Found deprecated --enable-default-cni flag, setting --cni=bridge
	I0819 04:40:17.411129   19213 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0819 04:40:17.411169   19213 cni.go:84] Creating CNI manager for "bridge"
	I0819 04:40:17.411176   19213 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0819 04:40:17.411206   19213 start.go:340] cluster config:
	{Name:enable-default-cni-714000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:enable-default-cni-714000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 04:40:17.414835   19213 iso.go:125] acquiring lock: {Name:mk82d926f4bb778c7e93ba3bc4244459b219e238 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 04:40:17.423039   19213 out.go:177] * Starting "enable-default-cni-714000" primary control-plane node in "enable-default-cni-714000" cluster
	I0819 04:40:17.425995   19213 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0819 04:40:17.426009   19213 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19479-15750/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0819 04:40:17.426023   19213 cache.go:56] Caching tarball of preloaded images
	I0819 04:40:17.426079   19213 preload.go:172] Found /Users/jenkins/minikube-integration/19479-15750/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0819 04:40:17.426084   19213 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0819 04:40:17.426151   19213 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19479-15750/.minikube/profiles/enable-default-cni-714000/config.json ...
	I0819 04:40:17.426162   19213 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19479-15750/.minikube/profiles/enable-default-cni-714000/config.json: {Name:mk14643f7ed47683dfa7c4bf7390995157332a65 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 04:40:17.426510   19213 start.go:360] acquireMachinesLock for enable-default-cni-714000: {Name:mkd71805c0324a991576936a19be749d2702d472 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 04:40:17.426549   19213 start.go:364] duration metric: took 32.042µs to acquireMachinesLock for "enable-default-cni-714000"
	I0819 04:40:17.426562   19213 start.go:93] Provisioning new machine with config: &{Name:enable-default-cni-714000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:enable-default-cni-714000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0819 04:40:17.426595   19213 start.go:125] createHost starting for "" (driver="qemu2")
	I0819 04:40:17.430051   19213 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0819 04:40:17.446795   19213 start.go:159] libmachine.API.Create for "enable-default-cni-714000" (driver="qemu2")
	I0819 04:40:17.446816   19213 client.go:168] LocalClient.Create starting
	I0819 04:40:17.446873   19213 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19479-15750/.minikube/certs/ca.pem
	I0819 04:40:17.446901   19213 main.go:141] libmachine: Decoding PEM data...
	I0819 04:40:17.446922   19213 main.go:141] libmachine: Parsing certificate...
	I0819 04:40:17.446954   19213 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19479-15750/.minikube/certs/cert.pem
	I0819 04:40:17.446977   19213 main.go:141] libmachine: Decoding PEM data...
	I0819 04:40:17.446985   19213 main.go:141] libmachine: Parsing certificate...
	I0819 04:40:17.447382   19213 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19479-15750/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19479-15750/.minikube/cache/iso/arm64/minikube-v1.33.1-1723740674-19452-arm64.iso...
	I0819 04:40:17.596928   19213 main.go:141] libmachine: Creating SSH key...
	I0819 04:40:17.717958   19213 main.go:141] libmachine: Creating Disk image...
	I0819 04:40:17.717966   19213 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0819 04:40:17.718194   19213 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/enable-default-cni-714000/disk.qcow2.raw /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/enable-default-cni-714000/disk.qcow2
	I0819 04:40:17.727622   19213 main.go:141] libmachine: STDOUT: 
	I0819 04:40:17.727640   19213 main.go:141] libmachine: STDERR: 
	I0819 04:40:17.727687   19213 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/enable-default-cni-714000/disk.qcow2 +20000M
	I0819 04:40:17.735835   19213 main.go:141] libmachine: STDOUT: Image resized.
	
	I0819 04:40:17.735857   19213 main.go:141] libmachine: STDERR: 
	I0819 04:40:17.735870   19213 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/enable-default-cni-714000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/enable-default-cni-714000/disk.qcow2
	I0819 04:40:17.735875   19213 main.go:141] libmachine: Starting QEMU VM...
	I0819 04:40:17.735887   19213 qemu.go:418] Using hvf for hardware acceleration
	I0819 04:40:17.735910   19213 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/enable-default-cni-714000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19479-15750/.minikube/machines/enable-default-cni-714000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/enable-default-cni-714000/qemu.pid -device virtio-net-pci,netdev=net0,mac=6e:e8:3b:b8:cb:ee -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/enable-default-cni-714000/disk.qcow2
	I0819 04:40:17.737636   19213 main.go:141] libmachine: STDOUT: 
	I0819 04:40:17.737649   19213 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0819 04:40:17.737666   19213 client.go:171] duration metric: took 290.803291ms to LocalClient.Create
	I0819 04:40:19.740145   19213 start.go:128] duration metric: took 2.313203625s to createHost
	I0819 04:40:19.740245   19213 start.go:83] releasing machines lock for "enable-default-cni-714000", held for 2.313368416s
	W0819 04:40:19.740321   19213 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0819 04:40:19.747684   19213 out.go:177] * Deleting "enable-default-cni-714000" in qemu2 ...
	W0819 04:40:19.779743   19213 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0819 04:40:19.779771   19213 start.go:729] Will try again in 5 seconds ...
	I0819 04:40:24.782497   19213 start.go:360] acquireMachinesLock for enable-default-cni-714000: {Name:mkd71805c0324a991576936a19be749d2702d472 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 04:40:24.782640   19213 start.go:364] duration metric: took 105.792µs to acquireMachinesLock for "enable-default-cni-714000"
	I0819 04:40:24.782657   19213 start.go:93] Provisioning new machine with config: &{Name:enable-default-cni-714000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:enable-default-cni-714000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0819 04:40:24.782701   19213 start.go:125] createHost starting for "" (driver="qemu2")
	I0819 04:40:24.790942   19213 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0819 04:40:24.806702   19213 start.go:159] libmachine.API.Create for "enable-default-cni-714000" (driver="qemu2")
	I0819 04:40:24.806726   19213 client.go:168] LocalClient.Create starting
	I0819 04:40:24.806799   19213 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19479-15750/.minikube/certs/ca.pem
	I0819 04:40:24.806843   19213 main.go:141] libmachine: Decoding PEM data...
	I0819 04:40:24.806852   19213 main.go:141] libmachine: Parsing certificate...
	I0819 04:40:24.806890   19213 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19479-15750/.minikube/certs/cert.pem
	I0819 04:40:24.806913   19213 main.go:141] libmachine: Decoding PEM data...
	I0819 04:40:24.806920   19213 main.go:141] libmachine: Parsing certificate...
	I0819 04:40:24.807209   19213 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19479-15750/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19479-15750/.minikube/cache/iso/arm64/minikube-v1.33.1-1723740674-19452-arm64.iso...
	I0819 04:40:24.957038   19213 main.go:141] libmachine: Creating SSH key...
	I0819 04:40:25.006204   19213 main.go:141] libmachine: Creating Disk image...
	I0819 04:40:25.006211   19213 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0819 04:40:25.006466   19213 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/enable-default-cni-714000/disk.qcow2.raw /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/enable-default-cni-714000/disk.qcow2
	I0819 04:40:25.015803   19213 main.go:141] libmachine: STDOUT: 
	I0819 04:40:25.015833   19213 main.go:141] libmachine: STDERR: 
	I0819 04:40:25.015886   19213 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/enable-default-cni-714000/disk.qcow2 +20000M
	I0819 04:40:25.023845   19213 main.go:141] libmachine: STDOUT: Image resized.
	
	I0819 04:40:25.023861   19213 main.go:141] libmachine: STDERR: 
	I0819 04:40:25.023875   19213 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/enable-default-cni-714000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/enable-default-cni-714000/disk.qcow2
	I0819 04:40:25.023880   19213 main.go:141] libmachine: Starting QEMU VM...
	I0819 04:40:25.023892   19213 qemu.go:418] Using hvf for hardware acceleration
	I0819 04:40:25.023917   19213 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/enable-default-cni-714000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19479-15750/.minikube/machines/enable-default-cni-714000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/enable-default-cni-714000/qemu.pid -device virtio-net-pci,netdev=net0,mac=9e:ac:23:2b:4b:54 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/enable-default-cni-714000/disk.qcow2
	I0819 04:40:25.025521   19213 main.go:141] libmachine: STDOUT: 
	I0819 04:40:25.025535   19213 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0819 04:40:25.025548   19213 client.go:171] duration metric: took 218.799916ms to LocalClient.Create
	I0819 04:40:27.027938   19213 start.go:128] duration metric: took 2.2450235s to createHost
	I0819 04:40:27.028035   19213 start.go:83] releasing machines lock for "enable-default-cni-714000", held for 2.245205833s
	W0819 04:40:27.028470   19213 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p enable-default-cni-714000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p enable-default-cni-714000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0819 04:40:27.044074   19213 out.go:201] 
	W0819 04:40:27.048402   19213 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0819 04:40:27.048427   19213 out.go:270] * 
	* 
	W0819 04:40:27.051142   19213 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0819 04:40:27.063271   19213 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/enable-default-cni/Start (9.77s)
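Every failure in this TestNetworkPlugins group (enable-default-cni above; flannel, bridge, and kubenet below) exits with status 80 (GUEST_PROVISION) for the same underlying reason: nothing on the CI host is accepting connections at /var/run/socket_vmnet, so /opt/socket_vmnet/bin/socket_vmnet_client cannot pass a vmnet file descriptor (-netdev socket,id=net0,fd=3) to qemu-system-aarch64. A minimal Go probe, a hypothetical helper that is not part of the test suite and assumes only the socket path shown in the logs, reproduces the refused connection:

package main

import (
	"fmt"
	"net"
	"os"
	"time"
)

func main() {
	const sock = "/var/run/socket_vmnet" // path reported in the failures above

	// Attempt the same unix-socket connection that socket_vmnet_client
	// makes before handing a vmnet fd to qemu-system-aarch64.
	conn, err := net.DialTimeout("unix", sock, 2*time.Second)
	if err != nil {
		// A down daemon surfaces here as "connection refused", matching the log.
		fmt.Fprintf(os.Stderr, "socket_vmnet unreachable: %v\n", err)
		os.Exit(1)
	}
	defer conn.Close()
	fmt.Println("socket_vmnet is accepting connections")
}

If the probe reports "connection refused", the socket_vmnet daemon (normally run as a root service from /opt/socket_vmnet) needs to be restarted on the host; minikube's automatic retry after 5 seconds cannot succeed while the daemon stays down, which is why each profile in this group fails twice and then gives up.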

TestNetworkPlugins/group/flannel/Start (9.99s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p flannel-714000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p flannel-714000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=qemu2 : exit status 80 (9.984820417s)

-- stdout --
	* [flannel-714000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19479
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19479-15750/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19479-15750/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "flannel-714000" primary control-plane node in "flannel-714000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "flannel-714000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0819 04:40:29.261957   19325 out.go:345] Setting OutFile to fd 1 ...
	I0819 04:40:29.262084   19325 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 04:40:29.262088   19325 out.go:358] Setting ErrFile to fd 2...
	I0819 04:40:29.262090   19325 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 04:40:29.262236   19325 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19479-15750/.minikube/bin
	I0819 04:40:29.263211   19325 out.go:352] Setting JSON to false
	I0819 04:40:29.279633   19325 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":9597,"bootTime":1724058032,"procs":467,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0819 04:40:29.279700   19325 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0819 04:40:29.286297   19325 out.go:177] * [flannel-714000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0819 04:40:29.293131   19325 out.go:177]   - MINIKUBE_LOCATION=19479
	I0819 04:40:29.293166   19325 notify.go:220] Checking for updates...
	I0819 04:40:29.300134   19325 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19479-15750/kubeconfig
	I0819 04:40:29.303114   19325 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0819 04:40:29.306106   19325 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0819 04:40:29.309120   19325 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19479-15750/.minikube
	I0819 04:40:29.314106   19325 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0819 04:40:29.318535   19325 config.go:182] Loaded profile config "multinode-746000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0819 04:40:29.318598   19325 config.go:182] Loaded profile config "stopped-upgrade-783000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0819 04:40:29.318663   19325 driver.go:392] Setting default libvirt URI to qemu:///system
	I0819 04:40:29.322088   19325 out.go:177] * Using the qemu2 driver based on user configuration
	I0819 04:40:29.328175   19325 start.go:297] selected driver: qemu2
	I0819 04:40:29.328185   19325 start.go:901] validating driver "qemu2" against <nil>
	I0819 04:40:29.328192   19325 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0819 04:40:29.330384   19325 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0819 04:40:29.334149   19325 out.go:177] * Automatically selected the socket_vmnet network
	I0819 04:40:29.337211   19325 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0819 04:40:29.337227   19325 cni.go:84] Creating CNI manager for "flannel"
	I0819 04:40:29.337231   19325 start_flags.go:319] Found "Flannel" CNI - setting NetworkPlugin=cni
	I0819 04:40:29.337265   19325 start.go:340] cluster config:
	{Name:flannel-714000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:flannel-714000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntim
e:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/sock
et_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 04:40:29.340633   19325 iso.go:125] acquiring lock: {Name:mk82d926f4bb778c7e93ba3bc4244459b219e238 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 04:40:29.348197   19325 out.go:177] * Starting "flannel-714000" primary control-plane node in "flannel-714000" cluster
	I0819 04:40:29.352083   19325 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0819 04:40:29.352097   19325 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19479-15750/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0819 04:40:29.352109   19325 cache.go:56] Caching tarball of preloaded images
	I0819 04:40:29.352170   19325 preload.go:172] Found /Users/jenkins/minikube-integration/19479-15750/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0819 04:40:29.352175   19325 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0819 04:40:29.352229   19325 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19479-15750/.minikube/profiles/flannel-714000/config.json ...
	I0819 04:40:29.352239   19325 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19479-15750/.minikube/profiles/flannel-714000/config.json: {Name:mkd13237dbf03e3efeb5a650358da8274ce8aa52 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 04:40:29.352572   19325 start.go:360] acquireMachinesLock for flannel-714000: {Name:mkd71805c0324a991576936a19be749d2702d472 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 04:40:29.352605   19325 start.go:364] duration metric: took 27.292µs to acquireMachinesLock for "flannel-714000"
	I0819 04:40:29.352618   19325 start.go:93] Provisioning new machine with config: &{Name:flannel-714000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kub
ernetesVersion:v1.31.0 ClusterName:flannel-714000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOpti
ons:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0819 04:40:29.352650   19325 start.go:125] createHost starting for "" (driver="qemu2")
	I0819 04:40:29.356136   19325 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0819 04:40:29.371959   19325 start.go:159] libmachine.API.Create for "flannel-714000" (driver="qemu2")
	I0819 04:40:29.371982   19325 client.go:168] LocalClient.Create starting
	I0819 04:40:29.372055   19325 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19479-15750/.minikube/certs/ca.pem
	I0819 04:40:29.372083   19325 main.go:141] libmachine: Decoding PEM data...
	I0819 04:40:29.372092   19325 main.go:141] libmachine: Parsing certificate...
	I0819 04:40:29.372126   19325 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19479-15750/.minikube/certs/cert.pem
	I0819 04:40:29.372154   19325 main.go:141] libmachine: Decoding PEM data...
	I0819 04:40:29.372162   19325 main.go:141] libmachine: Parsing certificate...
	I0819 04:40:29.372598   19325 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19479-15750/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19479-15750/.minikube/cache/iso/arm64/minikube-v1.33.1-1723740674-19452-arm64.iso...
	I0819 04:40:29.525125   19325 main.go:141] libmachine: Creating SSH key...
	I0819 04:40:29.730157   19325 main.go:141] libmachine: Creating Disk image...
	I0819 04:40:29.730169   19325 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0819 04:40:29.730412   19325 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/flannel-714000/disk.qcow2.raw /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/flannel-714000/disk.qcow2
	I0819 04:40:29.740431   19325 main.go:141] libmachine: STDOUT: 
	I0819 04:40:29.740448   19325 main.go:141] libmachine: STDERR: 
	I0819 04:40:29.740498   19325 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/flannel-714000/disk.qcow2 +20000M
	I0819 04:40:29.748424   19325 main.go:141] libmachine: STDOUT: Image resized.
	
	I0819 04:40:29.748441   19325 main.go:141] libmachine: STDERR: 
	I0819 04:40:29.748458   19325 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/flannel-714000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/flannel-714000/disk.qcow2
	I0819 04:40:29.748466   19325 main.go:141] libmachine: Starting QEMU VM...
	I0819 04:40:29.748479   19325 qemu.go:418] Using hvf for hardware acceleration
	I0819 04:40:29.748503   19325 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/flannel-714000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19479-15750/.minikube/machines/flannel-714000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/flannel-714000/qemu.pid -device virtio-net-pci,netdev=net0,mac=0a:44:9c:c8:80:50 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/flannel-714000/disk.qcow2
	I0819 04:40:29.750165   19325 main.go:141] libmachine: STDOUT: 
	I0819 04:40:29.750179   19325 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0819 04:40:29.750194   19325 client.go:171] duration metric: took 378.185875ms to LocalClient.Create
	I0819 04:40:31.752513   19325 start.go:128] duration metric: took 2.399703125s to createHost
	I0819 04:40:31.752616   19325 start.go:83] releasing machines lock for "flannel-714000", held for 2.399871958s
	W0819 04:40:31.752682   19325 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0819 04:40:31.765004   19325 out.go:177] * Deleting "flannel-714000" in qemu2 ...
	W0819 04:40:31.792251   19325 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0819 04:40:31.792276   19325 start.go:729] Will try again in 5 seconds ...
	I0819 04:40:36.794705   19325 start.go:360] acquireMachinesLock for flannel-714000: {Name:mkd71805c0324a991576936a19be749d2702d472 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 04:40:36.795224   19325 start.go:364] duration metric: took 436.625µs to acquireMachinesLock for "flannel-714000"
	I0819 04:40:36.795355   19325 start.go:93] Provisioning new machine with config: &{Name:flannel-714000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kub
ernetesVersion:v1.31.0 ClusterName:flannel-714000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOpti
ons:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0819 04:40:36.795690   19325 start.go:125] createHost starting for "" (driver="qemu2")
	I0819 04:40:36.801436   19325 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0819 04:40:36.846649   19325 start.go:159] libmachine.API.Create for "flannel-714000" (driver="qemu2")
	I0819 04:40:36.846709   19325 client.go:168] LocalClient.Create starting
	I0819 04:40:36.846851   19325 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19479-15750/.minikube/certs/ca.pem
	I0819 04:40:36.846921   19325 main.go:141] libmachine: Decoding PEM data...
	I0819 04:40:36.846940   19325 main.go:141] libmachine: Parsing certificate...
	I0819 04:40:36.847007   19325 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19479-15750/.minikube/certs/cert.pem
	I0819 04:40:36.847051   19325 main.go:141] libmachine: Decoding PEM data...
	I0819 04:40:36.847069   19325 main.go:141] libmachine: Parsing certificate...
	I0819 04:40:36.847595   19325 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19479-15750/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19479-15750/.minikube/cache/iso/arm64/minikube-v1.33.1-1723740674-19452-arm64.iso...
	I0819 04:40:37.006372   19325 main.go:141] libmachine: Creating SSH key...
	I0819 04:40:37.152096   19325 main.go:141] libmachine: Creating Disk image...
	I0819 04:40:37.152107   19325 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0819 04:40:37.152360   19325 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/flannel-714000/disk.qcow2.raw /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/flannel-714000/disk.qcow2
	I0819 04:40:37.162056   19325 main.go:141] libmachine: STDOUT: 
	I0819 04:40:37.162078   19325 main.go:141] libmachine: STDERR: 
	I0819 04:40:37.162159   19325 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/flannel-714000/disk.qcow2 +20000M
	I0819 04:40:37.171423   19325 main.go:141] libmachine: STDOUT: Image resized.
	
	I0819 04:40:37.171446   19325 main.go:141] libmachine: STDERR: 
	I0819 04:40:37.171463   19325 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/flannel-714000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/flannel-714000/disk.qcow2
	I0819 04:40:37.171468   19325 main.go:141] libmachine: Starting QEMU VM...
	I0819 04:40:37.171482   19325 qemu.go:418] Using hvf for hardware acceleration
	I0819 04:40:37.171522   19325 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/flannel-714000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19479-15750/.minikube/machines/flannel-714000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/flannel-714000/qemu.pid -device virtio-net-pci,netdev=net0,mac=3a:f1:3e:ce:42:40 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/flannel-714000/disk.qcow2
	I0819 04:40:37.173389   19325 main.go:141] libmachine: STDOUT: 
	I0819 04:40:37.173403   19325 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0819 04:40:37.173419   19325 client.go:171] duration metric: took 326.694167ms to LocalClient.Create
	I0819 04:40:39.175586   19325 start.go:128] duration metric: took 2.379809166s to createHost
	I0819 04:40:39.175662   19325 start.go:83] releasing machines lock for "flannel-714000", held for 2.380350458s
	W0819 04:40:39.175880   19325 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p flannel-714000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p flannel-714000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0819 04:40:39.187317   19325 out.go:201] 
	W0819 04:40:39.192390   19325 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0819 04:40:39.192410   19325 out.go:270] * 
	* 
	W0819 04:40:39.193744   19325 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0819 04:40:39.205340   19325 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/flannel/Start (9.99s)

TestNetworkPlugins/group/bridge/Start (9.73s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p bridge-714000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p bridge-714000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=qemu2 : exit status 80 (9.726820875s)

-- stdout --
	* [bridge-714000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19479
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19479-15750/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19479-15750/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "bridge-714000" primary control-plane node in "bridge-714000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "bridge-714000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0819 04:40:41.624217   19442 out.go:345] Setting OutFile to fd 1 ...
	I0819 04:40:41.624343   19442 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 04:40:41.624346   19442 out.go:358] Setting ErrFile to fd 2...
	I0819 04:40:41.624348   19442 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 04:40:41.624479   19442 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19479-15750/.minikube/bin
	I0819 04:40:41.625534   19442 out.go:352] Setting JSON to false
	I0819 04:40:41.641969   19442 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":9609,"bootTime":1724058032,"procs":466,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0819 04:40:41.642035   19442 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0819 04:40:41.649478   19442 out.go:177] * [bridge-714000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0819 04:40:41.657423   19442 out.go:177]   - MINIKUBE_LOCATION=19479
	I0819 04:40:41.657496   19442 notify.go:220] Checking for updates...
	I0819 04:40:41.665312   19442 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19479-15750/kubeconfig
	I0819 04:40:41.668405   19442 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0819 04:40:41.671476   19442 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0819 04:40:41.674368   19442 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19479-15750/.minikube
	I0819 04:40:41.677424   19442 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0819 04:40:41.680806   19442 config.go:182] Loaded profile config "multinode-746000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0819 04:40:41.680873   19442 config.go:182] Loaded profile config "stopped-upgrade-783000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0819 04:40:41.680930   19442 driver.go:392] Setting default libvirt URI to qemu:///system
	I0819 04:40:41.684346   19442 out.go:177] * Using the qemu2 driver based on user configuration
	I0819 04:40:41.691456   19442 start.go:297] selected driver: qemu2
	I0819 04:40:41.691466   19442 start.go:901] validating driver "qemu2" against <nil>
	I0819 04:40:41.691474   19442 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0819 04:40:41.693773   19442 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0819 04:40:41.697352   19442 out.go:177] * Automatically selected the socket_vmnet network
	I0819 04:40:41.700481   19442 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0819 04:40:41.700501   19442 cni.go:84] Creating CNI manager for "bridge"
	I0819 04:40:41.700505   19442 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0819 04:40:41.700531   19442 start.go:340] cluster config:
	{Name:bridge-714000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:bridge-714000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:
docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_
vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 04:40:41.704245   19442 iso.go:125] acquiring lock: {Name:mk82d926f4bb778c7e93ba3bc4244459b219e238 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 04:40:41.712349   19442 out.go:177] * Starting "bridge-714000" primary control-plane node in "bridge-714000" cluster
	I0819 04:40:41.716373   19442 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0819 04:40:41.716387   19442 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19479-15750/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0819 04:40:41.716396   19442 cache.go:56] Caching tarball of preloaded images
	I0819 04:40:41.716453   19442 preload.go:172] Found /Users/jenkins/minikube-integration/19479-15750/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0819 04:40:41.716459   19442 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0819 04:40:41.716524   19442 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19479-15750/.minikube/profiles/bridge-714000/config.json ...
	I0819 04:40:41.716535   19442 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19479-15750/.minikube/profiles/bridge-714000/config.json: {Name:mkfe646e84df52acaad5a69c701659c699e2f940 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 04:40:41.716774   19442 start.go:360] acquireMachinesLock for bridge-714000: {Name:mkd71805c0324a991576936a19be749d2702d472 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 04:40:41.716810   19442 start.go:364] duration metric: took 29.917µs to acquireMachinesLock for "bridge-714000"
	I0819 04:40:41.716824   19442 start.go:93] Provisioning new machine with config: &{Name:bridge-714000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kube
rnetesVersion:v1.31.0 ClusterName:bridge-714000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions
:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0819 04:40:41.716857   19442 start.go:125] createHost starting for "" (driver="qemu2")
	I0819 04:40:41.725386   19442 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0819 04:40:41.743034   19442 start.go:159] libmachine.API.Create for "bridge-714000" (driver="qemu2")
	I0819 04:40:41.743066   19442 client.go:168] LocalClient.Create starting
	I0819 04:40:41.743153   19442 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19479-15750/.minikube/certs/ca.pem
	I0819 04:40:41.743184   19442 main.go:141] libmachine: Decoding PEM data...
	I0819 04:40:41.743192   19442 main.go:141] libmachine: Parsing certificate...
	I0819 04:40:41.743237   19442 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19479-15750/.minikube/certs/cert.pem
	I0819 04:40:41.743262   19442 main.go:141] libmachine: Decoding PEM data...
	I0819 04:40:41.743273   19442 main.go:141] libmachine: Parsing certificate...
	I0819 04:40:41.743644   19442 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19479-15750/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19479-15750/.minikube/cache/iso/arm64/minikube-v1.33.1-1723740674-19452-arm64.iso...
	I0819 04:40:41.892638   19442 main.go:141] libmachine: Creating SSH key...
	I0819 04:40:41.926678   19442 main.go:141] libmachine: Creating Disk image...
	I0819 04:40:41.926684   19442 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0819 04:40:41.926908   19442 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/bridge-714000/disk.qcow2.raw /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/bridge-714000/disk.qcow2
	I0819 04:40:41.936151   19442 main.go:141] libmachine: STDOUT: 
	I0819 04:40:41.936173   19442 main.go:141] libmachine: STDERR: 
	I0819 04:40:41.936243   19442 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/bridge-714000/disk.qcow2 +20000M
	I0819 04:40:41.944432   19442 main.go:141] libmachine: STDOUT: Image resized.
	
	I0819 04:40:41.944448   19442 main.go:141] libmachine: STDERR: 
	I0819 04:40:41.944469   19442 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/bridge-714000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/bridge-714000/disk.qcow2
	I0819 04:40:41.944474   19442 main.go:141] libmachine: Starting QEMU VM...
	I0819 04:40:41.944487   19442 qemu.go:418] Using hvf for hardware acceleration
	I0819 04:40:41.944516   19442 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/bridge-714000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19479-15750/.minikube/machines/bridge-714000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/bridge-714000/qemu.pid -device virtio-net-pci,netdev=net0,mac=7a:29:bb:7f:ef:4f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/bridge-714000/disk.qcow2
	I0819 04:40:41.946219   19442 main.go:141] libmachine: STDOUT: 
	I0819 04:40:41.946233   19442 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0819 04:40:41.946254   19442 client.go:171] duration metric: took 203.175791ms to LocalClient.Create
	I0819 04:40:43.948529   19442 start.go:128] duration metric: took 2.231611375s to createHost
	I0819 04:40:43.948637   19442 start.go:83] releasing machines lock for "bridge-714000", held for 2.231744917s
	W0819 04:40:43.948711   19442 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0819 04:40:43.962954   19442 out.go:177] * Deleting "bridge-714000" in qemu2 ...
	W0819 04:40:43.987538   19442 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0819 04:40:43.987561   19442 start.go:729] Will try again in 5 seconds ...
	I0819 04:40:48.989780   19442 start.go:360] acquireMachinesLock for bridge-714000: {Name:mkd71805c0324a991576936a19be749d2702d472 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 04:40:48.990436   19442 start.go:364] duration metric: took 506.625µs to acquireMachinesLock for "bridge-714000"
	I0819 04:40:48.990561   19442 start.go:93] Provisioning new machine with config: &{Name:bridge-714000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kube
rnetesVersion:v1.31.0 ClusterName:bridge-714000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions
:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0819 04:40:48.990899   19442 start.go:125] createHost starting for "" (driver="qemu2")
	I0819 04:40:48.996599   19442 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0819 04:40:49.039830   19442 start.go:159] libmachine.API.Create for "bridge-714000" (driver="qemu2")
	I0819 04:40:49.039876   19442 client.go:168] LocalClient.Create starting
	I0819 04:40:49.039980   19442 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19479-15750/.minikube/certs/ca.pem
	I0819 04:40:49.040060   19442 main.go:141] libmachine: Decoding PEM data...
	I0819 04:40:49.040079   19442 main.go:141] libmachine: Parsing certificate...
	I0819 04:40:49.040137   19442 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19479-15750/.minikube/certs/cert.pem
	I0819 04:40:49.040182   19442 main.go:141] libmachine: Decoding PEM data...
	I0819 04:40:49.040207   19442 main.go:141] libmachine: Parsing certificate...
	I0819 04:40:49.040664   19442 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19479-15750/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19479-15750/.minikube/cache/iso/arm64/minikube-v1.33.1-1723740674-19452-arm64.iso...
	I0819 04:40:49.200638   19442 main.go:141] libmachine: Creating SSH key...
	I0819 04:40:49.260617   19442 main.go:141] libmachine: Creating Disk image...
	I0819 04:40:49.260629   19442 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0819 04:40:49.260872   19442 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/bridge-714000/disk.qcow2.raw /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/bridge-714000/disk.qcow2
	I0819 04:40:49.271209   19442 main.go:141] libmachine: STDOUT: 
	I0819 04:40:49.271235   19442 main.go:141] libmachine: STDERR: 
	I0819 04:40:49.271304   19442 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/bridge-714000/disk.qcow2 +20000M
	I0819 04:40:49.279809   19442 main.go:141] libmachine: STDOUT: Image resized.
	
	I0819 04:40:49.279825   19442 main.go:141] libmachine: STDERR: 
	I0819 04:40:49.279839   19442 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/bridge-714000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/bridge-714000/disk.qcow2
	I0819 04:40:49.279842   19442 main.go:141] libmachine: Starting QEMU VM...
	I0819 04:40:49.279853   19442 qemu.go:418] Using hvf for hardware acceleration
	I0819 04:40:49.279881   19442 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/bridge-714000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19479-15750/.minikube/machines/bridge-714000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/bridge-714000/qemu.pid -device virtio-net-pci,netdev=net0,mac=fe:64:6b:82:a9:97 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/bridge-714000/disk.qcow2
	I0819 04:40:49.281541   19442 main.go:141] libmachine: STDOUT: 
	I0819 04:40:49.281564   19442 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0819 04:40:49.281577   19442 client.go:171] duration metric: took 241.695084ms to LocalClient.Create
	I0819 04:40:51.283687   19442 start.go:128] duration metric: took 2.292760416s to createHost
	I0819 04:40:51.283725   19442 start.go:83] releasing machines lock for "bridge-714000", held for 2.293225958s
	W0819 04:40:51.283963   19442 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p bridge-714000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p bridge-714000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0819 04:40:51.294383   19442 out.go:201] 
	W0819 04:40:51.298443   19442 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0819 04:40:51.298459   19442 out.go:270] * 
	* 
	W0819 04:40:51.299235   19442 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0819 04:40:51.314348   19442 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/bridge/Start (9.73s)
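Every failure in this group has the same proximate cause: /opt/socket_vmnet/bin/socket_vmnet_client cannot reach the socket_vmnet daemon at /var/run/socket_vmnet, so qemu-system-aarch64 is never launched and the start exits with status 80. A minimal Go sketch (illustrative only, not part of the suite; the socket path is the one from the log above) that reproduces the failing dial:

	// probe.go: dial the socket_vmnet unix socket the same way the launch
	// wrapper must, to confirm whether the daemon is listening.
	package main

	import (
		"fmt"
		"net"
		"os"
		"time"
	)

	func main() {
		const sock = "/var/run/socket_vmnet" // path reported in the failure
		conn, err := net.DialTimeout("unix", sock, 2*time.Second)
		if err != nil {
			// A "connection refused" here matches the STDERR above and means
			// the socket_vmnet daemon is not running (or not on this path).
			fmt.Fprintf(os.Stderr, "dial %s: %v\n", sock, err)
			os.Exit(1)
		}
		conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}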

TestNetworkPlugins/group/kubenet/Start (9.88s)

=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p kubenet-714000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubenet-714000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=qemu2 : exit status 80 (9.88293475s)

-- stdout --
	* [kubenet-714000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19479
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19479-15750/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19479-15750/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "kubenet-714000" primary control-plane node in "kubenet-714000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kubenet-714000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0819 04:40:53.482712   19551 out.go:345] Setting OutFile to fd 1 ...
	I0819 04:40:53.482855   19551 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 04:40:53.482858   19551 out.go:358] Setting ErrFile to fd 2...
	I0819 04:40:53.482866   19551 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 04:40:53.482995   19551 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19479-15750/.minikube/bin
	I0819 04:40:53.484153   19551 out.go:352] Setting JSON to false
	I0819 04:40:53.501128   19551 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":9621,"bootTime":1724058032,"procs":466,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0819 04:40:53.501207   19551 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0819 04:40:53.505140   19551 out.go:177] * [kubenet-714000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0819 04:40:53.512228   19551 out.go:177]   - MINIKUBE_LOCATION=19479
	I0819 04:40:53.512338   19551 notify.go:220] Checking for updates...
	I0819 04:40:53.519127   19551 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19479-15750/kubeconfig
	I0819 04:40:53.522198   19551 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0819 04:40:53.525086   19551 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0819 04:40:53.528140   19551 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19479-15750/.minikube
	I0819 04:40:53.531148   19551 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0819 04:40:53.532799   19551 config.go:182] Loaded profile config "multinode-746000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0819 04:40:53.532862   19551 config.go:182] Loaded profile config "stopped-upgrade-783000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0819 04:40:53.532907   19551 driver.go:392] Setting default libvirt URI to qemu:///system
	I0819 04:40:53.536188   19551 out.go:177] * Using the qemu2 driver based on user configuration
	I0819 04:40:53.542984   19551 start.go:297] selected driver: qemu2
	I0819 04:40:53.542991   19551 start.go:901] validating driver "qemu2" against <nil>
	I0819 04:40:53.542996   19551 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0819 04:40:53.545165   19551 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0819 04:40:53.548128   19551 out.go:177] * Automatically selected the socket_vmnet network
	I0819 04:40:53.553333   19551 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0819 04:40:53.553389   19551 cni.go:80] network plugin configured as "kubenet", returning disabled
	I0819 04:40:53.553420   19551 start.go:340] cluster config:
	{Name:kubenet-714000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:kubenet-714000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 04:40:53.556967   19551 iso.go:125] acquiring lock: {Name:mk82d926f4bb778c7e93ba3bc4244459b219e238 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 04:40:53.565157   19551 out.go:177] * Starting "kubenet-714000" primary control-plane node in "kubenet-714000" cluster
	I0819 04:40:53.569130   19551 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0819 04:40:53.569145   19551 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19479-15750/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0819 04:40:53.569156   19551 cache.go:56] Caching tarball of preloaded images
	I0819 04:40:53.569210   19551 preload.go:172] Found /Users/jenkins/minikube-integration/19479-15750/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0819 04:40:53.569215   19551 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0819 04:40:53.569270   19551 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19479-15750/.minikube/profiles/kubenet-714000/config.json ...
	I0819 04:40:53.569280   19551 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19479-15750/.minikube/profiles/kubenet-714000/config.json: {Name:mkd69743d3f9a7fd4d3a46dea0e2fe4e5d47dc3b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 04:40:53.569623   19551 start.go:360] acquireMachinesLock for kubenet-714000: {Name:mkd71805c0324a991576936a19be749d2702d472 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 04:40:53.569655   19551 start.go:364] duration metric: took 26.375µs to acquireMachinesLock for "kubenet-714000"
	I0819 04:40:53.569668   19551 start.go:93] Provisioning new machine with config: &{Name:kubenet-714000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:kubenet-714000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0819 04:40:53.569703   19551 start.go:125] createHost starting for "" (driver="qemu2")
	I0819 04:40:53.577168   19551 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0819 04:40:53.594198   19551 start.go:159] libmachine.API.Create for "kubenet-714000" (driver="qemu2")
	I0819 04:40:53.594228   19551 client.go:168] LocalClient.Create starting
	I0819 04:40:53.594299   19551 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19479-15750/.minikube/certs/ca.pem
	I0819 04:40:53.594329   19551 main.go:141] libmachine: Decoding PEM data...
	I0819 04:40:53.594339   19551 main.go:141] libmachine: Parsing certificate...
	I0819 04:40:53.594378   19551 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19479-15750/.minikube/certs/cert.pem
	I0819 04:40:53.594401   19551 main.go:141] libmachine: Decoding PEM data...
	I0819 04:40:53.594412   19551 main.go:141] libmachine: Parsing certificate...
	I0819 04:40:53.594892   19551 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19479-15750/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19479-15750/.minikube/cache/iso/arm64/minikube-v1.33.1-1723740674-19452-arm64.iso...
	I0819 04:40:53.744195   19551 main.go:141] libmachine: Creating SSH key...
	I0819 04:40:53.879508   19551 main.go:141] libmachine: Creating Disk image...
	I0819 04:40:53.879515   19551 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0819 04:40:53.880020   19551 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/kubenet-714000/disk.qcow2.raw /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/kubenet-714000/disk.qcow2
	I0819 04:40:53.889634   19551 main.go:141] libmachine: STDOUT: 
	I0819 04:40:53.889660   19551 main.go:141] libmachine: STDERR: 
	I0819 04:40:53.889725   19551 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/kubenet-714000/disk.qcow2 +20000M
	I0819 04:40:53.897870   19551 main.go:141] libmachine: STDOUT: Image resized.
	
	I0819 04:40:53.897886   19551 main.go:141] libmachine: STDERR: 
	I0819 04:40:53.897904   19551 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/kubenet-714000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/kubenet-714000/disk.qcow2
	I0819 04:40:53.897910   19551 main.go:141] libmachine: Starting QEMU VM...
	I0819 04:40:53.897920   19551 qemu.go:418] Using hvf for hardware acceleration
	I0819 04:40:53.897943   19551 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/kubenet-714000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19479-15750/.minikube/machines/kubenet-714000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/kubenet-714000/qemu.pid -device virtio-net-pci,netdev=net0,mac=c2:f5:9e:3e:4d:d0 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/kubenet-714000/disk.qcow2
	I0819 04:40:53.899611   19551 main.go:141] libmachine: STDOUT: 
	I0819 04:40:53.899624   19551 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0819 04:40:53.899645   19551 client.go:171] duration metric: took 305.41275ms to LocalClient.Create
	I0819 04:40:55.901822   19551 start.go:128] duration metric: took 2.3320985s to createHost
	I0819 04:40:55.901894   19551 start.go:83] releasing machines lock for "kubenet-714000", held for 2.332235208s
	W0819 04:40:55.901948   19551 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0819 04:40:55.912821   19551 out.go:177] * Deleting "kubenet-714000" in qemu2 ...
	W0819 04:40:55.941986   19551 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0819 04:40:55.942023   19551 start.go:729] Will try again in 5 seconds ...
	I0819 04:41:00.944236   19551 start.go:360] acquireMachinesLock for kubenet-714000: {Name:mkd71805c0324a991576936a19be749d2702d472 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 04:41:00.944803   19551 start.go:364] duration metric: took 462.584µs to acquireMachinesLock for "kubenet-714000"
	I0819 04:41:00.944875   19551 start.go:93] Provisioning new machine with config: &{Name:kubenet-714000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:kubenet-714000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0819 04:41:00.945115   19551 start.go:125] createHost starting for "" (driver="qemu2")
	I0819 04:41:00.956732   19551 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0819 04:41:01.004958   19551 start.go:159] libmachine.API.Create for "kubenet-714000" (driver="qemu2")
	I0819 04:41:01.005040   19551 client.go:168] LocalClient.Create starting
	I0819 04:41:01.005152   19551 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19479-15750/.minikube/certs/ca.pem
	I0819 04:41:01.005209   19551 main.go:141] libmachine: Decoding PEM data...
	I0819 04:41:01.005231   19551 main.go:141] libmachine: Parsing certificate...
	I0819 04:41:01.005292   19551 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19479-15750/.minikube/certs/cert.pem
	I0819 04:41:01.005337   19551 main.go:141] libmachine: Decoding PEM data...
	I0819 04:41:01.005352   19551 main.go:141] libmachine: Parsing certificate...
	I0819 04:41:01.005922   19551 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19479-15750/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19479-15750/.minikube/cache/iso/arm64/minikube-v1.33.1-1723740674-19452-arm64.iso...
	I0819 04:41:01.164180   19551 main.go:141] libmachine: Creating SSH key...
	I0819 04:41:01.269285   19551 main.go:141] libmachine: Creating Disk image...
	I0819 04:41:01.269292   19551 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0819 04:41:01.269526   19551 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/kubenet-714000/disk.qcow2.raw /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/kubenet-714000/disk.qcow2
	I0819 04:41:01.279283   19551 main.go:141] libmachine: STDOUT: 
	I0819 04:41:01.279299   19551 main.go:141] libmachine: STDERR: 
	I0819 04:41:01.279425   19551 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/kubenet-714000/disk.qcow2 +20000M
	I0819 04:41:01.287437   19551 main.go:141] libmachine: STDOUT: Image resized.
	
	I0819 04:41:01.287451   19551 main.go:141] libmachine: STDERR: 
	I0819 04:41:01.287461   19551 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/kubenet-714000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/kubenet-714000/disk.qcow2
	I0819 04:41:01.287466   19551 main.go:141] libmachine: Starting QEMU VM...
	I0819 04:41:01.287499   19551 qemu.go:418] Using hvf for hardware acceleration
	I0819 04:41:01.287520   19551 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/kubenet-714000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19479-15750/.minikube/machines/kubenet-714000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/kubenet-714000/qemu.pid -device virtio-net-pci,netdev=net0,mac=32:b7:55:7e:b1:14 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/kubenet-714000/disk.qcow2
	I0819 04:41:01.289237   19551 main.go:141] libmachine: STDOUT: 
	I0819 04:41:01.289253   19551 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0819 04:41:01.289266   19551 client.go:171] duration metric: took 284.220625ms to LocalClient.Create
	I0819 04:41:03.291361   19551 start.go:128] duration metric: took 2.346231875s to createHost
	I0819 04:41:03.291404   19551 start.go:83] releasing machines lock for "kubenet-714000", held for 2.346601667s
	W0819 04:41:03.291605   19551 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p kubenet-714000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kubenet-714000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0819 04:41:03.307049   19551 out.go:201] 
	W0819 04:41:03.311076   19551 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0819 04:41:03.311089   19551 out.go:270] * 
	* 
	W0819 04:41:03.312499   19551 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0819 04:41:03.328066   19551 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/kubenet/Start (9.88s)
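The stderr above also shows the driver's recovery path: when the first createHost fails, minikube deletes the half-created machine, waits five seconds, retries once, and only then exits with GUEST_PROVISION. A schematic Go sketch of that control flow; createHost and deleteHost here are hypothetical stand-ins for the libmachine calls, not minikube's actual API:

	package main

	import (
		"errors"
		"fmt"
		"time"
	)

	// Stand-ins: the real calls live in minikube's libmachine wrapper and
	// fail as shown in the log.
	func createHost() error {
		return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
	}
	func deleteHost() {} // "* Deleting ... in qemu2 ..."

	func main() {
		if err := createHost(); err != nil {
			fmt.Println("! StartHost failed, but will try again:", err)
			deleteHost()
			time.Sleep(5 * time.Second) // "Will try again in 5 seconds ..."
			if err := createHost(); err != nil {
				fmt.Println("X Exiting due to GUEST_PROVISION:", err) // exit status 80
			}
		}
	}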

TestStartStop/group/old-k8s-version/serial/FirstStart (9.76s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p old-k8s-version-916000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p old-k8s-version-916000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0: exit status 80 (9.692977875s)

-- stdout --
	* [old-k8s-version-916000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19479
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19479-15750/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19479-15750/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "old-k8s-version-916000" primary control-plane node in "old-k8s-version-916000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "old-k8s-version-916000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0819 04:41:05.546001   19660 out.go:345] Setting OutFile to fd 1 ...
	I0819 04:41:05.546132   19660 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 04:41:05.546136   19660 out.go:358] Setting ErrFile to fd 2...
	I0819 04:41:05.546138   19660 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 04:41:05.546319   19660 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19479-15750/.minikube/bin
	I0819 04:41:05.547485   19660 out.go:352] Setting JSON to false
	I0819 04:41:05.564021   19660 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":9633,"bootTime":1724058032,"procs":466,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0819 04:41:05.564094   19660 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0819 04:41:05.570036   19660 out.go:177] * [old-k8s-version-916000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0819 04:41:05.578068   19660 out.go:177]   - MINIKUBE_LOCATION=19479
	I0819 04:41:05.578127   19660 notify.go:220] Checking for updates...
	I0819 04:41:05.585020   19660 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19479-15750/kubeconfig
	I0819 04:41:05.588941   19660 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0819 04:41:05.591987   19660 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0819 04:41:05.594981   19660 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19479-15750/.minikube
	I0819 04:41:05.597964   19660 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0819 04:41:05.602265   19660 config.go:182] Loaded profile config "multinode-746000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0819 04:41:05.602328   19660 config.go:182] Loaded profile config "stopped-upgrade-783000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0819 04:41:05.602367   19660 driver.go:392] Setting default libvirt URI to qemu:///system
	I0819 04:41:05.604983   19660 out.go:177] * Using the qemu2 driver based on user configuration
	I0819 04:41:05.611975   19660 start.go:297] selected driver: qemu2
	I0819 04:41:05.611982   19660 start.go:901] validating driver "qemu2" against <nil>
	I0819 04:41:05.611988   19660 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0819 04:41:05.614082   19660 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0819 04:41:05.618020   19660 out.go:177] * Automatically selected the socket_vmnet network
	I0819 04:41:05.621043   19660 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0819 04:41:05.621073   19660 cni.go:84] Creating CNI manager for ""
	I0819 04:41:05.621079   19660 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0819 04:41:05.621110   19660 start.go:340] cluster config:
	{Name:old-k8s-version-916000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-916000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 04:41:05.624410   19660 iso.go:125] acquiring lock: {Name:mk82d926f4bb778c7e93ba3bc4244459b219e238 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 04:41:05.632847   19660 out.go:177] * Starting "old-k8s-version-916000" primary control-plane node in "old-k8s-version-916000" cluster
	I0819 04:41:05.636994   19660 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0819 04:41:05.637011   19660 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19479-15750/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0819 04:41:05.637017   19660 cache.go:56] Caching tarball of preloaded images
	I0819 04:41:05.637066   19660 preload.go:172] Found /Users/jenkins/minikube-integration/19479-15750/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0819 04:41:05.637072   19660 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0819 04:41:05.637127   19660 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19479-15750/.minikube/profiles/old-k8s-version-916000/config.json ...
	I0819 04:41:05.637137   19660 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19479-15750/.minikube/profiles/old-k8s-version-916000/config.json: {Name:mk0730e9d9817b99137140f726faceb494912e2e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 04:41:05.637468   19660 start.go:360] acquireMachinesLock for old-k8s-version-916000: {Name:mkd71805c0324a991576936a19be749d2702d472 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 04:41:05.637500   19660 start.go:364] duration metric: took 24.458µs to acquireMachinesLock for "old-k8s-version-916000"
	I0819 04:41:05.637512   19660 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-916000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-916000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0819 04:41:05.637539   19660 start.go:125] createHost starting for "" (driver="qemu2")
	I0819 04:41:05.642042   19660 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0819 04:41:05.657755   19660 start.go:159] libmachine.API.Create for "old-k8s-version-916000" (driver="qemu2")
	I0819 04:41:05.657782   19660 client.go:168] LocalClient.Create starting
	I0819 04:41:05.657847   19660 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19479-15750/.minikube/certs/ca.pem
	I0819 04:41:05.657880   19660 main.go:141] libmachine: Decoding PEM data...
	I0819 04:41:05.657888   19660 main.go:141] libmachine: Parsing certificate...
	I0819 04:41:05.657927   19660 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19479-15750/.minikube/certs/cert.pem
	I0819 04:41:05.657953   19660 main.go:141] libmachine: Decoding PEM data...
	I0819 04:41:05.657959   19660 main.go:141] libmachine: Parsing certificate...
	I0819 04:41:05.658273   19660 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19479-15750/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19479-15750/.minikube/cache/iso/arm64/minikube-v1.33.1-1723740674-19452-arm64.iso...
	I0819 04:41:05.807101   19660 main.go:141] libmachine: Creating SSH key...
	I0819 04:41:05.838888   19660 main.go:141] libmachine: Creating Disk image...
	I0819 04:41:05.838893   19660 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0819 04:41:05.839108   19660 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/old-k8s-version-916000/disk.qcow2.raw /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/old-k8s-version-916000/disk.qcow2
	I0819 04:41:05.848333   19660 main.go:141] libmachine: STDOUT: 
	I0819 04:41:05.848355   19660 main.go:141] libmachine: STDERR: 
	I0819 04:41:05.848406   19660 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/old-k8s-version-916000/disk.qcow2 +20000M
	I0819 04:41:05.856475   19660 main.go:141] libmachine: STDOUT: Image resized.
	
	I0819 04:41:05.856495   19660 main.go:141] libmachine: STDERR: 
	I0819 04:41:05.856511   19660 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/old-k8s-version-916000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/old-k8s-version-916000/disk.qcow2
	I0819 04:41:05.856517   19660 main.go:141] libmachine: Starting QEMU VM...
	I0819 04:41:05.856533   19660 qemu.go:418] Using hvf for hardware acceleration
	I0819 04:41:05.856555   19660 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/old-k8s-version-916000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19479-15750/.minikube/machines/old-k8s-version-916000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/old-k8s-version-916000/qemu.pid -device virtio-net-pci,netdev=net0,mac=76:b9:c6:64:65:73 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/old-k8s-version-916000/disk.qcow2
	I0819 04:41:05.858312   19660 main.go:141] libmachine: STDOUT: 
	I0819 04:41:05.858339   19660 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0819 04:41:05.858358   19660 client.go:171] duration metric: took 200.574083ms to LocalClient.Create
	I0819 04:41:07.860530   19660 start.go:128] duration metric: took 2.22298475s to createHost
	I0819 04:41:07.860607   19660 start.go:83] releasing machines lock for "old-k8s-version-916000", held for 2.223122s
	W0819 04:41:07.860649   19660 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0819 04:41:07.872323   19660 out.go:177] * Deleting "old-k8s-version-916000" in qemu2 ...
	W0819 04:41:07.896044   19660 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0819 04:41:07.896077   19660 start.go:729] Will try again in 5 seconds ...
	I0819 04:41:12.898128   19660 start.go:360] acquireMachinesLock for old-k8s-version-916000: {Name:mkd71805c0324a991576936a19be749d2702d472 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 04:41:12.898327   19660 start.go:364] duration metric: took 157.25µs to acquireMachinesLock for "old-k8s-version-916000"
	I0819 04:41:12.898348   19660 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-916000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-916000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0819 04:41:12.898449   19660 start.go:125] createHost starting for "" (driver="qemu2")
	I0819 04:41:12.908594   19660 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0819 04:41:12.930596   19660 start.go:159] libmachine.API.Create for "old-k8s-version-916000" (driver="qemu2")
	I0819 04:41:12.930633   19660 client.go:168] LocalClient.Create starting
	I0819 04:41:12.930701   19660 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19479-15750/.minikube/certs/ca.pem
	I0819 04:41:12.930743   19660 main.go:141] libmachine: Decoding PEM data...
	I0819 04:41:12.930755   19660 main.go:141] libmachine: Parsing certificate...
	I0819 04:41:12.930803   19660 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19479-15750/.minikube/certs/cert.pem
	I0819 04:41:12.930836   19660 main.go:141] libmachine: Decoding PEM data...
	I0819 04:41:12.930847   19660 main.go:141] libmachine: Parsing certificate...
	I0819 04:41:12.931282   19660 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19479-15750/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19479-15750/.minikube/cache/iso/arm64/minikube-v1.33.1-1723740674-19452-arm64.iso...
	I0819 04:41:13.082798   19660 main.go:141] libmachine: Creating SSH key...
	I0819 04:41:13.145088   19660 main.go:141] libmachine: Creating Disk image...
	I0819 04:41:13.145094   19660 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0819 04:41:13.145326   19660 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/old-k8s-version-916000/disk.qcow2.raw /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/old-k8s-version-916000/disk.qcow2
	I0819 04:41:13.154762   19660 main.go:141] libmachine: STDOUT: 
	I0819 04:41:13.154781   19660 main.go:141] libmachine: STDERR: 
	I0819 04:41:13.154841   19660 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/old-k8s-version-916000/disk.qcow2 +20000M
	I0819 04:41:13.162919   19660 main.go:141] libmachine: STDOUT: Image resized.
	
	I0819 04:41:13.162933   19660 main.go:141] libmachine: STDERR: 
	I0819 04:41:13.162945   19660 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/old-k8s-version-916000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/old-k8s-version-916000/disk.qcow2
	I0819 04:41:13.162951   19660 main.go:141] libmachine: Starting QEMU VM...
	I0819 04:41:13.162963   19660 qemu.go:418] Using hvf for hardware acceleration
	I0819 04:41:13.162995   19660 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/old-k8s-version-916000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19479-15750/.minikube/machines/old-k8s-version-916000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/old-k8s-version-916000/qemu.pid -device virtio-net-pci,netdev=net0,mac=5e:bd:71:3e:87:fd -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/old-k8s-version-916000/disk.qcow2
	I0819 04:41:13.164649   19660 main.go:141] libmachine: STDOUT: 
	I0819 04:41:13.164684   19660 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0819 04:41:13.164697   19660 client.go:171] duration metric: took 234.063ms to LocalClient.Create
	I0819 04:41:15.166961   19660 start.go:128] duration metric: took 2.268427s to createHost
	I0819 04:41:15.167056   19660 start.go:83] releasing machines lock for "old-k8s-version-916000", held for 2.268745125s
	W0819 04:41:15.167463   19660 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-916000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-916000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0819 04:41:15.178078   19660 out.go:201] 
	W0819 04:41:15.185247   19660 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0819 04:41:15.185271   19660 out.go:270] * 
	* 
	W0819 04:41:15.187858   19660 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0819 04:41:15.199100   19660 out.go:201] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p old-k8s-version-916000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-916000 -n old-k8s-version-916000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-916000 -n old-k8s-version-916000: exit status 7 (63.846292ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-916000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/FirstStart (9.76s)
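Note that disk preparation succeeds on every attempt: qemu-img converts the raw scratch file to qcow2 and then grows it by the requested 20000 MB before the launch fails at the socket. A Go sketch of that two-step sequence, assuming placeholder paths in place of the machine directory shown in the log:

	package main

	import (
		"log"
		"os/exec"
	)

	// run executes a command and aborts with its combined output on failure,
	// mirroring how libmachine surfaces qemu-img STDOUT/STDERR above.
	func run(name string, args ...string) {
		if out, err := exec.Command(name, args...).CombinedOutput(); err != nil {
			log.Fatalf("%s %v: %v\n%s", name, args, err, out)
		}
	}

	func main() {
		raw, qcow := "disk.qcow2.raw", "disk.qcow2" // placeholder paths
		run("qemu-img", "convert", "-f", "raw", "-O", "qcow2", raw, qcow)
		run("qemu-img", "resize", qcow, "+20000M") // matches Disk=20000MB
	}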

TestStartStop/group/old-k8s-version/serial/DeployApp (0.09s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-916000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context old-k8s-version-916000 create -f testdata/busybox.yaml: exit status 1 (30.178541ms)

** stderr ** 
	error: context "old-k8s-version-916000" does not exist

** /stderr **
start_stop_delete_test.go:196: kubectl --context old-k8s-version-916000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-916000 -n old-k8s-version-916000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-916000 -n old-k8s-version-916000: exit status 7 (29.281833ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-916000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-916000 -n old-k8s-version-916000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-916000 -n old-k8s-version-916000: exit status 7 (30.089833ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-916000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/DeployApp (0.09s)
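
DeployApp never reaches the cluster: because FirstStart failed, minikube wrote no kubeconfig entry for this profile, so every "kubectl --context old-k8s-version-916000" invocation aborts with "context ... does not exist". A quick confirmation, assuming the KUBECONFIG path from the log:

	KUBECONFIG=/Users/jenkins/minikube-integration/19479-15750/kubeconfig kubectl config get-contexts

A missing row for old-k8s-version-916000 is expected here for as long as the start failure persists.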

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.12s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p old-k8s-version-916000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-916000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context old-k8s-version-916000 describe deploy/metrics-server -n kube-system: exit status 1 (26.59625ms)

** stderr ** 
	error: context "old-k8s-version-916000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-916000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-916000 -n old-k8s-version-916000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-916000 -n old-k8s-version-916000: exit status 7 (31.336ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-916000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.12s)
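
The expected image string is composed from the two flags passed to "addons enable": --registries=MetricsServer=fake.domain supplies the registry prefix and --images=MetricsServer=registry.k8s.io/echoserver:1.4 the image, yielding fake.domain/registry.k8s.io/echoserver:1.4. Against a healthy cluster the assertion could be reproduced by hand; the jsonpath query below is illustrative, not taken from the test:

	kubectl --context old-k8s-version-916000 -n kube-system get deploy metrics-server -o jsonpath='{.spec.template.spec.containers[0].image}'

Here the check fails earlier, on the nonexistent context, so no deployment info is ever compared.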

TestStartStop/group/old-k8s-version/serial/SecondStart (5.21s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p old-k8s-version-916000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p old-k8s-version-916000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0: exit status 80 (5.175938458s)

-- stdout --
	* [old-k8s-version-916000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19479
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19479-15750/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19479-15750/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.31.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.0
	* Using the qemu2 driver based on existing profile
	* Starting "old-k8s-version-916000" primary control-plane node in "old-k8s-version-916000" cluster
	* Restarting existing qemu2 VM for "old-k8s-version-916000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "old-k8s-version-916000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0819 04:41:19.393334   19716 out.go:345] Setting OutFile to fd 1 ...
	I0819 04:41:19.393463   19716 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 04:41:19.393466   19716 out.go:358] Setting ErrFile to fd 2...
	I0819 04:41:19.393468   19716 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 04:41:19.393585   19716 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19479-15750/.minikube/bin
	I0819 04:41:19.394589   19716 out.go:352] Setting JSON to false
	I0819 04:41:19.411014   19716 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":9647,"bootTime":1724058032,"procs":468,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0819 04:41:19.411107   19716 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0819 04:41:19.416062   19716 out.go:177] * [old-k8s-version-916000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0819 04:41:19.423013   19716 out.go:177]   - MINIKUBE_LOCATION=19479
	I0819 04:41:19.423061   19716 notify.go:220] Checking for updates...
	I0819 04:41:19.428003   19716 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19479-15750/kubeconfig
	I0819 04:41:19.430999   19716 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0819 04:41:19.432293   19716 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0819 04:41:19.435006   19716 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19479-15750/.minikube
	I0819 04:41:19.437993   19716 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0819 04:41:19.441284   19716 config.go:182] Loaded profile config "old-k8s-version-916000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I0819 04:41:19.444930   19716 out.go:177] * Kubernetes 1.31.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.0
	I0819 04:41:19.447967   19716 driver.go:392] Setting default libvirt URI to qemu:///system
	I0819 04:41:19.450969   19716 out.go:177] * Using the qemu2 driver based on existing profile
	I0819 04:41:19.458013   19716 start.go:297] selected driver: qemu2
	I0819 04:41:19.458022   19716 start.go:901] validating driver "qemu2" against &{Name:old-k8s-version-916000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-916000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 04:41:19.458089   19716 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0819 04:41:19.460402   19716 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0819 04:41:19.460428   19716 cni.go:84] Creating CNI manager for ""
	I0819 04:41:19.460435   19716 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0819 04:41:19.460454   19716 start.go:340] cluster config:
	{Name:old-k8s-version-916000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-916000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 04:41:19.463813   19716 iso.go:125] acquiring lock: {Name:mk82d926f4bb778c7e93ba3bc4244459b219e238 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 04:41:19.471957   19716 out.go:177] * Starting "old-k8s-version-916000" primary control-plane node in "old-k8s-version-916000" cluster
	I0819 04:41:19.476049   19716 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0819 04:41:19.476067   19716 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19479-15750/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0819 04:41:19.476076   19716 cache.go:56] Caching tarball of preloaded images
	I0819 04:41:19.476142   19716 preload.go:172] Found /Users/jenkins/minikube-integration/19479-15750/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0819 04:41:19.476155   19716 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0819 04:41:19.476211   19716 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19479-15750/.minikube/profiles/old-k8s-version-916000/config.json ...
	I0819 04:41:19.476563   19716 start.go:360] acquireMachinesLock for old-k8s-version-916000: {Name:mkd71805c0324a991576936a19be749d2702d472 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 04:41:19.476590   19716 start.go:364] duration metric: took 21.083µs to acquireMachinesLock for "old-k8s-version-916000"
	I0819 04:41:19.476602   19716 start.go:96] Skipping create...Using existing machine configuration
	I0819 04:41:19.476610   19716 fix.go:54] fixHost starting: 
	I0819 04:41:19.476753   19716 fix.go:112] recreateIfNeeded on old-k8s-version-916000: state=Stopped err=<nil>
	W0819 04:41:19.476761   19716 fix.go:138] unexpected machine state, will restart: <nil>
	I0819 04:41:19.481036   19716 out.go:177] * Restarting existing qemu2 VM for "old-k8s-version-916000" ...
	I0819 04:41:19.488960   19716 qemu.go:418] Using hvf for hardware acceleration
	I0819 04:41:19.488995   19716 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/old-k8s-version-916000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19479-15750/.minikube/machines/old-k8s-version-916000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/old-k8s-version-916000/qemu.pid -device virtio-net-pci,netdev=net0,mac=5e:bd:71:3e:87:fd -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/old-k8s-version-916000/disk.qcow2
	I0819 04:41:19.490926   19716 main.go:141] libmachine: STDOUT: 
	I0819 04:41:19.490944   19716 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0819 04:41:19.490970   19716 fix.go:56] duration metric: took 14.362ms for fixHost
	I0819 04:41:19.490975   19716 start.go:83] releasing machines lock for "old-k8s-version-916000", held for 14.380583ms
	W0819 04:41:19.490980   19716 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0819 04:41:19.491013   19716 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0819 04:41:19.491017   19716 start.go:729] Will try again in 5 seconds ...
	I0819 04:41:24.493110   19716 start.go:360] acquireMachinesLock for old-k8s-version-916000: {Name:mkd71805c0324a991576936a19be749d2702d472 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 04:41:24.493310   19716 start.go:364] duration metric: took 157.084µs to acquireMachinesLock for "old-k8s-version-916000"
	I0819 04:41:24.493366   19716 start.go:96] Skipping create...Using existing machine configuration
	I0819 04:41:24.493371   19716 fix.go:54] fixHost starting: 
	I0819 04:41:24.493537   19716 fix.go:112] recreateIfNeeded on old-k8s-version-916000: state=Stopped err=<nil>
	W0819 04:41:24.493543   19716 fix.go:138] unexpected machine state, will restart: <nil>
	I0819 04:41:24.502732   19716 out.go:177] * Restarting existing qemu2 VM for "old-k8s-version-916000" ...
	I0819 04:41:24.506753   19716 qemu.go:418] Using hvf for hardware acceleration
	I0819 04:41:24.506811   19716 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/old-k8s-version-916000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19479-15750/.minikube/machines/old-k8s-version-916000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/old-k8s-version-916000/qemu.pid -device virtio-net-pci,netdev=net0,mac=5e:bd:71:3e:87:fd -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/old-k8s-version-916000/disk.qcow2
	I0819 04:41:24.509143   19716 main.go:141] libmachine: STDOUT: 
	I0819 04:41:24.509161   19716 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0819 04:41:24.509182   19716 fix.go:56] duration metric: took 15.811792ms for fixHost
	I0819 04:41:24.509186   19716 start.go:83] releasing machines lock for "old-k8s-version-916000", held for 15.870459ms
	W0819 04:41:24.509220   19716 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-916000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-916000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0819 04:41:24.517743   19716 out.go:201] 
	W0819 04:41:24.521829   19716 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0819 04:41:24.521845   19716 out.go:270] * 
	* 
	W0819 04:41:24.522395   19716 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0819 04:41:24.532623   19716 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p old-k8s-version-916000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-916000 -n old-k8s-version-916000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-916000 -n old-k8s-version-916000: exit status 7 (29.68225ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-916000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (5.21s)
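
SecondStart restarts the stopped VM and trips over the same refused socket, so minikube's own suggestion above is the relevant recovery path. A cleanup-and-retry sketch with flags copied from the failing invocation (this only helps once socket_vmnet is serving again):

	out/minikube-darwin-arm64 delete -p old-k8s-version-916000
	out/minikube-darwin-arm64 start -p old-k8s-version-916000 --memory=2200 --driver=qemu2 --kubernetes-version=v1.20.0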

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (0.03s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "old-k8s-version-916000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-916000 -n old-k8s-version-916000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-916000 -n old-k8s-version-916000: exit status 7 (30.778ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-916000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (0.03s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (0.06s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "old-k8s-version-916000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-916000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context old-k8s-version-916000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (27.066291ms)

** stderr ** 
	error: context "old-k8s-version-916000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context old-k8s-version-916000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-916000 -n old-k8s-version-916000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-916000 -n old-k8s-version-916000: exit status 7 (29.656125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-916000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (0.06s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.07s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p old-k8s-version-916000 image list --format=json
start_stop_delete_test.go:304: v1.20.0 images missing (-want +got):
[]string{
- 	"k8s.gcr.io/coredns:1.7.0",
- 	"k8s.gcr.io/etcd:3.4.13-0",
- 	"k8s.gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"k8s.gcr.io/kube-apiserver:v1.20.0",
- 	"k8s.gcr.io/kube-controller-manager:v1.20.0",
- 	"k8s.gcr.io/kube-proxy:v1.20.0",
- 	"k8s.gcr.io/kube-scheduler:v1.20.0",
- 	"k8s.gcr.io/pause:3.2",
}
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-916000 -n old-k8s-version-916000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-916000 -n old-k8s-version-916000: exit status 7 (29.910375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-916000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.07s)
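
Note that the -want +got diff above contains only removals: "image list" returned an empty set because the guest never booted, not because individual v1.20.0 images were absent from it. Once the host runs, the check can be replayed verbatim:

	out/minikube-darwin-arm64 -p old-k8s-version-916000 image list --format=json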

TestStartStop/group/old-k8s-version/serial/Pause (0.1s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p old-k8s-version-916000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p old-k8s-version-916000 --alsologtostderr -v=1: exit status 83 (41.245208ms)

-- stdout --
	* The control-plane node old-k8s-version-916000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p old-k8s-version-916000"

-- /stdout --
** stderr ** 
	I0819 04:41:24.758311   19735 out.go:345] Setting OutFile to fd 1 ...
	I0819 04:41:24.759197   19735 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 04:41:24.759203   19735 out.go:358] Setting ErrFile to fd 2...
	I0819 04:41:24.759205   19735 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 04:41:24.759341   19735 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19479-15750/.minikube/bin
	I0819 04:41:24.759547   19735 out.go:352] Setting JSON to false
	I0819 04:41:24.759556   19735 mustload.go:65] Loading cluster: old-k8s-version-916000
	I0819 04:41:24.759753   19735 config.go:182] Loaded profile config "old-k8s-version-916000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I0819 04:41:24.763529   19735 out.go:177] * The control-plane node old-k8s-version-916000 host is not running: state=Stopped
	I0819 04:41:24.767537   19735 out.go:177]   To start a cluster, run: "minikube start -p old-k8s-version-916000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p old-k8s-version-916000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-916000 -n old-k8s-version-916000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-916000 -n old-k8s-version-916000: exit status 7 (30.703708ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-916000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-916000 -n old-k8s-version-916000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-916000 -n old-k8s-version-916000: exit status 7 (29.283958ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-916000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/Pause (0.10s)
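
Three exit codes recur through this report and all describe the same underlying state: start fails with 80 (GUEST_PROVISION), commands such as pause that need a running host bail out early with 83 after printing advice, and status reports the stopped host with 7. Checking the code directly (status command verbatim from the post-mortem helpers; the trailing echo is illustrative):

	out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-916000; echo "exit=$?"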

TestStartStop/group/no-preload/serial/FirstStart (9.88s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p no-preload-037000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p no-preload-037000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.0: exit status 80 (9.819664167s)

-- stdout --
	* [no-preload-037000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19479
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19479-15750/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19479-15750/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "no-preload-037000" primary control-plane node in "no-preload-037000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "no-preload-037000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0819 04:41:25.081632   19752 out.go:345] Setting OutFile to fd 1 ...
	I0819 04:41:25.081777   19752 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 04:41:25.081781   19752 out.go:358] Setting ErrFile to fd 2...
	I0819 04:41:25.081783   19752 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 04:41:25.081932   19752 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19479-15750/.minikube/bin
	I0819 04:41:25.083135   19752 out.go:352] Setting JSON to false
	I0819 04:41:25.099778   19752 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":9653,"bootTime":1724058032,"procs":468,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0819 04:41:25.099851   19752 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0819 04:41:25.105047   19752 out.go:177] * [no-preload-037000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0819 04:41:25.111081   19752 out.go:177]   - MINIKUBE_LOCATION=19479
	I0819 04:41:25.111132   19752 notify.go:220] Checking for updates...
	I0819 04:41:25.119035   19752 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19479-15750/kubeconfig
	I0819 04:41:25.121951   19752 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0819 04:41:25.125032   19752 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0819 04:41:25.128043   19752 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19479-15750/.minikube
	I0819 04:41:25.130955   19752 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0819 04:41:25.134363   19752 config.go:182] Loaded profile config "multinode-746000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0819 04:41:25.134427   19752 config.go:182] Loaded profile config "stopped-upgrade-783000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0819 04:41:25.134467   19752 driver.go:392] Setting default libvirt URI to qemu:///system
	I0819 04:41:25.139046   19752 out.go:177] * Using the qemu2 driver based on user configuration
	I0819 04:41:25.146022   19752 start.go:297] selected driver: qemu2
	I0819 04:41:25.146028   19752 start.go:901] validating driver "qemu2" against <nil>
	I0819 04:41:25.146034   19752 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0819 04:41:25.148200   19752 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0819 04:41:25.151050   19752 out.go:177] * Automatically selected the socket_vmnet network
	I0819 04:41:25.152539   19752 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0819 04:41:25.152570   19752 cni.go:84] Creating CNI manager for ""
	I0819 04:41:25.152577   19752 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0819 04:41:25.152581   19752 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0819 04:41:25.152604   19752 start.go:340] cluster config:
	{Name:no-preload-037000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:no-preload-037000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 04:41:25.156199   19752 iso.go:125] acquiring lock: {Name:mk82d926f4bb778c7e93ba3bc4244459b219e238 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 04:41:25.164045   19752 out.go:177] * Starting "no-preload-037000" primary control-plane node in "no-preload-037000" cluster
	I0819 04:41:25.168007   19752 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0819 04:41:25.168115   19752 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19479-15750/.minikube/profiles/no-preload-037000/config.json ...
	I0819 04:41:25.168120   19752 cache.go:107] acquiring lock: {Name:mkdb4a901b1d383102161da2a6c0c3197f0db761 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 04:41:25.168137   19752 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19479-15750/.minikube/profiles/no-preload-037000/config.json: {Name:mk9f94eb7f4c20fc22e2d75dba803651a3cd030e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 04:41:25.168123   19752 cache.go:107] acquiring lock: {Name:mk54bc18dc4f807ff5ed44fedc231d7ce04d9a14 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 04:41:25.168148   19752 cache.go:107] acquiring lock: {Name:mkb5c924135aa94dc7ff07f076873664fdb2b39b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 04:41:25.168182   19752 cache.go:107] acquiring lock: {Name:mkf2b76847ed5b86f73d3e05ce267589ac340a98 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 04:41:25.168291   19752 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.0
	I0819 04:41:25.168298   19752 cache.go:107] acquiring lock: {Name:mk0c99a36fd4f9642726f9774a801afaedc1aac1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 04:41:25.168193   19752 cache.go:115] /Users/jenkins/minikube-integration/19479-15750/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0819 04:41:25.168297   19752 cache.go:107] acquiring lock: {Name:mk8ea3b2d6a2b202c2ce85b651cf780e64c28cd5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 04:41:25.168320   19752 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/19479-15750/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 198.208µs
	I0819 04:41:25.168328   19752 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/19479-15750/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0819 04:41:25.168227   19752 cache.go:107] acquiring lock: {Name:mkac520dd3780237e3a94d26406fc64b90bb1893 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 04:41:25.168306   19752 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.31.0
	I0819 04:41:25.168320   19752 cache.go:107] acquiring lock: {Name:mkf5ebeee1ae39e5e8171a44ab9d8eebdfd365eb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 04:41:25.168450   19752 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.15-0
	I0819 04:41:25.168483   19752 image.go:135] retrieving image: registry.k8s.io/pause:3.10
	I0819 04:41:25.168503   19752 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0819 04:41:25.168581   19752 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.31.0
	I0819 04:41:25.168623   19752 start.go:360] acquireMachinesLock for no-preload-037000: {Name:mkd71805c0324a991576936a19be749d2702d472 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 04:41:25.168663   19752 start.go:364] duration metric: took 33.583µs to acquireMachinesLock for "no-preload-037000"
	I0819 04:41:25.168683   19752 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.31.0
	I0819 04:41:25.168680   19752 start.go:93] Provisioning new machine with config: &{Name:no-preload-037000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:no-preload-037000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0819 04:41:25.168712   19752 start.go:125] createHost starting for "" (driver="qemu2")
	I0819 04:41:25.176023   19752 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0819 04:41:25.180159   19752 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0819 04:41:25.180586   19752 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.0
	I0819 04:41:25.180625   19752 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.0
	I0819 04:41:25.180765   19752 image.go:178] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I0819 04:41:25.180900   19752 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.0
	I0819 04:41:25.180908   19752 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.15-0
	I0819 04:41:25.182667   19752 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.0
	I0819 04:41:25.192696   19752 start.go:159] libmachine.API.Create for "no-preload-037000" (driver="qemu2")
	I0819 04:41:25.192741   19752 client.go:168] LocalClient.Create starting
	I0819 04:41:25.192838   19752 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19479-15750/.minikube/certs/ca.pem
	I0819 04:41:25.192874   19752 main.go:141] libmachine: Decoding PEM data...
	I0819 04:41:25.192883   19752 main.go:141] libmachine: Parsing certificate...
	I0819 04:41:25.192930   19752 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19479-15750/.minikube/certs/cert.pem
	I0819 04:41:25.192954   19752 main.go:141] libmachine: Decoding PEM data...
	I0819 04:41:25.192962   19752 main.go:141] libmachine: Parsing certificate...
	I0819 04:41:25.193414   19752 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19479-15750/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19479-15750/.minikube/cache/iso/arm64/minikube-v1.33.1-1723740674-19452-arm64.iso...
	I0819 04:41:25.351741   19752 main.go:141] libmachine: Creating SSH key...
	I0819 04:41:25.444764   19752 main.go:141] libmachine: Creating Disk image...
	I0819 04:41:25.444785   19752 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0819 04:41:25.445025   19752 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/no-preload-037000/disk.qcow2.raw /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/no-preload-037000/disk.qcow2
	I0819 04:41:25.455242   19752 main.go:141] libmachine: STDOUT: 
	I0819 04:41:25.455263   19752 main.go:141] libmachine: STDERR: 
	I0819 04:41:25.455310   19752 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/no-preload-037000/disk.qcow2 +20000M
	I0819 04:41:25.463892   19752 main.go:141] libmachine: STDOUT: Image resized.
	
	I0819 04:41:25.463912   19752 main.go:141] libmachine: STDERR: 
	I0819 04:41:25.463931   19752 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/no-preload-037000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/no-preload-037000/disk.qcow2
	I0819 04:41:25.463934   19752 main.go:141] libmachine: Starting QEMU VM...
	I0819 04:41:25.463949   19752 qemu.go:418] Using hvf for hardware acceleration
	I0819 04:41:25.463977   19752 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/no-preload-037000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19479-15750/.minikube/machines/no-preload-037000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/no-preload-037000/qemu.pid -device virtio-net-pci,netdev=net0,mac=2e:c7:da:13:ae:fc -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/no-preload-037000/disk.qcow2
	I0819 04:41:25.465799   19752 main.go:141] libmachine: STDOUT: 
	I0819 04:41:25.465816   19752 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0819 04:41:25.465833   19752 client.go:171] duration metric: took 273.090709ms to LocalClient.Create
	I0819 04:41:25.605402   19752 cache.go:162] opening:  /Users/jenkins/minikube-integration/19479-15750/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10
	I0819 04:41:25.608148   19752 cache.go:162] opening:  /Users/jenkins/minikube-integration/19479-15750/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1
	I0819 04:41:25.610925   19752 cache.go:162] opening:  /Users/jenkins/minikube-integration/19479-15750/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.0
	I0819 04:41:25.631210   19752 cache.go:162] opening:  /Users/jenkins/minikube-integration/19479-15750/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.0
	I0819 04:41:25.642189   19752 cache.go:162] opening:  /Users/jenkins/minikube-integration/19479-15750/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.0
	I0819 04:41:25.650869   19752 cache.go:162] opening:  /Users/jenkins/minikube-integration/19479-15750/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.0
	I0819 04:41:25.651399   19752 cache.go:162] opening:  /Users/jenkins/minikube-integration/19479-15750/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0
	I0819 04:41:25.730650   19752 cache.go:157] /Users/jenkins/minikube-integration/19479-15750/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10 exists
	I0819 04:41:25.730665   19752 cache.go:96] cache image "registry.k8s.io/pause:3.10" -> "/Users/jenkins/minikube-integration/19479-15750/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10" took 562.460083ms
	I0819 04:41:25.730672   19752 cache.go:80] save to tar file registry.k8s.io/pause:3.10 -> /Users/jenkins/minikube-integration/19479-15750/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10 succeeded
	I0819 04:41:27.465999   19752 start.go:128] duration metric: took 2.297302833s to createHost
	I0819 04:41:27.466050   19752 start.go:83] releasing machines lock for "no-preload-037000", held for 2.297412416s
	W0819 04:41:27.466095   19752 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0819 04:41:27.478348   19752 out.go:177] * Deleting "no-preload-037000" in qemu2 ...
	W0819 04:41:27.504729   19752 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0819 04:41:27.504744   19752 start.go:729] Will try again in 5 seconds ...
	I0819 04:41:28.994594   19752 cache.go:157] /Users/jenkins/minikube-integration/19479-15750/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.0 exists
	I0819 04:41:28.994606   19752 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.31.0" -> "/Users/jenkins/minikube-integration/19479-15750/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.0" took 3.826435375s
	I0819 04:41:28.994612   19752 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.31.0 -> /Users/jenkins/minikube-integration/19479-15750/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.0 succeeded
	I0819 04:41:29.281863   19752 cache.go:157] /Users/jenkins/minikube-integration/19479-15750/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.0 exists
	I0819 04:41:29.281899   19752 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.31.0" -> "/Users/jenkins/minikube-integration/19479-15750/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.0" took 4.113687542s
	I0819 04:41:29.281916   19752 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.31.0 -> /Users/jenkins/minikube-integration/19479-15750/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.0 succeeded
	I0819 04:41:29.324248   19752 cache.go:157] /Users/jenkins/minikube-integration/19479-15750/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.0 exists
	I0819 04:41:29.324278   19752 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.31.0" -> "/Users/jenkins/minikube-integration/19479-15750/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.0" took 4.156216875s
	I0819 04:41:29.324290   19752 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.31.0 -> /Users/jenkins/minikube-integration/19479-15750/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.0 succeeded
	I0819 04:41:29.646290   19752 cache.go:157] /Users/jenkins/minikube-integration/19479-15750/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.0 exists
	I0819 04:41:29.646313   19752 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.31.0" -> "/Users/jenkins/minikube-integration/19479-15750/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.0" took 4.478269458s
	I0819 04:41:29.646323   19752 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.31.0 -> /Users/jenkins/minikube-integration/19479-15750/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.0 succeeded
	I0819 04:41:29.837992   19752 cache.go:157] /Users/jenkins/minikube-integration/19479-15750/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1 exists
	I0819 04:41:29.838014   19752 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.11.1" -> "/Users/jenkins/minikube-integration/19479-15750/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1" took 4.669836666s
	I0819 04:41:29.838024   19752 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.11.1 -> /Users/jenkins/minikube-integration/19479-15750/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1 succeeded
	I0819 04:41:32.505640   19752 start.go:360] acquireMachinesLock for no-preload-037000: {Name:mkd71805c0324a991576936a19be749d2702d472 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 04:41:32.506177   19752 start.go:364] duration metric: took 450.791µs to acquireMachinesLock for "no-preload-037000"
	I0819 04:41:32.506308   19752 start.go:93] Provisioning new machine with config: &{Name:no-preload-037000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:no-preload-037000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0819 04:41:32.506531   19752 start.go:125] createHost starting for "" (driver="qemu2")
	I0819 04:41:32.511328   19752 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0819 04:41:32.562045   19752 start.go:159] libmachine.API.Create for "no-preload-037000" (driver="qemu2")
	I0819 04:41:32.562122   19752 client.go:168] LocalClient.Create starting
	I0819 04:41:32.562254   19752 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19479-15750/.minikube/certs/ca.pem
	I0819 04:41:32.562324   19752 main.go:141] libmachine: Decoding PEM data...
	I0819 04:41:32.562346   19752 main.go:141] libmachine: Parsing certificate...
	I0819 04:41:32.562420   19752 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19479-15750/.minikube/certs/cert.pem
	I0819 04:41:32.562465   19752 main.go:141] libmachine: Decoding PEM data...
	I0819 04:41:32.562478   19752 main.go:141] libmachine: Parsing certificate...
	I0819 04:41:32.562999   19752 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19479-15750/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19479-15750/.minikube/cache/iso/arm64/minikube-v1.33.1-1723740674-19452-arm64.iso...
	I0819 04:41:32.723688   19752 main.go:141] libmachine: Creating SSH key...
	I0819 04:41:32.809083   19752 main.go:141] libmachine: Creating Disk image...
	I0819 04:41:32.809091   19752 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0819 04:41:32.809339   19752 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/no-preload-037000/disk.qcow2.raw /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/no-preload-037000/disk.qcow2
	I0819 04:41:32.819152   19752 main.go:141] libmachine: STDOUT: 
	I0819 04:41:32.819174   19752 main.go:141] libmachine: STDERR: 
	I0819 04:41:32.819242   19752 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/no-preload-037000/disk.qcow2 +20000M
	I0819 04:41:32.827491   19752 main.go:141] libmachine: STDOUT: Image resized.
	
	I0819 04:41:32.827507   19752 main.go:141] libmachine: STDERR: 
	I0819 04:41:32.827520   19752 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/no-preload-037000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/no-preload-037000/disk.qcow2
	I0819 04:41:32.827523   19752 main.go:141] libmachine: Starting QEMU VM...
	I0819 04:41:32.827533   19752 qemu.go:418] Using hvf for hardware acceleration
	I0819 04:41:32.827580   19752 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/no-preload-037000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19479-15750/.minikube/machines/no-preload-037000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/no-preload-037000/qemu.pid -device virtio-net-pci,netdev=net0,mac=32:31:26:ce:49:ed -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/no-preload-037000/disk.qcow2
	I0819 04:41:32.829395   19752 main.go:141] libmachine: STDOUT: 
	I0819 04:41:32.829422   19752 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0819 04:41:32.829438   19752 client.go:171] duration metric: took 267.315875ms to LocalClient.Create
	I0819 04:41:33.613019   19752 cache.go:157] /Users/jenkins/minikube-integration/19479-15750/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0 exists
	I0819 04:41:33.613077   19752 cache.go:96] cache image "registry.k8s.io/etcd:3.5.15-0" -> "/Users/jenkins/minikube-integration/19479-15750/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0" took 8.445020959s
	I0819 04:41:33.613093   19752 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.15-0 -> /Users/jenkins/minikube-integration/19479-15750/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0 succeeded
	I0819 04:41:33.613125   19752 cache.go:87] Successfully saved all images to host disk.
	I0819 04:41:34.831673   19752 start.go:128] duration metric: took 2.325118208s to createHost
	I0819 04:41:34.831774   19752 start.go:83] releasing machines lock for "no-preload-037000", held for 2.325605s
	W0819 04:41:34.832065   19752 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p no-preload-037000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p no-preload-037000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0819 04:41:34.841654   19752 out.go:201] 
	W0819 04:41:34.846670   19752 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0819 04:41:34.846710   19752 out.go:270] * 
	* 
	W0819 04:41:34.848443   19752 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0819 04:41:34.858611   19752 out.go:201] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p no-preload-037000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-037000 -n no-preload-037000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-037000 -n no-preload-037000: exit status 7 (61.622667ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-037000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/FirstStart (9.88s)
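Every start attempt in this group fails at the same point: socket_vmnet_client cannot reach /var/run/socket_vmnet ("Connection refused"), so QEMU never receives its network file descriptor and host creation aborts before Kubernetes is ever involved. A minimal triage sketch for the CI host follows, assuming socket_vmnet was installed via Homebrew as described in minikube's qemu2 driver documentation; the client and socket paths are taken from the SocketVMnetClientPath and SocketVMnetPath values in the config dump above, while the service commands are an assumption about this host's setup:

	# Confirm the socket file exists and is a UNIX socket:
	ls -l /var/run/socket_vmnet
	# Check whether the daemon is loaded under launchd:
	sudo launchctl list | grep -i socket_vmnet
	# If it is missing or dead, restart it via Homebrew services (requires root):
	sudo brew services restart socket_vmnet

If the daemon comes back up, re-running the failed start command above should get past LocalClient.Create.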

TestStartStop/group/no-preload/serial/DeployApp (0.09s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-037000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context no-preload-037000 create -f testdata/busybox.yaml: exit status 1 (29.572042ms)

** stderr ** 
	error: context "no-preload-037000" does not exist

** /stderr **
start_stop_delete_test.go:196: kubectl --context no-preload-037000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-037000 -n no-preload-037000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-037000 -n no-preload-037000: exit status 7 (30.042166ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-037000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-037000 -n no-preload-037000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-037000 -n no-preload-037000: exit status 7 (30.48425ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-037000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/DeployApp (0.09s)
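The DeployApp failure above, like every remaining subtest in this group, reports context "no-preload-037000" does not exist: because FirstStart never created the VM, minikube never wrote a context for the profile into the kubeconfig, so kubectl has nothing to target. A quick sketch to confirm the failure is inherited rather than new, using the KUBECONFIG path logged by the start command:

	# Expected to report the context as not found, since the cluster was never provisioned:
	KUBECONFIG=/Users/jenkins/minikube-integration/19479-15750/kubeconfig \
	  kubectl config get-contexts no-preload-037000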

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.11s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p no-preload-037000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-037000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context no-preload-037000 describe deploy/metrics-server -n kube-system: exit status 1 (26.688834ms)

** stderr ** 
	error: context "no-preload-037000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context no-preload-037000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-037000 -n no-preload-037000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-037000 -n no-preload-037000: exit status 7 (29.983042ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-037000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.11s)

TestStartStop/group/no-preload/serial/SecondStart (5.25s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p no-preload-037000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.0
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p no-preload-037000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.0: exit status 80 (5.1853795s)

-- stdout --
	* [no-preload-037000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19479
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19479-15750/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19479-15750/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "no-preload-037000" primary control-plane node in "no-preload-037000" cluster
	* Restarting existing qemu2 VM for "no-preload-037000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "no-preload-037000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0819 04:41:38.838380   19834 out.go:345] Setting OutFile to fd 1 ...
	I0819 04:41:38.838543   19834 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 04:41:38.838547   19834 out.go:358] Setting ErrFile to fd 2...
	I0819 04:41:38.838549   19834 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 04:41:38.838679   19834 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19479-15750/.minikube/bin
	I0819 04:41:38.839619   19834 out.go:352] Setting JSON to false
	I0819 04:41:38.855788   19834 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":9666,"bootTime":1724058032,"procs":466,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0819 04:41:38.855856   19834 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0819 04:41:38.860434   19834 out.go:177] * [no-preload-037000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0819 04:41:38.867524   19834 out.go:177]   - MINIKUBE_LOCATION=19479
	I0819 04:41:38.867577   19834 notify.go:220] Checking for updates...
	I0819 04:41:38.875379   19834 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19479-15750/kubeconfig
	I0819 04:41:38.876851   19834 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0819 04:41:38.879411   19834 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0819 04:41:38.882414   19834 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19479-15750/.minikube
	I0819 04:41:38.885434   19834 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0819 04:41:38.888615   19834 config.go:182] Loaded profile config "no-preload-037000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0819 04:41:38.888849   19834 driver.go:392] Setting default libvirt URI to qemu:///system
	I0819 04:41:38.892347   19834 out.go:177] * Using the qemu2 driver based on existing profile
	I0819 04:41:38.897327   19834 start.go:297] selected driver: qemu2
	I0819 04:41:38.897334   19834 start.go:901] validating driver "qemu2" against &{Name:no-preload-037000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:no-preload-037000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 04:41:38.897395   19834 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0819 04:41:38.899694   19834 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0819 04:41:38.899739   19834 cni.go:84] Creating CNI manager for ""
	I0819 04:41:38.899748   19834 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0819 04:41:38.899771   19834 start.go:340] cluster config:
	{Name:no-preload-037000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:no-preload-037000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 04:41:38.903288   19834 iso.go:125] acquiring lock: {Name:mk82d926f4bb778c7e93ba3bc4244459b219e238 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 04:41:38.910404   19834 out.go:177] * Starting "no-preload-037000" primary control-plane node in "no-preload-037000" cluster
	I0819 04:41:38.914360   19834 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0819 04:41:38.914426   19834 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19479-15750/.minikube/profiles/no-preload-037000/config.json ...
	I0819 04:41:38.914440   19834 cache.go:107] acquiring lock: {Name:mkdb4a901b1d383102161da2a6c0c3197f0db761 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 04:41:38.914445   19834 cache.go:107] acquiring lock: {Name:mk54bc18dc4f807ff5ed44fedc231d7ce04d9a14 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 04:41:38.914461   19834 cache.go:107] acquiring lock: {Name:mkb5c924135aa94dc7ff07f076873664fdb2b39b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 04:41:38.914493   19834 cache.go:115] /Users/jenkins/minikube-integration/19479-15750/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0819 04:41:38.914497   19834 cache.go:115] /Users/jenkins/minikube-integration/19479-15750/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.0 exists
	I0819 04:41:38.914498   19834 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/19479-15750/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 61.042µs
	I0819 04:41:38.914503   19834 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.31.0" -> "/Users/jenkins/minikube-integration/19479-15750/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.0" took 64.25µs
	I0819 04:41:38.914507   19834 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.31.0 -> /Users/jenkins/minikube-integration/19479-15750/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.0 succeeded
	I0819 04:41:38.914505   19834 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/19479-15750/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0819 04:41:38.914510   19834 cache.go:107] acquiring lock: {Name:mk8ea3b2d6a2b202c2ce85b651cf780e64c28cd5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 04:41:38.914517   19834 cache.go:115] /Users/jenkins/minikube-integration/19479-15750/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.0 exists
	I0819 04:41:38.914519   19834 cache.go:107] acquiring lock: {Name:mkac520dd3780237e3a94d26406fc64b90bb1893 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 04:41:38.914525   19834 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.31.0" -> "/Users/jenkins/minikube-integration/19479-15750/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.0" took 82.625µs
	I0819 04:41:38.914541   19834 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.31.0 -> /Users/jenkins/minikube-integration/19479-15750/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.0 succeeded
	I0819 04:41:38.914549   19834 cache.go:115] /Users/jenkins/minikube-integration/19479-15750/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.0 exists
	I0819 04:41:38.914553   19834 cache.go:115] /Users/jenkins/minikube-integration/19479-15750/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10 exists
	I0819 04:41:38.914552   19834 cache.go:107] acquiring lock: {Name:mkf5ebeee1ae39e5e8171a44ab9d8eebdfd365eb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 04:41:38.914556   19834 cache.go:96] cache image "registry.k8s.io/pause:3.10" -> "/Users/jenkins/minikube-integration/19479-15750/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10" took 38.541µs
	I0819 04:41:38.914560   19834 cache.go:80] save to tar file registry.k8s.io/pause:3.10 -> /Users/jenkins/minikube-integration/19479-15750/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10 succeeded
	I0819 04:41:38.914555   19834 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.31.0" -> "/Users/jenkins/minikube-integration/19479-15750/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.0" took 46.042µs
	I0819 04:41:38.914565   19834 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.31.0 -> /Users/jenkins/minikube-integration/19479-15750/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.0 succeeded
	I0819 04:41:38.914564   19834 cache.go:107] acquiring lock: {Name:mk0c99a36fd4f9642726f9774a801afaedc1aac1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 04:41:38.914589   19834 cache.go:115] /Users/jenkins/minikube-integration/19479-15750/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.0 exists
	I0819 04:41:38.914589   19834 cache.go:107] acquiring lock: {Name:mkf2b76847ed5b86f73d3e05ce267589ac340a98 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 04:41:38.914594   19834 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.31.0" -> "/Users/jenkins/minikube-integration/19479-15750/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.0" took 43.125µs
	I0819 04:41:38.914598   19834 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.31.0 -> /Users/jenkins/minikube-integration/19479-15750/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.0 succeeded
	I0819 04:41:38.914600   19834 cache.go:115] /Users/jenkins/minikube-integration/19479-15750/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1 exists
	I0819 04:41:38.914608   19834 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.11.1" -> "/Users/jenkins/minikube-integration/19479-15750/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1" took 45.042µs
	I0819 04:41:38.914611   19834 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.11.1 -> /Users/jenkins/minikube-integration/19479-15750/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1 succeeded
	I0819 04:41:38.914641   19834 cache.go:115] /Users/jenkins/minikube-integration/19479-15750/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0 exists
	I0819 04:41:38.914645   19834 cache.go:96] cache image "registry.k8s.io/etcd:3.5.15-0" -> "/Users/jenkins/minikube-integration/19479-15750/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0" took 71.209µs
	I0819 04:41:38.914650   19834 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.15-0 -> /Users/jenkins/minikube-integration/19479-15750/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0 succeeded
	I0819 04:41:38.914654   19834 cache.go:87] Successfully saved all images to host disk.
	I0819 04:41:38.914815   19834 start.go:360] acquireMachinesLock for no-preload-037000: {Name:mkd71805c0324a991576936a19be749d2702d472 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 04:41:38.914840   19834 start.go:364] duration metric: took 19.833µs to acquireMachinesLock for "no-preload-037000"
	I0819 04:41:38.914849   19834 start.go:96] Skipping create...Using existing machine configuration
	I0819 04:41:38.914853   19834 fix.go:54] fixHost starting: 
	I0819 04:41:38.914964   19834 fix.go:112] recreateIfNeeded on no-preload-037000: state=Stopped err=<nil>
	W0819 04:41:38.914972   19834 fix.go:138] unexpected machine state, will restart: <nil>
	I0819 04:41:38.923431   19834 out.go:177] * Restarting existing qemu2 VM for "no-preload-037000" ...
	I0819 04:41:38.931452   19834 qemu.go:418] Using hvf for hardware acceleration
	I0819 04:41:38.931498   19834 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/no-preload-037000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19479-15750/.minikube/machines/no-preload-037000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/no-preload-037000/qemu.pid -device virtio-net-pci,netdev=net0,mac=32:31:26:ce:49:ed -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/no-preload-037000/disk.qcow2
	I0819 04:41:38.933385   19834 main.go:141] libmachine: STDOUT: 
	I0819 04:41:38.933399   19834 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0819 04:41:38.933424   19834 fix.go:56] duration metric: took 18.571459ms for fixHost
	I0819 04:41:38.933427   19834 start.go:83] releasing machines lock for "no-preload-037000", held for 18.583959ms
	W0819 04:41:38.933433   19834 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0819 04:41:38.933466   19834 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0819 04:41:38.933470   19834 start.go:729] Will try again in 5 seconds ...
	I0819 04:41:43.935608   19834 start.go:360] acquireMachinesLock for no-preload-037000: {Name:mkd71805c0324a991576936a19be749d2702d472 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 04:41:43.936137   19834 start.go:364] duration metric: took 424.166µs to acquireMachinesLock for "no-preload-037000"
	I0819 04:41:43.936297   19834 start.go:96] Skipping create...Using existing machine configuration
	I0819 04:41:43.936318   19834 fix.go:54] fixHost starting: 
	I0819 04:41:43.937208   19834 fix.go:112] recreateIfNeeded on no-preload-037000: state=Stopped err=<nil>
	W0819 04:41:43.937236   19834 fix.go:138] unexpected machine state, will restart: <nil>
	I0819 04:41:43.942009   19834 out.go:177] * Restarting existing qemu2 VM for "no-preload-037000" ...
	I0819 04:41:43.948980   19834 qemu.go:418] Using hvf for hardware acceleration
	I0819 04:41:43.949252   19834 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/no-preload-037000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19479-15750/.minikube/machines/no-preload-037000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/no-preload-037000/qemu.pid -device virtio-net-pci,netdev=net0,mac=32:31:26:ce:49:ed -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/no-preload-037000/disk.qcow2
	I0819 04:41:43.959134   19834 main.go:141] libmachine: STDOUT: 
	I0819 04:41:43.959196   19834 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0819 04:41:43.959262   19834 fix.go:56] duration metric: took 22.948125ms for fixHost
	I0819 04:41:43.959277   19834 start.go:83] releasing machines lock for "no-preload-037000", held for 23.116334ms
	W0819 04:41:43.959453   19834 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p no-preload-037000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p no-preload-037000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0819 04:41:43.968041   19834 out.go:201] 
	W0819 04:41:43.971036   19834 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0819 04:41:43.971076   19834 out.go:270] * 
	* 
	W0819 04:41:43.973837   19834 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0819 04:41:43.982041   19834 out.go:201] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p no-preload-037000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-037000 -n no-preload-037000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-037000 -n no-preload-037000: exit status 7 (62.493333ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-037000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/SecondStart (5.25s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (0.03s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "no-preload-037000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-037000 -n no-preload-037000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-037000 -n no-preload-037000: exit status 7 (31.818958ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-037000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (0.03s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (0.06s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "no-preload-037000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-037000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context no-preload-037000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (27.247917ms)

** stderr ** 
	error: context "no-preload-037000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context no-preload-037000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-037000 -n no-preload-037000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-037000 -n no-preload-037000: exit status 7 (30.080167ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-037000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (0.06s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.07s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p no-preload-037000 image list --format=json
start_stop_delete_test.go:304: v1.31.0 images missing (-want +got):
[]string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.1",
- 	"registry.k8s.io/etcd:3.5.15-0",
- 	"registry.k8s.io/kube-apiserver:v1.31.0",
- 	"registry.k8s.io/kube-controller-manager:v1.31.0",
- 	"registry.k8s.io/kube-proxy:v1.31.0",
- 	"registry.k8s.io/kube-scheduler:v1.31.0",
- 	"registry.k8s.io/pause:3.10",
}
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-037000 -n no-preload-037000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-037000 -n no-preload-037000: exit status 7 (29.081417ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-037000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.07s)

TestStartStop/group/no-preload/serial/Pause (0.1s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p no-preload-037000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p no-preload-037000 --alsologtostderr -v=1: exit status 83 (42.427917ms)

-- stdout --
	* The control-plane node no-preload-037000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p no-preload-037000"

-- /stdout --
** stderr ** 
	I0819 04:41:44.247631   19856 out.go:345] Setting OutFile to fd 1 ...
	I0819 04:41:44.247796   19856 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 04:41:44.247799   19856 out.go:358] Setting ErrFile to fd 2...
	I0819 04:41:44.247801   19856 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 04:41:44.247938   19856 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19479-15750/.minikube/bin
	I0819 04:41:44.248176   19856 out.go:352] Setting JSON to false
	I0819 04:41:44.248186   19856 mustload.go:65] Loading cluster: no-preload-037000
	I0819 04:41:44.248372   19856 config.go:182] Loaded profile config "no-preload-037000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0819 04:41:44.253288   19856 out.go:177] * The control-plane node no-preload-037000 host is not running: state=Stopped
	I0819 04:41:44.256103   19856 out.go:177]   To start a cluster, run: "minikube start -p no-preload-037000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p no-preload-037000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-037000 -n no-preload-037000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-037000 -n no-preload-037000: exit status 7 (30.309417ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-037000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-037000 -n no-preload-037000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-037000 -n no-preload-037000: exit status 7 (29.2845ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-037000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/Pause (0.10s)

TestStartStop/group/embed-certs/serial/FirstStart (10.07s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p embed-certs-718000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.31.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p embed-certs-718000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.31.0: exit status 80 (10.012283458s)

-- stdout --
	* [embed-certs-718000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19479
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19479-15750/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19479-15750/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "embed-certs-718000" primary control-plane node in "embed-certs-718000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "embed-certs-718000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0819 04:41:44.569046   19873 out.go:345] Setting OutFile to fd 1 ...
	I0819 04:41:44.569181   19873 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 04:41:44.569184   19873 out.go:358] Setting ErrFile to fd 2...
	I0819 04:41:44.569186   19873 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 04:41:44.569327   19873 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19479-15750/.minikube/bin
	I0819 04:41:44.570389   19873 out.go:352] Setting JSON to false
	I0819 04:41:44.586990   19873 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":9672,"bootTime":1724058032,"procs":469,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0819 04:41:44.587064   19873 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0819 04:41:44.591625   19873 out.go:177] * [embed-certs-718000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0819 04:41:44.598728   19873 out.go:177]   - MINIKUBE_LOCATION=19479
	I0819 04:41:44.598789   19873 notify.go:220] Checking for updates...
	I0819 04:41:44.605725   19873 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19479-15750/kubeconfig
	I0819 04:41:44.608757   19873 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0819 04:41:44.611735   19873 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0819 04:41:44.614730   19873 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19479-15750/.minikube
	I0819 04:41:44.617761   19873 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0819 04:41:44.620985   19873 config.go:182] Loaded profile config "multinode-746000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0819 04:41:44.621041   19873 config.go:182] Loaded profile config "stopped-upgrade-783000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0819 04:41:44.621086   19873 driver.go:392] Setting default libvirt URI to qemu:///system
	I0819 04:41:44.625676   19873 out.go:177] * Using the qemu2 driver based on user configuration
	I0819 04:41:44.632624   19873 start.go:297] selected driver: qemu2
	I0819 04:41:44.632631   19873 start.go:901] validating driver "qemu2" against <nil>
	I0819 04:41:44.632637   19873 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0819 04:41:44.634872   19873 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0819 04:41:44.637735   19873 out.go:177] * Automatically selected the socket_vmnet network
	I0819 04:41:44.640848   19873 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0819 04:41:44.640885   19873 cni.go:84] Creating CNI manager for ""
	I0819 04:41:44.640892   19873 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0819 04:41:44.640899   19873 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0819 04:41:44.640934   19873 start.go:340] cluster config:
	{Name:embed-certs-718000 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:embed-certs-718000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 04:41:44.644478   19873 iso.go:125] acquiring lock: {Name:mk82d926f4bb778c7e93ba3bc4244459b219e238 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 04:41:44.650715   19873 out.go:177] * Starting "embed-certs-718000" primary control-plane node in "embed-certs-718000" cluster
	I0819 04:41:44.654501   19873 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0819 04:41:44.654530   19873 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19479-15750/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0819 04:41:44.654547   19873 cache.go:56] Caching tarball of preloaded images
	I0819 04:41:44.654630   19873 preload.go:172] Found /Users/jenkins/minikube-integration/19479-15750/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0819 04:41:44.654636   19873 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0819 04:41:44.654702   19873 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19479-15750/.minikube/profiles/embed-certs-718000/config.json ...
	I0819 04:41:44.654713   19873 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19479-15750/.minikube/profiles/embed-certs-718000/config.json: {Name:mk4afcde6b04f3cdabde61910705b1101741944f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 04:41:44.654938   19873 start.go:360] acquireMachinesLock for embed-certs-718000: {Name:mkd71805c0324a991576936a19be749d2702d472 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 04:41:44.654974   19873 start.go:364] duration metric: took 27.625µs to acquireMachinesLock for "embed-certs-718000"
	I0819 04:41:44.654987   19873 start.go:93] Provisioning new machine with config: &{Name:embed-certs-718000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:embed-certs-718000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0819 04:41:44.655023   19873 start.go:125] createHost starting for "" (driver="qemu2")
	I0819 04:41:44.659681   19873 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0819 04:41:44.675992   19873 start.go:159] libmachine.API.Create for "embed-certs-718000" (driver="qemu2")
	I0819 04:41:44.676027   19873 client.go:168] LocalClient.Create starting
	I0819 04:41:44.676091   19873 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19479-15750/.minikube/certs/ca.pem
	I0819 04:41:44.676120   19873 main.go:141] libmachine: Decoding PEM data...
	I0819 04:41:44.676131   19873 main.go:141] libmachine: Parsing certificate...
	I0819 04:41:44.676175   19873 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19479-15750/.minikube/certs/cert.pem
	I0819 04:41:44.676208   19873 main.go:141] libmachine: Decoding PEM data...
	I0819 04:41:44.676216   19873 main.go:141] libmachine: Parsing certificate...
	I0819 04:41:44.676668   19873 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19479-15750/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19479-15750/.minikube/cache/iso/arm64/minikube-v1.33.1-1723740674-19452-arm64.iso...
	I0819 04:41:44.926786   19873 main.go:141] libmachine: Creating SSH key...
	I0819 04:41:45.082427   19873 main.go:141] libmachine: Creating Disk image...
	I0819 04:41:45.082434   19873 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0819 04:41:45.082613   19873 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/embed-certs-718000/disk.qcow2.raw /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/embed-certs-718000/disk.qcow2
	I0819 04:41:45.091637   19873 main.go:141] libmachine: STDOUT: 
	I0819 04:41:45.091663   19873 main.go:141] libmachine: STDERR: 
	I0819 04:41:45.091716   19873 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/embed-certs-718000/disk.qcow2 +20000M
	I0819 04:41:45.099675   19873 main.go:141] libmachine: STDOUT: Image resized.
	
	I0819 04:41:45.099692   19873 main.go:141] libmachine: STDERR: 
	I0819 04:41:45.099714   19873 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/embed-certs-718000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/embed-certs-718000/disk.qcow2
	I0819 04:41:45.099720   19873 main.go:141] libmachine: Starting QEMU VM...
	I0819 04:41:45.099732   19873 qemu.go:418] Using hvf for hardware acceleration
	I0819 04:41:45.099764   19873 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/embed-certs-718000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19479-15750/.minikube/machines/embed-certs-718000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/embed-certs-718000/qemu.pid -device virtio-net-pci,netdev=net0,mac=da:5c:ed:ea:2b:d0 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/embed-certs-718000/disk.qcow2
	I0819 04:41:45.101367   19873 main.go:141] libmachine: STDOUT: 
	I0819 04:41:45.101383   19873 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0819 04:41:45.101400   19873 client.go:171] duration metric: took 425.376292ms to LocalClient.Create
	I0819 04:41:47.103537   19873 start.go:128] duration metric: took 2.448529375s to createHost
	I0819 04:41:47.103589   19873 start.go:83] releasing machines lock for "embed-certs-718000", held for 2.448645166s
	W0819 04:41:47.103661   19873 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0819 04:41:47.120711   19873 out.go:177] * Deleting "embed-certs-718000" in qemu2 ...
	W0819 04:41:47.143950   19873 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0819 04:41:47.143970   19873 start.go:729] Will try again in 5 seconds ...
	I0819 04:41:52.146214   19873 start.go:360] acquireMachinesLock for embed-certs-718000: {Name:mkd71805c0324a991576936a19be749d2702d472 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 04:41:52.146690   19873 start.go:364] duration metric: took 381.125µs to acquireMachinesLock for "embed-certs-718000"
	I0819 04:41:52.146852   19873 start.go:93] Provisioning new machine with config: &{Name:embed-certs-718000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:embed-certs-718000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0819 04:41:52.147180   19873 start.go:125] createHost starting for "" (driver="qemu2")
	I0819 04:41:52.163905   19873 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0819 04:41:52.213265   19873 start.go:159] libmachine.API.Create for "embed-certs-718000" (driver="qemu2")
	I0819 04:41:52.213316   19873 client.go:168] LocalClient.Create starting
	I0819 04:41:52.213416   19873 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19479-15750/.minikube/certs/ca.pem
	I0819 04:41:52.213485   19873 main.go:141] libmachine: Decoding PEM data...
	I0819 04:41:52.213501   19873 main.go:141] libmachine: Parsing certificate...
	I0819 04:41:52.213560   19873 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19479-15750/.minikube/certs/cert.pem
	I0819 04:41:52.213607   19873 main.go:141] libmachine: Decoding PEM data...
	I0819 04:41:52.213622   19873 main.go:141] libmachine: Parsing certificate...
	I0819 04:41:52.214253   19873 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19479-15750/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19479-15750/.minikube/cache/iso/arm64/minikube-v1.33.1-1723740674-19452-arm64.iso...
	I0819 04:41:52.373652   19873 main.go:141] libmachine: Creating SSH key...
	I0819 04:41:52.488403   19873 main.go:141] libmachine: Creating Disk image...
	I0819 04:41:52.488415   19873 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0819 04:41:52.488641   19873 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/embed-certs-718000/disk.qcow2.raw /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/embed-certs-718000/disk.qcow2
	I0819 04:41:52.497956   19873 main.go:141] libmachine: STDOUT: 
	I0819 04:41:52.497974   19873 main.go:141] libmachine: STDERR: 
	I0819 04:41:52.498026   19873 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/embed-certs-718000/disk.qcow2 +20000M
	I0819 04:41:52.506072   19873 main.go:141] libmachine: STDOUT: Image resized.
	
	I0819 04:41:52.506089   19873 main.go:141] libmachine: STDERR: 
	I0819 04:41:52.506108   19873 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/embed-certs-718000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/embed-certs-718000/disk.qcow2
	I0819 04:41:52.506114   19873 main.go:141] libmachine: Starting QEMU VM...
	I0819 04:41:52.506123   19873 qemu.go:418] Using hvf for hardware acceleration
	I0819 04:41:52.506150   19873 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/embed-certs-718000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19479-15750/.minikube/machines/embed-certs-718000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/embed-certs-718000/qemu.pid -device virtio-net-pci,netdev=net0,mac=0a:cf:17:d1:8b:0a -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/embed-certs-718000/disk.qcow2
	I0819 04:41:52.507786   19873 main.go:141] libmachine: STDOUT: 
	I0819 04:41:52.507802   19873 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0819 04:41:52.507816   19873 client.go:171] duration metric: took 294.500375ms to LocalClient.Create
	I0819 04:41:54.509954   19873 start.go:128] duration metric: took 2.362780958s to createHost
	I0819 04:41:54.510007   19873 start.go:83] releasing machines lock for "embed-certs-718000", held for 2.363326167s
	W0819 04:41:54.510332   19873 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p embed-certs-718000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p embed-certs-718000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0819 04:41:54.527929   19873 out.go:201] 
	W0819 04:41:54.531991   19873 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0819 04:41:54.532039   19873 out.go:270] * 
	* 
	W0819 04:41:54.534639   19873 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0819 04:41:54.542933   19873 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p embed-certs-718000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.31.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-718000 -n embed-certs-718000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-718000 -n embed-certs-718000: exit status 7 (51.613792ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-718000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/FirstStart (10.07s)
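Root cause for this group: the launch command above runs QEMU through /opt/socket_vmnet/bin/socket_vmnet_client, which must first connect to the unix socket at /var/run/socket_vmnet and hand QEMU the connected descriptor (the `-netdev socket,id=net0,fd=3` argument); the repeated `Failed to connect to "/var/run/socket_vmnet": Connection refused` in STDERR means no socket_vmnet daemon was accepting on that path on the agent. A minimal Go sketch of the same probe (a hypothetical standalone check, not part of the minikube test suite; the socket path is the one from the logs):

package main

// Probe the unix socket that socket_vmnet_client needs before it can hand
// QEMU a connected descriptor.

import (
	"fmt"
	"net"
	"os"
	"time"
)

func main() {
	const sock = "/var/run/socket_vmnet" // path reported in the failing runs above
	conn, err := net.DialTimeout("unix", sock, 2*time.Second)
	if err != nil {
		// "connection refused" here reproduces the STDERR in the logs:
		// the path is configured, but no daemon is listening on it.
		fmt.Fprintf(os.Stderr, "socket_vmnet not reachable at %s: %v\n", sock, err)
		os.Exit(1)
	}
	conn.Close()
	fmt.Printf("socket_vmnet is accepting connections at %s\n", sock)
}

If the probe fails, restarting the daemon on the host (typically `sudo brew services start socket_vmnet` for a Homebrew install; an assumption about this agent's setup) should clear the whole family of failures below.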

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (12.05s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p default-k8s-diff-port-664000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.31.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p default-k8s-diff-port-664000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.31.0: exit status 80 (11.989775583s)

                                                
                                                
-- stdout --
	* [default-k8s-diff-port-664000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19479
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19479-15750/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19479-15750/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "default-k8s-diff-port-664000" primary control-plane node in "default-k8s-diff-port-664000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "default-k8s-diff-port-664000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0819 04:41:44.974338   19890 out.go:345] Setting OutFile to fd 1 ...
	I0819 04:41:44.974463   19890 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 04:41:44.974470   19890 out.go:358] Setting ErrFile to fd 2...
	I0819 04:41:44.974472   19890 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 04:41:44.974614   19890 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19479-15750/.minikube/bin
	I0819 04:41:44.975652   19890 out.go:352] Setting JSON to false
	I0819 04:41:44.992128   19890 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":9672,"bootTime":1724058032,"procs":468,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0819 04:41:44.992196   19890 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0819 04:41:44.996667   19890 out.go:177] * [default-k8s-diff-port-664000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0819 04:41:45.006651   19890 out.go:177]   - MINIKUBE_LOCATION=19479
	I0819 04:41:45.006745   19890 notify.go:220] Checking for updates...
	I0819 04:41:45.013636   19890 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19479-15750/kubeconfig
	I0819 04:41:45.016654   19890 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0819 04:41:45.019711   19890 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0819 04:41:45.022726   19890 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19479-15750/.minikube
	I0819 04:41:45.025724   19890 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0819 04:41:45.028988   19890 config.go:182] Loaded profile config "embed-certs-718000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0819 04:41:45.029043   19890 config.go:182] Loaded profile config "multinode-746000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0819 04:41:45.029091   19890 driver.go:392] Setting default libvirt URI to qemu:///system
	I0819 04:41:45.033722   19890 out.go:177] * Using the qemu2 driver based on user configuration
	I0819 04:41:45.040683   19890 start.go:297] selected driver: qemu2
	I0819 04:41:45.040689   19890 start.go:901] validating driver "qemu2" against <nil>
	I0819 04:41:45.040695   19890 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0819 04:41:45.042781   19890 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0819 04:41:45.045696   19890 out.go:177] * Automatically selected the socket_vmnet network
	I0819 04:41:45.048669   19890 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0819 04:41:45.048687   19890 cni.go:84] Creating CNI manager for ""
	I0819 04:41:45.048693   19890 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0819 04:41:45.048696   19890 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0819 04:41:45.048728   19890 start.go:340] cluster config:
	{Name:default-k8s-diff-port-664000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:default-k8s-diff-port-664000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 04:41:45.052008   19890 iso.go:125] acquiring lock: {Name:mk82d926f4bb778c7e93ba3bc4244459b219e238 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 04:41:45.058677   19890 out.go:177] * Starting "default-k8s-diff-port-664000" primary control-plane node in "default-k8s-diff-port-664000" cluster
	I0819 04:41:45.062604   19890 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0819 04:41:45.062616   19890 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19479-15750/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0819 04:41:45.062624   19890 cache.go:56] Caching tarball of preloaded images
	I0819 04:41:45.062670   19890 preload.go:172] Found /Users/jenkins/minikube-integration/19479-15750/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0819 04:41:45.062675   19890 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0819 04:41:45.062727   19890 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19479-15750/.minikube/profiles/default-k8s-diff-port-664000/config.json ...
	I0819 04:41:45.062739   19890 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19479-15750/.minikube/profiles/default-k8s-diff-port-664000/config.json: {Name:mkcbaafafd8d519c182886bfcaf03f99e5748f62 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 04:41:45.062994   19890 start.go:360] acquireMachinesLock for default-k8s-diff-port-664000: {Name:mkd71805c0324a991576936a19be749d2702d472 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 04:41:47.103739   19890 start.go:364] duration metric: took 2.040751541s to acquireMachinesLock for "default-k8s-diff-port-664000"
	I0819 04:41:47.103922   19890 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-664000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:default-k8s-diff-port-664000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0819 04:41:47.104121   19890 start.go:125] createHost starting for "" (driver="qemu2")
	I0819 04:41:47.113689   19890 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0819 04:41:47.162565   19890 start.go:159] libmachine.API.Create for "default-k8s-diff-port-664000" (driver="qemu2")
	I0819 04:41:47.162611   19890 client.go:168] LocalClient.Create starting
	I0819 04:41:47.162748   19890 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19479-15750/.minikube/certs/ca.pem
	I0819 04:41:47.162809   19890 main.go:141] libmachine: Decoding PEM data...
	I0819 04:41:47.162824   19890 main.go:141] libmachine: Parsing certificate...
	I0819 04:41:47.162895   19890 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19479-15750/.minikube/certs/cert.pem
	I0819 04:41:47.162940   19890 main.go:141] libmachine: Decoding PEM data...
	I0819 04:41:47.162954   19890 main.go:141] libmachine: Parsing certificate...
	I0819 04:41:47.163677   19890 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19479-15750/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19479-15750/.minikube/cache/iso/arm64/minikube-v1.33.1-1723740674-19452-arm64.iso...
	I0819 04:41:47.324291   19890 main.go:141] libmachine: Creating SSH key...
	I0819 04:41:47.420918   19890 main.go:141] libmachine: Creating Disk image...
	I0819 04:41:47.420924   19890 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0819 04:41:47.421168   19890 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/default-k8s-diff-port-664000/disk.qcow2.raw /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/default-k8s-diff-port-664000/disk.qcow2
	I0819 04:41:47.430531   19890 main.go:141] libmachine: STDOUT: 
	I0819 04:41:47.430554   19890 main.go:141] libmachine: STDERR: 
	I0819 04:41:47.430609   19890 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/default-k8s-diff-port-664000/disk.qcow2 +20000M
	I0819 04:41:47.438440   19890 main.go:141] libmachine: STDOUT: Image resized.
	
	I0819 04:41:47.438455   19890 main.go:141] libmachine: STDERR: 
	I0819 04:41:47.438472   19890 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/default-k8s-diff-port-664000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/default-k8s-diff-port-664000/disk.qcow2
	I0819 04:41:47.438478   19890 main.go:141] libmachine: Starting QEMU VM...
	I0819 04:41:47.438490   19890 qemu.go:418] Using hvf for hardware acceleration
	I0819 04:41:47.438516   19890 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/default-k8s-diff-port-664000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19479-15750/.minikube/machines/default-k8s-diff-port-664000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/default-k8s-diff-port-664000/qemu.pid -device virtio-net-pci,netdev=net0,mac=8e:b5:be:28:10:e4 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/default-k8s-diff-port-664000/disk.qcow2
	I0819 04:41:47.440143   19890 main.go:141] libmachine: STDOUT: 
	I0819 04:41:47.440160   19890 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0819 04:41:47.440179   19890 client.go:171] duration metric: took 277.567041ms to LocalClient.Create
	I0819 04:41:49.442312   19890 start.go:128] duration metric: took 2.338198542s to createHost
	I0819 04:41:49.442369   19890 start.go:83] releasing machines lock for "default-k8s-diff-port-664000", held for 2.338599208s
	W0819 04:41:49.442480   19890 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0819 04:41:49.461531   19890 out.go:177] * Deleting "default-k8s-diff-port-664000" in qemu2 ...
	W0819 04:41:49.495231   19890 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0819 04:41:49.495265   19890 start.go:729] Will try again in 5 seconds ...
	I0819 04:41:54.497426   19890 start.go:360] acquireMachinesLock for default-k8s-diff-port-664000: {Name:mkd71805c0324a991576936a19be749d2702d472 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 04:41:54.510117   19890 start.go:364] duration metric: took 12.564584ms to acquireMachinesLock for "default-k8s-diff-port-664000"
	I0819 04:41:54.510272   19890 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-664000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:default-k8s-diff-port-664000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0819 04:41:54.510549   19890 start.go:125] createHost starting for "" (driver="qemu2")
	I0819 04:41:54.520961   19890 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0819 04:41:54.574007   19890 start.go:159] libmachine.API.Create for "default-k8s-diff-port-664000" (driver="qemu2")
	I0819 04:41:54.574053   19890 client.go:168] LocalClient.Create starting
	I0819 04:41:54.574169   19890 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19479-15750/.minikube/certs/ca.pem
	I0819 04:41:54.574211   19890 main.go:141] libmachine: Decoding PEM data...
	I0819 04:41:54.574235   19890 main.go:141] libmachine: Parsing certificate...
	I0819 04:41:54.574305   19890 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19479-15750/.minikube/certs/cert.pem
	I0819 04:41:54.574336   19890 main.go:141] libmachine: Decoding PEM data...
	I0819 04:41:54.574348   19890 main.go:141] libmachine: Parsing certificate...
	I0819 04:41:54.574839   19890 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19479-15750/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19479-15750/.minikube/cache/iso/arm64/minikube-v1.33.1-1723740674-19452-arm64.iso...
	I0819 04:41:54.738253   19890 main.go:141] libmachine: Creating SSH key...
	I0819 04:41:54.861439   19890 main.go:141] libmachine: Creating Disk image...
	I0819 04:41:54.861450   19890 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0819 04:41:54.861668   19890 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/default-k8s-diff-port-664000/disk.qcow2.raw /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/default-k8s-diff-port-664000/disk.qcow2
	I0819 04:41:54.881906   19890 main.go:141] libmachine: STDOUT: 
	I0819 04:41:54.881934   19890 main.go:141] libmachine: STDERR: 
	I0819 04:41:54.881997   19890 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/default-k8s-diff-port-664000/disk.qcow2 +20000M
	I0819 04:41:54.890515   19890 main.go:141] libmachine: STDOUT: Image resized.
	
	I0819 04:41:54.890533   19890 main.go:141] libmachine: STDERR: 
	I0819 04:41:54.890546   19890 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/default-k8s-diff-port-664000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/default-k8s-diff-port-664000/disk.qcow2
	I0819 04:41:54.890552   19890 main.go:141] libmachine: Starting QEMU VM...
	I0819 04:41:54.890565   19890 qemu.go:418] Using hvf for hardware acceleration
	I0819 04:41:54.890602   19890 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/default-k8s-diff-port-664000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19479-15750/.minikube/machines/default-k8s-diff-port-664000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/default-k8s-diff-port-664000/qemu.pid -device virtio-net-pci,netdev=net0,mac=22:cb:c8:53:a5:cc -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/default-k8s-diff-port-664000/disk.qcow2
	I0819 04:41:54.892267   19890 main.go:141] libmachine: STDOUT: 
	I0819 04:41:54.892289   19890 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0819 04:41:54.892302   19890 client.go:171] duration metric: took 318.239042ms to LocalClient.Create
	I0819 04:41:56.894575   19890 start.go:128] duration metric: took 2.383970667s to createHost
	I0819 04:41:56.894653   19890 start.go:83] releasing machines lock for "default-k8s-diff-port-664000", held for 2.384547083s
	W0819 04:41:56.895001   19890 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-664000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-664000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0819 04:41:56.900808   19890 out.go:201] 
	W0819 04:41:56.908758   19890 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0819 04:41:56.908785   19890 out.go:270] * 
	* 
	W0819 04:41:56.911248   19890 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0819 04:41:56.920703   19890 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p default-k8s-diff-port-664000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.31.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-664000 -n default-k8s-diff-port-664000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-664000 -n default-k8s-diff-port-664000: exit status 7 (62.677ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-664000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (12.05s)

                                                
                                    
TestStartStop/group/embed-certs/serial/DeployApp (0.1s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-718000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context embed-certs-718000 create -f testdata/busybox.yaml: exit status 1 (32.383792ms)

                                                
                                                
** stderr ** 
	error: context "embed-certs-718000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:196: kubectl --context embed-certs-718000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-718000 -n embed-certs-718000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-718000 -n embed-certs-718000: exit status 7 (34.134709ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-718000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-718000 -n embed-certs-718000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-718000 -n embed-certs-718000: exit status 7 (34.067ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-718000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/DeployApp (0.10s)
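The DeployApp and EnableAddonWhileActive failures in this group are downstream of the failed FirstStart: because the VM never booted, `minikube start` never wrote an `embed-certs-718000` (or `default-k8s-diff-port-664000`) context into the kubeconfig, so every `kubectl --context <profile>` invocation exits 1 with `context ... does not exist`. A hypothetical pre-check (not present in helpers_test.go) that a harness could run to distinguish this cascade from a genuine deploy failure:

package main

// Verify that the kubeconfig context a successful `minikube start` would have
// written actually exists before running steps that depend on it.

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	const want = "embed-certs-718000" // profile name from the failed FirstStart
	// `kubectl config get-contexts -o name` prints one context name per line.
	out, err := exec.Command("kubectl", "config", "get-contexts", "-o", "name").Output()
	if err != nil {
		fmt.Fprintln(os.Stderr, "kubectl not usable:", err)
		os.Exit(1)
	}
	for _, ctx := range strings.Split(strings.TrimSpace(string(out)), "\n") {
		if ctx == want {
			fmt.Println("context exists; dependent steps can run")
			return
		}
	}
	fmt.Fprintf(os.Stderr, "context %q missing; dependent steps can only repeat the upstream failure\n", want)
	os.Exit(1)
}

An absent context name confirms that the `does not exist` errors below carry no new information beyond the FirstStart failure.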

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.13s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p embed-certs-718000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-718000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context embed-certs-718000 describe deploy/metrics-server -n kube-system: exit status 1 (27.675042ms)

                                                
                                                
** stderr ** 
	error: context "embed-certs-718000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context embed-certs-718000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-718000 -n embed-certs-718000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-718000 -n embed-certs-718000: exit status 7 (30.208583ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-718000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.13s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (0.09s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-664000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-664000 create -f testdata/busybox.yaml: exit status 1 (30.784459ms)

                                                
                                                
** stderr ** 
	error: context "default-k8s-diff-port-664000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:196: kubectl --context default-k8s-diff-port-664000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-664000 -n default-k8s-diff-port-664000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-664000 -n default-k8s-diff-port-664000: exit status 7 (29.403292ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-664000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-664000 -n default-k8s-diff-port-664000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-664000 -n default-k8s-diff-port-664000: exit status 7 (28.410166ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-664000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (0.09s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.11s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p default-k8s-diff-port-664000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-664000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-664000 describe deploy/metrics-server -n kube-system: exit status 1 (26.5765ms)

                                                
                                                
** stderr ** 
	error: context "default-k8s-diff-port-664000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context default-k8s-diff-port-664000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-664000 -n default-k8s-diff-port-664000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-664000 -n default-k8s-diff-port-664000: exit status 7 (29.78025ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-664000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.11s)

TestStartStop/group/embed-certs/serial/SecondStart (5.25s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p embed-certs-718000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.31.0
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p embed-certs-718000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.31.0: exit status 80 (5.183950875s)

-- stdout --
	* [embed-certs-718000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19479
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19479-15750/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19479-15750/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "embed-certs-718000" primary control-plane node in "embed-certs-718000" cluster
	* Restarting existing qemu2 VM for "embed-certs-718000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "embed-certs-718000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0819 04:41:57.598538   19963 out.go:345] Setting OutFile to fd 1 ...
	I0819 04:41:57.598657   19963 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 04:41:57.598660   19963 out.go:358] Setting ErrFile to fd 2...
	I0819 04:41:57.598662   19963 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 04:41:57.598793   19963 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19479-15750/.minikube/bin
	I0819 04:41:57.599811   19963 out.go:352] Setting JSON to false
	I0819 04:41:57.615880   19963 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":9685,"bootTime":1724058032,"procs":467,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0819 04:41:57.615946   19963 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0819 04:41:57.620841   19963 out.go:177] * [embed-certs-718000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0819 04:41:57.627747   19963 out.go:177]   - MINIKUBE_LOCATION=19479
	I0819 04:41:57.627791   19963 notify.go:220] Checking for updates...
	I0819 04:41:57.635728   19963 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19479-15750/kubeconfig
	I0819 04:41:57.638804   19963 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0819 04:41:57.641790   19963 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0819 04:41:57.644808   19963 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19479-15750/.minikube
	I0819 04:41:57.647775   19963 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0819 04:41:57.651128   19963 config.go:182] Loaded profile config "embed-certs-718000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0819 04:41:57.651388   19963 driver.go:392] Setting default libvirt URI to qemu:///system
	I0819 04:41:57.655737   19963 out.go:177] * Using the qemu2 driver based on existing profile
	I0819 04:41:57.662821   19963 start.go:297] selected driver: qemu2
	I0819 04:41:57.662829   19963 start.go:901] validating driver "qemu2" against &{Name:embed-certs-718000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:embed-certs-718000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 04:41:57.662882   19963 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0819 04:41:57.665169   19963 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0819 04:41:57.665197   19963 cni.go:84] Creating CNI manager for ""
	I0819 04:41:57.665210   19963 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0819 04:41:57.665232   19963 start.go:340] cluster config:
	{Name:embed-certs-718000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:embed-certs-718000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 04:41:57.668701   19963 iso.go:125] acquiring lock: {Name:mk82d926f4bb778c7e93ba3bc4244459b219e238 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 04:41:57.675805   19963 out.go:177] * Starting "embed-certs-718000" primary control-plane node in "embed-certs-718000" cluster
	I0819 04:41:57.679620   19963 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0819 04:41:57.679636   19963 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19479-15750/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0819 04:41:57.679651   19963 cache.go:56] Caching tarball of preloaded images
	I0819 04:41:57.679708   19963 preload.go:172] Found /Users/jenkins/minikube-integration/19479-15750/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0819 04:41:57.679714   19963 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0819 04:41:57.679778   19963 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19479-15750/.minikube/profiles/embed-certs-718000/config.json ...
	I0819 04:41:57.680136   19963 start.go:360] acquireMachinesLock for embed-certs-718000: {Name:mkd71805c0324a991576936a19be749d2702d472 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 04:41:57.680162   19963 start.go:364] duration metric: took 20.666µs to acquireMachinesLock for "embed-certs-718000"
	I0819 04:41:57.680171   19963 start.go:96] Skipping create...Using existing machine configuration
	I0819 04:41:57.680178   19963 fix.go:54] fixHost starting: 
	I0819 04:41:57.680296   19963 fix.go:112] recreateIfNeeded on embed-certs-718000: state=Stopped err=<nil>
	W0819 04:41:57.680303   19963 fix.go:138] unexpected machine state, will restart: <nil>
	I0819 04:41:57.688733   19963 out.go:177] * Restarting existing qemu2 VM for "embed-certs-718000" ...
	I0819 04:41:57.692676   19963 qemu.go:418] Using hvf for hardware acceleration
	I0819 04:41:57.692709   19963 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/embed-certs-718000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19479-15750/.minikube/machines/embed-certs-718000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/embed-certs-718000/qemu.pid -device virtio-net-pci,netdev=net0,mac=0a:cf:17:d1:8b:0a -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/embed-certs-718000/disk.qcow2
	I0819 04:41:57.694652   19963 main.go:141] libmachine: STDOUT: 
	I0819 04:41:57.694683   19963 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0819 04:41:57.694712   19963 fix.go:56] duration metric: took 14.537ms for fixHost
	I0819 04:41:57.694716   19963 start.go:83] releasing machines lock for "embed-certs-718000", held for 14.550292ms
	W0819 04:41:57.694723   19963 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0819 04:41:57.694760   19963 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0819 04:41:57.694764   19963 start.go:729] Will try again in 5 seconds ...
	I0819 04:42:02.696497   19963 start.go:360] acquireMachinesLock for embed-certs-718000: {Name:mkd71805c0324a991576936a19be749d2702d472 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 04:42:02.696945   19963 start.go:364] duration metric: took 314.5µs to acquireMachinesLock for "embed-certs-718000"
	I0819 04:42:02.697075   19963 start.go:96] Skipping create...Using existing machine configuration
	I0819 04:42:02.697094   19963 fix.go:54] fixHost starting: 
	I0819 04:42:02.697865   19963 fix.go:112] recreateIfNeeded on embed-certs-718000: state=Stopped err=<nil>
	W0819 04:42:02.697892   19963 fix.go:138] unexpected machine state, will restart: <nil>
	I0819 04:42:02.706457   19963 out.go:177] * Restarting existing qemu2 VM for "embed-certs-718000" ...
	I0819 04:42:02.710517   19963 qemu.go:418] Using hvf for hardware acceleration
	I0819 04:42:02.710788   19963 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/embed-certs-718000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19479-15750/.minikube/machines/embed-certs-718000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/embed-certs-718000/qemu.pid -device virtio-net-pci,netdev=net0,mac=0a:cf:17:d1:8b:0a -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/embed-certs-718000/disk.qcow2
	I0819 04:42:02.719650   19963 main.go:141] libmachine: STDOUT: 
	I0819 04:42:02.719720   19963 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0819 04:42:02.719804   19963 fix.go:56] duration metric: took 22.709792ms for fixHost
	I0819 04:42:02.719817   19963 start.go:83] releasing machines lock for "embed-certs-718000", held for 22.851166ms
	W0819 04:42:02.720047   19963 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p embed-certs-718000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p embed-certs-718000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0819 04:42:02.727395   19963 out.go:201] 
	W0819 04:42:02.731318   19963 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0819 04:42:02.731367   19963 out.go:270] * 
	* 
	W0819 04:42:02.734349   19963 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0819 04:42:02.742356   19963 out.go:201] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p embed-certs-718000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.31.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-718000 -n embed-certs-718000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-718000 -n embed-certs-718000: exit status 7 (67.068083ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-718000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/SecondStart (5.25s)
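
Both restart attempts in this failure stop at the same step: socket_vmnet_client cannot open the unix socket at /var/run/socket_vmnet, so qemu never receives its netdev file descriptor and the qemu2 driver gives up with GUEST_PROVISION. A minimal probe (a sketch, not part of the test suite; the socket path is taken verbatim from the logged command line) reproduces the refusal without involving minikube:

package main

import (
	"fmt"
	"net"
	"os"
)

func main() {
	// Same path socket_vmnet_client is handed in the failing command line.
	const sock = "/var/run/socket_vmnet"
	conn, err := net.Dial("unix", sock)
	if err != nil {
		// With the daemon down this yields the same "connection refused"
		// the driver logs above.
		fmt.Fprintf(os.Stderr, "dial %s: %v\n", sock, err)
		os.Exit(1)
	}
	defer conn.Close()
	fmt.Println("socket_vmnet is accepting connections")
}

If the probe fails the same way, the problem is host-side (no socket_vmnet daemon listening), which is consistent with the driver retrying once and then aborting above.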

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (5.26s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p default-k8s-diff-port-664000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.31.0
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p default-k8s-diff-port-664000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.31.0: exit status 80 (5.191260834s)

-- stdout --
	* [default-k8s-diff-port-664000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19479
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19479-15750/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19479-15750/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "default-k8s-diff-port-664000" primary control-plane node in "default-k8s-diff-port-664000" cluster
	* Restarting existing qemu2 VM for "default-k8s-diff-port-664000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "default-k8s-diff-port-664000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0819 04:42:00.856037   19986 out.go:345] Setting OutFile to fd 1 ...
	I0819 04:42:00.856159   19986 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 04:42:00.856162   19986 out.go:358] Setting ErrFile to fd 2...
	I0819 04:42:00.856164   19986 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 04:42:00.856302   19986 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19479-15750/.minikube/bin
	I0819 04:42:00.857361   19986 out.go:352] Setting JSON to false
	I0819 04:42:00.873333   19986 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":9688,"bootTime":1724058032,"procs":463,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0819 04:42:00.873406   19986 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0819 04:42:00.877947   19986 out.go:177] * [default-k8s-diff-port-664000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0819 04:42:00.887032   19986 out.go:177]   - MINIKUBE_LOCATION=19479
	I0819 04:42:00.887066   19986 notify.go:220] Checking for updates...
	I0819 04:42:00.894985   19986 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19479-15750/kubeconfig
	I0819 04:42:00.898044   19986 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0819 04:42:00.900948   19986 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0819 04:42:00.903997   19986 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19479-15750/.minikube
	I0819 04:42:00.907012   19986 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0819 04:42:00.910232   19986 config.go:182] Loaded profile config "default-k8s-diff-port-664000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0819 04:42:00.910497   19986 driver.go:392] Setting default libvirt URI to qemu:///system
	I0819 04:42:00.914924   19986 out.go:177] * Using the qemu2 driver based on existing profile
	I0819 04:42:00.922061   19986 start.go:297] selected driver: qemu2
	I0819 04:42:00.922073   19986 start.go:901] validating driver "qemu2" against &{Name:default-k8s-diff-port-664000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:default-k8s-diff-port-664000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 04:42:00.922143   19986 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0819 04:42:00.924514   19986 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0819 04:42:00.924545   19986 cni.go:84] Creating CNI manager for ""
	I0819 04:42:00.924553   19986 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0819 04:42:00.924572   19986 start.go:340] cluster config:
	{Name:default-k8s-diff-port-664000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:default-k8s-diff-port-664000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 04:42:00.928155   19986 iso.go:125] acquiring lock: {Name:mk82d926f4bb778c7e93ba3bc4244459b219e238 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 04:42:00.935867   19986 out.go:177] * Starting "default-k8s-diff-port-664000" primary control-plane node in "default-k8s-diff-port-664000" cluster
	I0819 04:42:00.940031   19986 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0819 04:42:00.940046   19986 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19479-15750/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0819 04:42:00.940058   19986 cache.go:56] Caching tarball of preloaded images
	I0819 04:42:00.940120   19986 preload.go:172] Found /Users/jenkins/minikube-integration/19479-15750/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0819 04:42:00.940128   19986 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0819 04:42:00.940202   19986 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19479-15750/.minikube/profiles/default-k8s-diff-port-664000/config.json ...
	I0819 04:42:00.940572   19986 start.go:360] acquireMachinesLock for default-k8s-diff-port-664000: {Name:mkd71805c0324a991576936a19be749d2702d472 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 04:42:00.940598   19986 start.go:364] duration metric: took 21.208µs to acquireMachinesLock for "default-k8s-diff-port-664000"
	I0819 04:42:00.940608   19986 start.go:96] Skipping create...Using existing machine configuration
	I0819 04:42:00.940613   19986 fix.go:54] fixHost starting: 
	I0819 04:42:00.940737   19986 fix.go:112] recreateIfNeeded on default-k8s-diff-port-664000: state=Stopped err=<nil>
	W0819 04:42:00.940744   19986 fix.go:138] unexpected machine state, will restart: <nil>
	I0819 04:42:00.944967   19986 out.go:177] * Restarting existing qemu2 VM for "default-k8s-diff-port-664000" ...
	I0819 04:42:00.952965   19986 qemu.go:418] Using hvf for hardware acceleration
	I0819 04:42:00.952997   19986 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/default-k8s-diff-port-664000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19479-15750/.minikube/machines/default-k8s-diff-port-664000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/default-k8s-diff-port-664000/qemu.pid -device virtio-net-pci,netdev=net0,mac=22:cb:c8:53:a5:cc -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/default-k8s-diff-port-664000/disk.qcow2
	I0819 04:42:00.955039   19986 main.go:141] libmachine: STDOUT: 
	I0819 04:42:00.955061   19986 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0819 04:42:00.955090   19986 fix.go:56] duration metric: took 14.47775ms for fixHost
	I0819 04:42:00.955094   19986 start.go:83] releasing machines lock for "default-k8s-diff-port-664000", held for 14.492125ms
	W0819 04:42:00.955101   19986 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0819 04:42:00.955131   19986 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0819 04:42:00.955136   19986 start.go:729] Will try again in 5 seconds ...
	I0819 04:42:05.957191   19986 start.go:360] acquireMachinesLock for default-k8s-diff-port-664000: {Name:mkd71805c0324a991576936a19be749d2702d472 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 04:42:05.957662   19986 start.go:364] duration metric: took 325.625µs to acquireMachinesLock for "default-k8s-diff-port-664000"
	I0819 04:42:05.957782   19986 start.go:96] Skipping create...Using existing machine configuration
	I0819 04:42:05.957805   19986 fix.go:54] fixHost starting: 
	I0819 04:42:05.958596   19986 fix.go:112] recreateIfNeeded on default-k8s-diff-port-664000: state=Stopped err=<nil>
	W0819 04:42:05.958623   19986 fix.go:138] unexpected machine state, will restart: <nil>
	I0819 04:42:05.964110   19986 out.go:177] * Restarting existing qemu2 VM for "default-k8s-diff-port-664000" ...
	I0819 04:42:05.975132   19986 qemu.go:418] Using hvf for hardware acceleration
	I0819 04:42:05.975318   19986 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/default-k8s-diff-port-664000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19479-15750/.minikube/machines/default-k8s-diff-port-664000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/default-k8s-diff-port-664000/qemu.pid -device virtio-net-pci,netdev=net0,mac=22:cb:c8:53:a5:cc -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/default-k8s-diff-port-664000/disk.qcow2
	I0819 04:42:05.984550   19986 main.go:141] libmachine: STDOUT: 
	I0819 04:42:05.984611   19986 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0819 04:42:05.984725   19986 fix.go:56] duration metric: took 26.925375ms for fixHost
	I0819 04:42:05.984746   19986 start.go:83] releasing machines lock for "default-k8s-diff-port-664000", held for 27.056584ms
	W0819 04:42:05.984957   19986 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-664000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-664000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0819 04:42:05.992993   19986 out.go:201] 
	W0819 04:42:05.997009   19986 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0819 04:42:05.997028   19986 out.go:270] * 
	* 
	W0819 04:42:05.999487   19986 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0819 04:42:06.006967   19986 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p default-k8s-diff-port-664000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.31.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-664000 -n default-k8s-diff-port-664000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-664000 -n default-k8s-diff-port-664000: exit status 7 (62.623042ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-664000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (5.26s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (0.03s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "embed-certs-718000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-718000 -n embed-certs-718000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-718000 -n embed-certs-718000: exit status 7 (32.035542ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-718000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (0.03s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (0.06s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "embed-certs-718000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-718000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context embed-certs-718000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (26.173458ms)

** stderr ** 
	error: context "embed-certs-718000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context embed-certs-718000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-718000 -n embed-certs-718000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-718000 -n embed-certs-718000: exit status 7 (29.309ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-718000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (0.06s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.07s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p embed-certs-718000 image list --format=json
start_stop_delete_test.go:304: v1.31.0 images missing (-want +got):
[]string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.1",
- 	"registry.k8s.io/etcd:3.5.15-0",
- 	"registry.k8s.io/kube-apiserver:v1.31.0",
- 	"registry.k8s.io/kube-controller-manager:v1.31.0",
- 	"registry.k8s.io/kube-proxy:v1.31.0",
- 	"registry.k8s.io/kube-scheduler:v1.31.0",
- 	"registry.k8s.io/pause:3.10",
}
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-718000 -n embed-certs-718000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-718000 -n embed-certs-718000: exit status 7 (29.496833ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-718000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.07s)
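
The "(-want +got)" listing in this failure is go-cmp diff output: every expected image carries a leading "-" because "image list" on a VM that never booted returns nothing to diff against. A small sketch of how such a diff is produced (assuming the github.com/google/go-cmp module; this is not the test's actual code):

package main

import (
	"fmt"

	"github.com/google/go-cmp/cmp"
)

func main() {
	// Two of the images the test expects for v1.31.0, for illustration.
	want := []string{
		"registry.k8s.io/kube-apiserver:v1.31.0",
		"registry.k8s.io/pause:3.10",
	}
	got := []string{} // image list from a host that never started
	if diff := cmp.Diff(want, got); diff != "" {
		fmt.Printf("v1.31.0 images missing (-want +got):\n%s", diff)
	}
}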

TestStartStop/group/embed-certs/serial/Pause (0.1s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p embed-certs-718000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p embed-certs-718000 --alsologtostderr -v=1: exit status 83 (41.03575ms)

-- stdout --
	* The control-plane node embed-certs-718000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p embed-certs-718000"

-- /stdout --
** stderr ** 
	I0819 04:42:03.008440   20005 out.go:345] Setting OutFile to fd 1 ...
	I0819 04:42:03.008592   20005 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 04:42:03.008595   20005 out.go:358] Setting ErrFile to fd 2...
	I0819 04:42:03.008598   20005 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 04:42:03.008732   20005 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19479-15750/.minikube/bin
	I0819 04:42:03.008967   20005 out.go:352] Setting JSON to false
	I0819 04:42:03.008976   20005 mustload.go:65] Loading cluster: embed-certs-718000
	I0819 04:42:03.009182   20005 config.go:182] Loaded profile config "embed-certs-718000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0819 04:42:03.013596   20005 out.go:177] * The control-plane node embed-certs-718000 host is not running: state=Stopped
	I0819 04:42:03.017615   20005 out.go:177]   To start a cluster, run: "minikube start -p embed-certs-718000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p embed-certs-718000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-718000 -n embed-certs-718000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-718000 -n embed-certs-718000: exit status 7 (29.612ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-718000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-718000 -n embed-certs-718000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-718000 -n embed-certs-718000: exit status 7 (29.332417ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-718000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/Pause (0.10s)

TestStartStop/group/newest-cni/serial/FirstStart (9.85s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p newest-cni-501000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p newest-cni-501000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.0: exit status 80 (9.783547125s)

-- stdout --
	* [newest-cni-501000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19479
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19479-15750/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19479-15750/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "newest-cni-501000" primary control-plane node in "newest-cni-501000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "newest-cni-501000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0819 04:42:03.325510   20022 out.go:345] Setting OutFile to fd 1 ...
	I0819 04:42:03.325631   20022 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 04:42:03.325634   20022 out.go:358] Setting ErrFile to fd 2...
	I0819 04:42:03.325637   20022 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 04:42:03.325794   20022 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19479-15750/.minikube/bin
	I0819 04:42:03.326864   20022 out.go:352] Setting JSON to false
	I0819 04:42:03.343007   20022 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":9691,"bootTime":1724058032,"procs":463,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0819 04:42:03.343072   20022 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0819 04:42:03.347624   20022 out.go:177] * [newest-cni-501000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0819 04:42:03.354678   20022 out.go:177]   - MINIKUBE_LOCATION=19479
	I0819 04:42:03.354710   20022 notify.go:220] Checking for updates...
	I0819 04:42:03.358530   20022 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19479-15750/kubeconfig
	I0819 04:42:03.361609   20022 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0819 04:42:03.364584   20022 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0819 04:42:03.367552   20022 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19479-15750/.minikube
	I0819 04:42:03.370583   20022 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0819 04:42:03.373978   20022 config.go:182] Loaded profile config "default-k8s-diff-port-664000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0819 04:42:03.374039   20022 config.go:182] Loaded profile config "multinode-746000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0819 04:42:03.374091   20022 driver.go:392] Setting default libvirt URI to qemu:///system
	I0819 04:42:03.378565   20022 out.go:177] * Using the qemu2 driver based on user configuration
	I0819 04:42:03.385574   20022 start.go:297] selected driver: qemu2
	I0819 04:42:03.385579   20022 start.go:901] validating driver "qemu2" against <nil>
	I0819 04:42:03.385585   20022 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0819 04:42:03.387882   20022 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	W0819 04:42:03.387903   20022 out.go:270] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I0819 04:42:03.392564   20022 out.go:177] * Automatically selected the socket_vmnet network
	I0819 04:42:03.399729   20022 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0819 04:42:03.399769   20022 cni.go:84] Creating CNI manager for ""
	I0819 04:42:03.399776   20022 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0819 04:42:03.399784   20022 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0819 04:42:03.399818   20022 start.go:340] cluster config:
	{Name:newest-cni-501000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:newest-cni-501000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 04:42:03.403637   20022 iso.go:125] acquiring lock: {Name:mk82d926f4bb778c7e93ba3bc4244459b219e238 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 04:42:03.410541   20022 out.go:177] * Starting "newest-cni-501000" primary control-plane node in "newest-cni-501000" cluster
	I0819 04:42:03.413534   20022 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0819 04:42:03.413550   20022 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19479-15750/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0819 04:42:03.413561   20022 cache.go:56] Caching tarball of preloaded images
	I0819 04:42:03.413623   20022 preload.go:172] Found /Users/jenkins/minikube-integration/19479-15750/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0819 04:42:03.413629   20022 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0819 04:42:03.413716   20022 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19479-15750/.minikube/profiles/newest-cni-501000/config.json ...
	I0819 04:42:03.413731   20022 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19479-15750/.minikube/profiles/newest-cni-501000/config.json: {Name:mkc5fb5eb2bfb0ba1cd244f86e6d2fafdd0291df Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 04:42:03.414081   20022 start.go:360] acquireMachinesLock for newest-cni-501000: {Name:mkd71805c0324a991576936a19be749d2702d472 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 04:42:03.414114   20022 start.go:364] duration metric: took 27.666µs to acquireMachinesLock for "newest-cni-501000"
	I0819 04:42:03.414128   20022 start.go:93] Provisioning new machine with config: &{Name:newest-cni-501000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:newest-cni-501000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0819 04:42:03.414161   20022 start.go:125] createHost starting for "" (driver="qemu2")
	I0819 04:42:03.419584   20022 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0819 04:42:03.436499   20022 start.go:159] libmachine.API.Create for "newest-cni-501000" (driver="qemu2")
	I0819 04:42:03.436524   20022 client.go:168] LocalClient.Create starting
	I0819 04:42:03.436589   20022 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19479-15750/.minikube/certs/ca.pem
	I0819 04:42:03.436618   20022 main.go:141] libmachine: Decoding PEM data...
	I0819 04:42:03.436628   20022 main.go:141] libmachine: Parsing certificate...
	I0819 04:42:03.436664   20022 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19479-15750/.minikube/certs/cert.pem
	I0819 04:42:03.436687   20022 main.go:141] libmachine: Decoding PEM data...
	I0819 04:42:03.436692   20022 main.go:141] libmachine: Parsing certificate...
	I0819 04:42:03.437114   20022 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19479-15750/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19479-15750/.minikube/cache/iso/arm64/minikube-v1.33.1-1723740674-19452-arm64.iso...
	I0819 04:42:03.612145   20022 main.go:141] libmachine: Creating SSH key...
	I0819 04:42:03.648033   20022 main.go:141] libmachine: Creating Disk image...
	I0819 04:42:03.648039   20022 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0819 04:42:03.648263   20022 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/newest-cni-501000/disk.qcow2.raw /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/newest-cni-501000/disk.qcow2
	I0819 04:42:03.657602   20022 main.go:141] libmachine: STDOUT: 
	I0819 04:42:03.657619   20022 main.go:141] libmachine: STDERR: 
	I0819 04:42:03.657662   20022 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/newest-cni-501000/disk.qcow2 +20000M
	I0819 04:42:03.665491   20022 main.go:141] libmachine: STDOUT: Image resized.
	
	I0819 04:42:03.665506   20022 main.go:141] libmachine: STDERR: 
	I0819 04:42:03.665517   20022 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/newest-cni-501000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/newest-cni-501000/disk.qcow2
	I0819 04:42:03.665523   20022 main.go:141] libmachine: Starting QEMU VM...
	I0819 04:42:03.665535   20022 qemu.go:418] Using hvf for hardware acceleration
	I0819 04:42:03.665562   20022 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/newest-cni-501000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19479-15750/.minikube/machines/newest-cni-501000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/newest-cni-501000/qemu.pid -device virtio-net-pci,netdev=net0,mac=6e:c0:62:4c:59:29 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/newest-cni-501000/disk.qcow2
	I0819 04:42:03.667163   20022 main.go:141] libmachine: STDOUT: 
	I0819 04:42:03.667179   20022 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0819 04:42:03.667196   20022 client.go:171] duration metric: took 230.671375ms to LocalClient.Create
	I0819 04:42:05.669330   20022 start.go:128] duration metric: took 2.25518425s to createHost
	I0819 04:42:05.669402   20022 start.go:83] releasing machines lock for "newest-cni-501000", held for 2.255316667s
	W0819 04:42:05.669497   20022 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0819 04:42:05.676759   20022 out.go:177] * Deleting "newest-cni-501000" in qemu2 ...
	W0819 04:42:05.713063   20022 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0819 04:42:05.713081   20022 start.go:729] Will try again in 5 seconds ...
	I0819 04:42:10.715216   20022 start.go:360] acquireMachinesLock for newest-cni-501000: {Name:mkd71805c0324a991576936a19be749d2702d472 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 04:42:10.715689   20022 start.go:364] duration metric: took 397.958µs to acquireMachinesLock for "newest-cni-501000"
	I0819 04:42:10.715824   20022 start.go:93] Provisioning new machine with config: &{Name:newest-cni-501000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:newest-cni-501000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0819 04:42:10.716123   20022 start.go:125] createHost starting for "" (driver="qemu2")
	I0819 04:42:10.720946   20022 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0819 04:42:10.773323   20022 start.go:159] libmachine.API.Create for "newest-cni-501000" (driver="qemu2")
	I0819 04:42:10.773407   20022 client.go:168] LocalClient.Create starting
	I0819 04:42:10.773519   20022 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19479-15750/.minikube/certs/ca.pem
	I0819 04:42:10.773593   20022 main.go:141] libmachine: Decoding PEM data...
	I0819 04:42:10.773613   20022 main.go:141] libmachine: Parsing certificate...
	I0819 04:42:10.773671   20022 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19479-15750/.minikube/certs/cert.pem
	I0819 04:42:10.773714   20022 main.go:141] libmachine: Decoding PEM data...
	I0819 04:42:10.773728   20022 main.go:141] libmachine: Parsing certificate...
	I0819 04:42:10.774519   20022 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19479-15750/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19479-15750/.minikube/cache/iso/arm64/minikube-v1.33.1-1723740674-19452-arm64.iso...
	I0819 04:42:10.933426   20022 main.go:141] libmachine: Creating SSH key...
	I0819 04:42:11.015606   20022 main.go:141] libmachine: Creating Disk image...
	I0819 04:42:11.015612   20022 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0819 04:42:11.015831   20022 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/newest-cni-501000/disk.qcow2.raw /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/newest-cni-501000/disk.qcow2
	I0819 04:42:11.024988   20022 main.go:141] libmachine: STDOUT: 
	I0819 04:42:11.025010   20022 main.go:141] libmachine: STDERR: 
	I0819 04:42:11.025051   20022 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/newest-cni-501000/disk.qcow2 +20000M
	I0819 04:42:11.032909   20022 main.go:141] libmachine: STDOUT: Image resized.
	
	I0819 04:42:11.032924   20022 main.go:141] libmachine: STDERR: 
	I0819 04:42:11.032933   20022 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/newest-cni-501000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/newest-cni-501000/disk.qcow2
	I0819 04:42:11.032939   20022 main.go:141] libmachine: Starting QEMU VM...
	I0819 04:42:11.032948   20022 qemu.go:418] Using hvf for hardware acceleration
	I0819 04:42:11.032991   20022 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/newest-cni-501000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19479-15750/.minikube/machines/newest-cni-501000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/newest-cni-501000/qemu.pid -device virtio-net-pci,netdev=net0,mac=b2:9e:d9:c0:75:ae -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/newest-cni-501000/disk.qcow2
	I0819 04:42:11.034583   20022 main.go:141] libmachine: STDOUT: 
	I0819 04:42:11.034600   20022 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0819 04:42:11.034613   20022 client.go:171] duration metric: took 261.204417ms to LocalClient.Create
	I0819 04:42:13.036753   20022 start.go:128] duration metric: took 2.320642209s to createHost
	I0819 04:42:13.036808   20022 start.go:83] releasing machines lock for "newest-cni-501000", held for 2.321131834s
	W0819 04:42:13.037165   20022 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p newest-cni-501000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p newest-cni-501000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0819 04:42:13.050736   20022 out.go:201] 
	W0819 04:42:13.053867   20022 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0819 04:42:13.053930   20022 out.go:270] * 
	* 
	W0819 04:42:13.056560   20022 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0819 04:42:13.067763   20022 out.go:201] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p newest-cni-501000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-501000 -n newest-cni-501000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-501000 -n newest-cni-501000: exit status 7 (65.973041ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-501000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/FirstStart (9.85s)
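Every start attempt above dies at the same step: libmachine execs the QEMU command line through /opt/socket_vmnet/bin/socket_vmnet_client, and the client cannot reach the unix socket at /var/run/socket_vmnet, so no VM is ever launched. The connectivity check can be reproduced outside minikube; the following is a sketch that assumes socket_vmnet was installed via Homebrew, as the /opt/homebrew paths in the log suggest:

    # Is anything listening on the socket the client tries to reach?
    ls -l /var/run/socket_vmnet

    # Probe the unix socket directly; "Connection refused" here matches
    # the failure in the log above (macOS nc supports -U for unix sockets).
    nc -U /var/run/socket_vmnet < /dev/null

    # Restart the daemon; socket_vmnet runs as root because the macOS
    # vmnet framework requires elevated privileges.
    sudo brew services restart socket_vmnet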

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (0.03s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "default-k8s-diff-port-664000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-664000 -n default-k8s-diff-port-664000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-664000 -n default-k8s-diff-port-664000: exit status 7 (31.424958ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-664000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (0.03s)
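This failure, and the three default-k8s-diff-port failures that follow, are a cascade from the profile's earlier start failure: the default-k8s-diff-port-664000 cluster was never provisioned, so no kubeconfig context was ever written for it. The missing context is easy to confirm by hand (a sketch; the context name is taken from the error above):

    # List the contexts present in the kubeconfig minikube writes to;
    # default-k8s-diff-port-664000 will not be among them.
    kubectl config get-contexts

    # Selecting it explicitly fails for the same reason the test helper did.
    kubectl config use-context default-k8s-diff-port-664000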

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (0.06s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "default-k8s-diff-port-664000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-664000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-664000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (26.925ms)

** stderr ** 
	error: context "default-k8s-diff-port-664000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context default-k8s-diff-port-664000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-664000 -n default-k8s-diff-port-664000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-664000 -n default-k8s-diff-port-664000: exit status 7 (29.818167ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-664000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (0.06s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.07s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p default-k8s-diff-port-664000 image list --format=json
start_stop_delete_test.go:304: v1.31.0 images missing (-want +got):
[]string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.1",
- 	"registry.k8s.io/etcd:3.5.15-0",
- 	"registry.k8s.io/kube-apiserver:v1.31.0",
- 	"registry.k8s.io/kube-controller-manager:v1.31.0",
- 	"registry.k8s.io/kube-proxy:v1.31.0",
- 	"registry.k8s.io/kube-scheduler:v1.31.0",
- 	"registry.k8s.io/pause:3.10",
}
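The block above reads as a -want/+got diff: every line prefixed with "-" is an image the test expected the runtime to report, and the "+" side is empty because the stopped VM reported nothing at all. Re-running the listing against the stopped profile shows the empty result directly (profile name taken from the log):

    # With the host stopped the runtime cannot be queried, so the reported
    # image list is empty and every expected image counts as missing.
    out/minikube-darwin-arm64 -p default-k8s-diff-port-664000 image list --format=json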
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-664000 -n default-k8s-diff-port-664000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-664000 -n default-k8s-diff-port-664000: exit status 7 (28.905667ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-664000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.07s)

TestStartStop/group/default-k8s-diff-port/serial/Pause (0.1s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p default-k8s-diff-port-664000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p default-k8s-diff-port-664000 --alsologtostderr -v=1: exit status 83 (41.503166ms)

-- stdout --
	* The control-plane node default-k8s-diff-port-664000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p default-k8s-diff-port-664000"

-- /stdout --
** stderr ** 
	I0819 04:42:06.268735   20044 out.go:345] Setting OutFile to fd 1 ...
	I0819 04:42:06.268907   20044 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 04:42:06.268910   20044 out.go:358] Setting ErrFile to fd 2...
	I0819 04:42:06.268913   20044 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 04:42:06.269057   20044 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19479-15750/.minikube/bin
	I0819 04:42:06.269280   20044 out.go:352] Setting JSON to false
	I0819 04:42:06.269289   20044 mustload.go:65] Loading cluster: default-k8s-diff-port-664000
	I0819 04:42:06.269501   20044 config.go:182] Loaded profile config "default-k8s-diff-port-664000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0819 04:42:06.274060   20044 out.go:177] * The control-plane node default-k8s-diff-port-664000 host is not running: state=Stopped
	I0819 04:42:06.278066   20044 out.go:177]   To start a cluster, run: "minikube start -p default-k8s-diff-port-664000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p default-k8s-diff-port-664000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-664000 -n default-k8s-diff-port-664000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-664000 -n default-k8s-diff-port-664000: exit status 7 (29.610125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-664000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-664000 -n default-k8s-diff-port-664000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-664000 -n default-k8s-diff-port-664000: exit status 7 (29.098459ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-664000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Pause (0.10s)
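Both exit codes in this block are advisory rather than crashes: pause returns exit status 83 alongside the "host is not running" hint, and the post-mortem status probe returns exit status 7, which the harness itself flags as "may be ok" for a stopped host. A sketch of inspecting the codes by hand:

    # Reproduce the status probe and read its exit code; per the log above,
    # 7 here simply means the profile exists but its host is stopped.
    out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-664000
    echo $?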

TestStartStop/group/newest-cni/serial/SecondStart (5.26s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p newest-cni-501000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.0
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p newest-cni-501000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.0: exit status 80 (5.186745s)

-- stdout --
	* [newest-cni-501000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19479
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19479-15750/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19479-15750/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "newest-cni-501000" primary control-plane node in "newest-cni-501000" cluster
	* Restarting existing qemu2 VM for "newest-cni-501000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "newest-cni-501000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0819 04:42:16.799809   20091 out.go:345] Setting OutFile to fd 1 ...
	I0819 04:42:16.799940   20091 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 04:42:16.799943   20091 out.go:358] Setting ErrFile to fd 2...
	I0819 04:42:16.799945   20091 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 04:42:16.800079   20091 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19479-15750/.minikube/bin
	I0819 04:42:16.801042   20091 out.go:352] Setting JSON to false
	I0819 04:42:16.817151   20091 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":9704,"bootTime":1724058032,"procs":462,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0819 04:42:16.817222   20091 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0819 04:42:16.822705   20091 out.go:177] * [newest-cni-501000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0819 04:42:16.829728   20091 out.go:177]   - MINIKUBE_LOCATION=19479
	I0819 04:42:16.829795   20091 notify.go:220] Checking for updates...
	I0819 04:42:16.837692   20091 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19479-15750/kubeconfig
	I0819 04:42:16.840635   20091 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0819 04:42:16.843665   20091 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0819 04:42:16.846616   20091 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19479-15750/.minikube
	I0819 04:42:16.849700   20091 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0819 04:42:16.853004   20091 config.go:182] Loaded profile config "newest-cni-501000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0819 04:42:16.853278   20091 driver.go:392] Setting default libvirt URI to qemu:///system
	I0819 04:42:16.856657   20091 out.go:177] * Using the qemu2 driver based on existing profile
	I0819 04:42:16.863702   20091 start.go:297] selected driver: qemu2
	I0819 04:42:16.863709   20091 start.go:901] validating driver "qemu2" against &{Name:newest-cni-501000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:newest-cni-501000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 04:42:16.863785   20091 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0819 04:42:16.866082   20091 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0819 04:42:16.866124   20091 cni.go:84] Creating CNI manager for ""
	I0819 04:42:16.866131   20091 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0819 04:42:16.866156   20091 start.go:340] cluster config:
	{Name:newest-cni-501000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:newest-cni-501000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 04:42:16.869583   20091 iso.go:125] acquiring lock: {Name:mk82d926f4bb778c7e93ba3bc4244459b219e238 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 04:42:16.878676   20091 out.go:177] * Starting "newest-cni-501000" primary control-plane node in "newest-cni-501000" cluster
	I0819 04:42:16.883590   20091 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0819 04:42:16.883607   20091 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19479-15750/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0819 04:42:16.883620   20091 cache.go:56] Caching tarball of preloaded images
	I0819 04:42:16.883694   20091 preload.go:172] Found /Users/jenkins/minikube-integration/19479-15750/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0819 04:42:16.883700   20091 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0819 04:42:16.883778   20091 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19479-15750/.minikube/profiles/newest-cni-501000/config.json ...
	I0819 04:42:16.884264   20091 start.go:360] acquireMachinesLock for newest-cni-501000: {Name:mkd71805c0324a991576936a19be749d2702d472 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 04:42:16.884294   20091 start.go:364] duration metric: took 23.208µs to acquireMachinesLock for "newest-cni-501000"
	I0819 04:42:16.884308   20091 start.go:96] Skipping create...Using existing machine configuration
	I0819 04:42:16.884314   20091 fix.go:54] fixHost starting: 
	I0819 04:42:16.884433   20091 fix.go:112] recreateIfNeeded on newest-cni-501000: state=Stopped err=<nil>
	W0819 04:42:16.884443   20091 fix.go:138] unexpected machine state, will restart: <nil>
	I0819 04:42:16.888710   20091 out.go:177] * Restarting existing qemu2 VM for "newest-cni-501000" ...
	I0819 04:42:16.895592   20091 qemu.go:418] Using hvf for hardware acceleration
	I0819 04:42:16.895631   20091 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/newest-cni-501000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19479-15750/.minikube/machines/newest-cni-501000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/newest-cni-501000/qemu.pid -device virtio-net-pci,netdev=net0,mac=b2:9e:d9:c0:75:ae -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/newest-cni-501000/disk.qcow2
	I0819 04:42:16.897574   20091 main.go:141] libmachine: STDOUT: 
	I0819 04:42:16.897594   20091 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0819 04:42:16.897623   20091 fix.go:56] duration metric: took 13.310916ms for fixHost
	I0819 04:42:16.897627   20091 start.go:83] releasing machines lock for "newest-cni-501000", held for 13.328875ms
	W0819 04:42:16.897633   20091 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0819 04:42:16.897681   20091 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0819 04:42:16.897686   20091 start.go:729] Will try again in 5 seconds ...
	I0819 04:42:21.899850   20091 start.go:360] acquireMachinesLock for newest-cni-501000: {Name:mkd71805c0324a991576936a19be749d2702d472 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 04:42:21.900265   20091 start.go:364] duration metric: took 317.625µs to acquireMachinesLock for "newest-cni-501000"
	I0819 04:42:21.900389   20091 start.go:96] Skipping create...Using existing machine configuration
	I0819 04:42:21.900409   20091 fix.go:54] fixHost starting: 
	I0819 04:42:21.901078   20091 fix.go:112] recreateIfNeeded on newest-cni-501000: state=Stopped err=<nil>
	W0819 04:42:21.901101   20091 fix.go:138] unexpected machine state, will restart: <nil>
	I0819 04:42:21.909534   20091 out.go:177] * Restarting existing qemu2 VM for "newest-cni-501000" ...
	I0819 04:42:21.913566   20091 qemu.go:418] Using hvf for hardware acceleration
	I0819 04:42:21.913783   20091 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/newest-cni-501000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19479-15750/.minikube/machines/newest-cni-501000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/newest-cni-501000/qemu.pid -device virtio-net-pci,netdev=net0,mac=b2:9e:d9:c0:75:ae -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19479-15750/.minikube/machines/newest-cni-501000/disk.qcow2
	I0819 04:42:21.922568   20091 main.go:141] libmachine: STDOUT: 
	I0819 04:42:21.922638   20091 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0819 04:42:21.922697   20091 fix.go:56] duration metric: took 22.293958ms for fixHost
	I0819 04:42:21.922708   20091 start.go:83] releasing machines lock for "newest-cni-501000", held for 22.425792ms
	W0819 04:42:21.922941   20091 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p newest-cni-501000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p newest-cni-501000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0819 04:42:21.930469   20091 out.go:201] 
	W0819 04:42:21.933656   20091 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0819 04:42:21.933680   20091 out.go:270] * 
	* 
	W0819 04:42:21.936020   20091 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0819 04:42:21.943525   20091 out.go:201] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p newest-cni-501000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-501000 -n newest-cni-501000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-501000 -n newest-cni-501000: exit status 7 (67.698209ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-501000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/SecondStart (5.26s)
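The second start takes the existing-profile path (recreateIfNeeded on the stopped machine) rather than creating a new one, but it execs the identical socket_vmnet_client command line and hits the same refused connection, so the failure lies with the daemon, not the profile. The error text names its own recovery; a sketch combining it with the daemon restart shown earlier (profile name from the log, Homebrew-managed socket_vmnet assumed):

    # Drop the half-provisioned profile, bring the daemon back, retry.
    minikube delete -p newest-cni-501000
    sudo brew services restart socket_vmnet
    minikube start -p newest-cni-501000 --driver=qemu2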

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.07s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p newest-cni-501000 image list --format=json
start_stop_delete_test.go:304: v1.31.0 images missing (-want +got):
[]string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.1",
- 	"registry.k8s.io/etcd:3.5.15-0",
- 	"registry.k8s.io/kube-apiserver:v1.31.0",
- 	"registry.k8s.io/kube-controller-manager:v1.31.0",
- 	"registry.k8s.io/kube-proxy:v1.31.0",
- 	"registry.k8s.io/kube-scheduler:v1.31.0",
- 	"registry.k8s.io/pause:3.10",
}
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-501000 -n newest-cni-501000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-501000 -n newest-cni-501000: exit status 7 (29.684583ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-501000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.07s)

TestStartStop/group/newest-cni/serial/Pause (0.1s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p newest-cni-501000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p newest-cni-501000 --alsologtostderr -v=1: exit status 83 (40.701125ms)

-- stdout --
	* The control-plane node newest-cni-501000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p newest-cni-501000"

-- /stdout --
** stderr ** 
	I0819 04:42:22.129598   20105 out.go:345] Setting OutFile to fd 1 ...
	I0819 04:42:22.129739   20105 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 04:42:22.129742   20105 out.go:358] Setting ErrFile to fd 2...
	I0819 04:42:22.129744   20105 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 04:42:22.129875   20105 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19479-15750/.minikube/bin
	I0819 04:42:22.130100   20105 out.go:352] Setting JSON to false
	I0819 04:42:22.130109   20105 mustload.go:65] Loading cluster: newest-cni-501000
	I0819 04:42:22.130304   20105 config.go:182] Loaded profile config "newest-cni-501000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0819 04:42:22.134293   20105 out.go:177] * The control-plane node newest-cni-501000 host is not running: state=Stopped
	I0819 04:42:22.138303   20105 out.go:177]   To start a cluster, run: "minikube start -p newest-cni-501000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p newest-cni-501000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-501000 -n newest-cni-501000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-501000 -n newest-cni-501000: exit status 7 (29.956958ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-501000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-501000 -n newest-cni-501000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-501000 -n newest-cni-501000: exit status 7 (29.744666ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-501000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/Pause (0.10s)


Test pass (80/258)

Order passed test Duration
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.09
9 TestDownloadOnly/v1.20.0/DeleteAll 0.11
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.11
12 TestDownloadOnly/v1.31.0/json-events 8.21
13 TestDownloadOnly/v1.31.0/preload-exists 0
16 TestDownloadOnly/v1.31.0/kubectl 0
17 TestDownloadOnly/v1.31.0/LogsDuration 0.08
18 TestDownloadOnly/v1.31.0/DeleteAll 0.11
19 TestDownloadOnly/v1.31.0/DeleteAlwaysSucceeds 0.1
21 TestBinaryMirror 0.44
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.06
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.06
35 TestHyperKitDriverInstallOrUpdate 10.24
39 TestErrorSpam/start 0.39
40 TestErrorSpam/status 0.09
41 TestErrorSpam/pause 0.12
42 TestErrorSpam/unpause 0.12
43 TestErrorSpam/stop 9.4
46 TestFunctional/serial/CopySyncFile 0
48 TestFunctional/serial/AuditLog 0
54 TestFunctional/serial/CacheCmd/cache/add_remote 1.72
55 TestFunctional/serial/CacheCmd/cache/add_local 1.03
56 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.04
57 TestFunctional/serial/CacheCmd/cache/list 0.03
60 TestFunctional/serial/CacheCmd/cache/delete 0.07
69 TestFunctional/parallel/ConfigCmd 0.22
71 TestFunctional/parallel/DryRun 0.28
72 TestFunctional/parallel/InternationalLanguage 0.11
78 TestFunctional/parallel/AddonsCmd 0.09
93 TestFunctional/parallel/License 0.24
94 TestFunctional/parallel/Version/short 0.04
101 TestFunctional/parallel/ImageCommands/Setup 1.68
114 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
122 TestFunctional/parallel/ImageCommands/ImageRemove 0.07
124 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.08
125 TestFunctional/parallel/ProfileCmd/profile_not_create 0.09
126 TestFunctional/parallel/ProfileCmd/profile_list 0.08
127 TestFunctional/parallel/ProfileCmd/profile_json_output 0.08
132 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 10.04
134 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.16
135 TestFunctional/delete_echo-server_images 0.07
136 TestFunctional/delete_my-image_image 0.02
137 TestFunctional/delete_minikube_cached_images 0.02
166 TestJSONOutput/start/Audit 0
168 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
169 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
172 TestJSONOutput/pause/Audit 0
174 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
175 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
178 TestJSONOutput/unpause/Audit 0
180 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
181 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
183 TestJSONOutput/stop/Command 3.21
184 TestJSONOutput/stop/Audit 0
186 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
187 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
188 TestErrorJSONOutput 0.2
193 TestMainNoArgs 0.03
240 TestStoppedBinaryUpgrade/Setup 1.3
252 TestNoKubernetes/serial/StartNoK8sWithVersion 0.11
256 TestNoKubernetes/serial/VerifyK8sNotRunning 0.05
257 TestNoKubernetes/serial/ProfileList 31.38
258 TestNoKubernetes/serial/Stop 2.76
260 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.04
272 TestStoppedBinaryUpgrade/MinikubeLogs 0.64
275 TestStartStop/group/old-k8s-version/serial/Stop 3.76
276 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.12
286 TestStartStop/group/no-preload/serial/Stop 3.54
287 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.12
299 TestStartStop/group/embed-certs/serial/Stop 2.62
302 TestStartStop/group/default-k8s-diff-port/serial/Stop 3.5
303 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.12
305 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.12
317 TestStartStop/group/newest-cni/serial/DeployApp 0
318 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.06
319 TestStartStop/group/newest-cni/serial/Stop 3.44
320 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.12
322 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
323 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
TestDownloadOnly/v1.20.0/preload-exists (0s)
=== RUN   TestDownloadOnly/v1.20.0/preload-exists
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

TestDownloadOnly/v1.20.0/LogsDuration (0.09s)
=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 logs -p download-only-648000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 logs -p download-only-648000: exit status 85 (92.084708ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-648000 | jenkins | v1.33.1 | 19 Aug 24 04:15 PDT |          |
	|         | -p download-only-648000        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=qemu2                 |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/19 04:15:51
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.22.5 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0819 04:15:51.804289   16242 out.go:345] Setting OutFile to fd 1 ...
	I0819 04:15:51.804436   16242 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 04:15:51.804439   16242 out.go:358] Setting ErrFile to fd 2...
	I0819 04:15:51.804441   16242 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 04:15:51.804586   16242 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19479-15750/.minikube/bin
	W0819 04:15:51.804674   16242 root.go:314] Error reading config file at /Users/jenkins/minikube-integration/19479-15750/.minikube/config/config.json: open /Users/jenkins/minikube-integration/19479-15750/.minikube/config/config.json: no such file or directory
	I0819 04:15:51.805960   16242 out.go:352] Setting JSON to true
	I0819 04:15:51.822245   16242 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":8119,"bootTime":1724058032,"procs":469,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0819 04:15:51.822315   16242 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0819 04:15:51.828641   16242 out.go:97] [download-only-648000] minikube v1.33.1 on Darwin 14.5 (arm64)
	W0819 04:15:51.828807   16242 preload.go:293] Failed to list preload files: open /Users/jenkins/minikube-integration/19479-15750/.minikube/cache/preloaded-tarball: no such file or directory
	I0819 04:15:51.828818   16242 notify.go:220] Checking for updates...
	I0819 04:15:51.832531   16242 out.go:169] MINIKUBE_LOCATION=19479
	I0819 04:15:51.835537   16242 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/19479-15750/kubeconfig
	I0819 04:15:51.838657   16242 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0819 04:15:51.841589   16242 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0819 04:15:51.845543   16242 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/19479-15750/.minikube
	W0819 04:15:51.851549   16242 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0819 04:15:51.851796   16242 driver.go:392] Setting default libvirt URI to qemu:///system
	I0819 04:15:51.854501   16242 out.go:97] Using the qemu2 driver based on user configuration
	I0819 04:15:51.854524   16242 start.go:297] selected driver: qemu2
	I0819 04:15:51.854540   16242 start.go:901] validating driver "qemu2" against <nil>
	I0819 04:15:51.854626   16242 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0819 04:15:51.857540   16242 out.go:169] Automatically selected the socket_vmnet network
	I0819 04:15:51.863715   16242 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0819 04:15:51.863809   16242 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0819 04:15:51.863896   16242 cni.go:84] Creating CNI manager for ""
	I0819 04:15:51.863912   16242 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0819 04:15:51.863958   16242 start.go:340] cluster config:
	{Name:download-only-648000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-648000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 04:15:51.867726   16242 iso.go:125] acquiring lock: {Name:mk82d926f4bb778c7e93ba3bc4244459b219e238 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 04:15:51.870528   16242 out.go:97] Downloading VM boot image ...
	I0819 04:15:51.870564   16242 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso.sha256 -> /Users/jenkins/minikube-integration/19479-15750/.minikube/cache/iso/arm64/minikube-v1.33.1-1723740674-19452-arm64.iso
	I0819 04:15:56.559796   16242 out.go:97] Starting "download-only-648000" primary control-plane node in "download-only-648000" cluster
	I0819 04:15:56.559814   16242 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0819 04:15:56.621551   16242 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0819 04:15:56.621571   16242 cache.go:56] Caching tarball of preloaded images
	I0819 04:15:56.621740   16242 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0819 04:15:56.626847   16242 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0819 04:15:56.626854   16242 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0819 04:15:56.722295   16242 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4?checksum=md5:1a3e8f9b29e6affec63d76d0d3000942 -> /Users/jenkins/minikube-integration/19479-15750/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0819 04:16:02.443757   16242 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0819 04:16:02.444141   16242 preload.go:254] verifying checksum of /Users/jenkins/minikube-integration/19479-15750/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0819 04:16:03.139481   16242 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0819 04:16:03.139661   16242 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19479-15750/.minikube/profiles/download-only-648000/config.json ...
	I0819 04:16:03.139677   16242 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19479-15750/.minikube/profiles/download-only-648000/config.json: {Name:mkee9fb3453e616fe0a206e2298a15c750642a94 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 04:16:03.139903   16242 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0819 04:16:03.140104   16242 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 -> /Users/jenkins/minikube-integration/19479-15750/.minikube/cache/darwin/arm64/v1.20.0/kubectl
	I0819 04:16:03.462041   16242 out.go:193] 
	W0819 04:16:03.467264   16242 out_reason.go:110] Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256: getter: &{Ctx:context.Background Src:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/19479-15750/.minikube/cache/darwin/arm64/v1.20.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x1091b7960 0x1091b7960 0x1091b7960 0x1091b7960 0x1091b7960 0x1091b7960 0x1091b7960] Decompressors:map[bz2:0x14000894ff0 gz:0x14000894ff8 tar:0x14000894fa0 tar.bz2:0x14000894fb0 tar.gz:0x14000894fc0 tar.xz:0x14000894fd0 tar.zst:0x14000894fe0 tbz2:0x14000894fb0 tgz:0x14000894fc0 txz:0x14000894fd0 tzst:0x14000894fe0 xz:0x14000895000 zip:0x14000895010 zst:0x14000895008] Getters:map[file:0x1400070fcc0 http:0x14000620320 https:0x14000620370] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404
	W0819 04:16:03.467285   16242 out_reason.go:110] 
	W0819 04:16:03.474157   16242 out.go:283] ╭───────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                           │
	│    If the above advice does not help, please let us know:                                 │
	│    https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                           │
	│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────╯
	I0819 04:16:03.477148   16242 out.go:193] 
	
	
	* The control-plane node download-only-648000 host does not exist
	  To start a cluster, run: "minikube start -p download-only-648000"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.09s)
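The kubectl cache failure captured in the log above comes from the "?checksum=file:<url>" download pattern: the remote .sha256 file is fetched first and the payload is verified against it, so a 404 on the checksum URL aborts the whole transfer -- apparently because no darwin/arm64 kubectl was published for v1.20.0. A minimal Go sketch of that pattern, assuming the hashicorp/go-getter library whose getter struct appears in the error text (this is not minikube's own code):

package main

import (
	"fmt"

	getter "github.com/hashicorp/go-getter"
)

func main() {
	// Source URL copied from the log; the checksum query string instructs
	// go-getter to download and apply the remote .sha256 file before the
	// payload is kept.
	src := "https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl" +
		"?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256"

	if err := getter.GetFile("/tmp/kubectl.download", src); err != nil {
		// Expect: invalid checksum: Error downloading checksum file:
		// bad response code: 404
		fmt.Println("download failed:", err)
	}
}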

TestDownloadOnly/v1.20.0/DeleteAll (0.11s)
=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-arm64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.11s)

TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.11s)
=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-arm64 delete -p download-only-648000
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.11s)

TestDownloadOnly/v1.31.0/json-events (8.21s)
=== RUN   TestDownloadOnly/v1.31.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -o=json --download-only -p download-only-956000 --force --alsologtostderr --kubernetes-version=v1.31.0 --container-runtime=docker --driver=qemu2 
aaa_download_only_test.go:81: (dbg) Done: out/minikube-darwin-arm64 start -o=json --download-only -p download-only-956000 --force --alsologtostderr --kubernetes-version=v1.31.0 --container-runtime=docker --driver=qemu2 : (8.213612583s)
--- PASS: TestDownloadOnly/v1.31.0/json-events (8.21s)

TestDownloadOnly/v1.31.0/preload-exists (0s)
=== RUN   TestDownloadOnly/v1.31.0/preload-exists
--- PASS: TestDownloadOnly/v1.31.0/preload-exists (0.00s)

TestDownloadOnly/v1.31.0/kubectl (0s)
=== RUN   TestDownloadOnly/v1.31.0/kubectl
--- PASS: TestDownloadOnly/v1.31.0/kubectl (0.00s)

TestDownloadOnly/v1.31.0/LogsDuration (0.08s)
=== RUN   TestDownloadOnly/v1.31.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 logs -p download-only-956000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 logs -p download-only-956000: exit status 85 (81.435916ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-648000 | jenkins | v1.33.1 | 19 Aug 24 04:15 PDT |                     |
	|         | -p download-only-648000        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=qemu2                 |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.33.1 | 19 Aug 24 04:16 PDT | 19 Aug 24 04:16 PDT |
	| delete  | -p download-only-648000        | download-only-648000 | jenkins | v1.33.1 | 19 Aug 24 04:16 PDT | 19 Aug 24 04:16 PDT |
	| start   | -o=json --download-only        | download-only-956000 | jenkins | v1.33.1 | 19 Aug 24 04:16 PDT |                     |
	|         | -p download-only-956000        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=qemu2                 |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/19 04:16:03
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.22.5 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0819 04:16:03.897759   16266 out.go:345] Setting OutFile to fd 1 ...
	I0819 04:16:03.897915   16266 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 04:16:03.897918   16266 out.go:358] Setting ErrFile to fd 2...
	I0819 04:16:03.897920   16266 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 04:16:03.898027   16266 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19479-15750/.minikube/bin
	I0819 04:16:03.899089   16266 out.go:352] Setting JSON to true
	I0819 04:16:03.915138   16266 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":8131,"bootTime":1724058032,"procs":469,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0819 04:16:03.915229   16266 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0819 04:16:03.920102   16266 out.go:97] [download-only-956000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0819 04:16:03.920199   16266 notify.go:220] Checking for updates...
	I0819 04:16:03.923138   16266 out.go:169] MINIKUBE_LOCATION=19479
	I0819 04:16:03.926073   16266 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/19479-15750/kubeconfig
	I0819 04:16:03.929136   16266 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0819 04:16:03.932143   16266 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0819 04:16:03.935115   16266 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/19479-15750/.minikube
	W0819 04:16:03.941115   16266 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0819 04:16:03.941319   16266 driver.go:392] Setting default libvirt URI to qemu:///system
	I0819 04:16:03.942751   16266 out.go:97] Using the qemu2 driver based on user configuration
	I0819 04:16:03.942761   16266 start.go:297] selected driver: qemu2
	I0819 04:16:03.942765   16266 start.go:901] validating driver "qemu2" against <nil>
	I0819 04:16:03.942824   16266 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0819 04:16:03.946107   16266 out.go:169] Automatically selected the socket_vmnet network
	I0819 04:16:03.952147   16266 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0819 04:16:03.952282   16266 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0819 04:16:03.952304   16266 cni.go:84] Creating CNI manager for ""
	I0819 04:16:03.952313   16266 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0819 04:16:03.952336   16266 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0819 04:16:03.952383   16266 start.go:340] cluster config:
	{Name:download-only-956000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:download-only-956000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 04:16:03.955731   16266 iso.go:125] acquiring lock: {Name:mk82d926f4bb778c7e93ba3bc4244459b219e238 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 04:16:03.956947   16266 out.go:97] Starting "download-only-956000" primary control-plane node in "download-only-956000" cluster
	I0819 04:16:03.956952   16266 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0819 04:16:04.019162   16266 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.0/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0819 04:16:04.019174   16266 cache.go:56] Caching tarball of preloaded images
	I0819 04:16:04.019331   16266 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0819 04:16:04.022687   16266 out.go:97] Downloading Kubernetes v1.31.0 preload ...
	I0819 04:16:04.022695   16266 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 ...
	I0819 04:16:04.109547   16266 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.0/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4?checksum=md5:90c22abece392b762c0b4e45be981bb4 -> /Users/jenkins/minikube-integration/19479-15750/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0819 04:16:08.433196   16266 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 ...
	I0819 04:16:08.433363   16266 preload.go:254] verifying checksum of /Users/jenkins/minikube-integration/19479-15750/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 ...
	
	
	* The control-plane node download-only-956000 host does not exist
	  To start a cluster, run: "minikube start -p download-only-956000"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.0/LogsDuration (0.08s)

TestDownloadOnly/v1.31.0/DeleteAll (0.11s)
=== RUN   TestDownloadOnly/v1.31.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-arm64 delete --all
--- PASS: TestDownloadOnly/v1.31.0/DeleteAll (0.11s)

TestDownloadOnly/v1.31.0/DeleteAlwaysSucceeds (0.1s)
=== RUN   TestDownloadOnly/v1.31.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-arm64 delete -p download-only-956000
--- PASS: TestDownloadOnly/v1.31.0/DeleteAlwaysSucceeds (0.10s)

TestBinaryMirror (0.44s)
=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-darwin-arm64 start --download-only -p binary-mirror-577000 --alsologtostderr --binary-mirror http://127.0.0.1:52958 --driver=qemu2 
helpers_test.go:175: Cleaning up "binary-mirror-577000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p binary-mirror-577000
--- PASS: TestBinaryMirror (0.44s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1037: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p addons-939000
addons_test.go:1037: (dbg) Non-zero exit: out/minikube-darwin-arm64 addons enable dashboard -p addons-939000: exit status 85 (60.391292ms)

-- stdout --
	* Profile "addons-939000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-939000"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1048: (dbg) Run:  out/minikube-darwin-arm64 addons disable dashboard -p addons-939000
addons_test.go:1048: (dbg) Non-zero exit: out/minikube-darwin-arm64 addons disable dashboard -p addons-939000: exit status 85 (64.300291ms)

-- stdout --
	* Profile "addons-939000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-939000"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)
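Both PreSetup tests above use the same inverted assertion: run an addons command against a profile that was never created and require the expected non-zero exit (85 here) plus the friendly hint, instead of treating the failure as an error. A stripped-down sketch of that pattern (binary path and profile name are taken from the log; the real harness does considerably more checking):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Deliberately target a profile that does not exist.
	cmd := exec.Command("out/minikube-darwin-arm64",
		"addons", "disable", "dashboard", "-p", "addons-939000")
	err := cmd.Run()
	if exitErr, ok := err.(*exec.ExitError); ok {
		fmt.Println("exit code:", exitErr.ExitCode()) // the test expects 85
	}
}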

TestHyperKitDriverInstallOrUpdate (10.24s)
=== RUN   TestHyperKitDriverInstallOrUpdate
=== PAUSE TestHyperKitDriverInstallOrUpdate

=== CONT  TestHyperKitDriverInstallOrUpdate
--- PASS: TestHyperKitDriverInstallOrUpdate (10.24s)

TestErrorSpam/start (0.39s)
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-373000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-373000 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-373000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-373000 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-373000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-373000 start --dry-run
--- PASS: TestErrorSpam/start (0.39s)

TestErrorSpam/status (0.09s)
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-373000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-373000 status
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-373000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-373000 status: exit status 7 (31.327167ms)

-- stdout --
	nospam-373000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
error_spam_test.go:161: "out/minikube-darwin-arm64 -p nospam-373000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-373000 status" failed: exit status 7
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-373000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-373000 status
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-373000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-373000 status: exit status 7 (30.552541ms)

-- stdout --
	nospam-373000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
error_spam_test.go:161: "out/minikube-darwin-arm64 -p nospam-373000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-373000 status" failed: exit status 7
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-373000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-373000 status
error_spam_test.go:182: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-373000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-373000 status: exit status 7 (30.580667ms)

-- stdout --
	nospam-373000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
error_spam_test.go:184: "out/minikube-darwin-arm64 -p nospam-373000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-373000 status" failed: exit status 7
--- PASS: TestErrorSpam/status (0.09s)
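The tolerated exit status 7 here (and the "status error: exit status 7 (may be ok)" wording elsewhere in this report) follows from how minikube status encodes state: per minikube status --help, the exit status sets one low bit each for the VM, the cluster, and Kubernetes being not OK, so 7 = 1 + 2 + 4 matches the all-"Stopped" output above rather than indicating a harness error. A small decoding sketch under that assumption:

package main

import "fmt"

func main() {
	const exitStatus = 7 // observed in the status runs above

	// Low three bits, right to left: minikube VM, cluster, Kubernetes.
	fmt.Println("minikube NOK:  ", exitStatus&1 != 0)
	fmt.Println("cluster NOK:   ", exitStatus&2 != 0)
	fmt.Println("kubernetes NOK:", exitStatus&4 != 0)
}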

TestErrorSpam/pause (0.12s)
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-373000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-373000 pause
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-373000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-373000 pause: exit status 83 (39.693167ms)

-- stdout --
	* The control-plane node nospam-373000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p nospam-373000"

-- /stdout --
error_spam_test.go:161: "out/minikube-darwin-arm64 -p nospam-373000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-373000 pause" failed: exit status 83
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-373000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-373000 pause
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-373000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-373000 pause: exit status 83 (39.39125ms)

-- stdout --
	* The control-plane node nospam-373000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p nospam-373000"

-- /stdout --
error_spam_test.go:161: "out/minikube-darwin-arm64 -p nospam-373000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-373000 pause" failed: exit status 83
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-373000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-373000 pause
error_spam_test.go:182: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-373000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-373000 pause: exit status 83 (40.284292ms)

-- stdout --
	* The control-plane node nospam-373000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p nospam-373000"

-- /stdout --
error_spam_test.go:184: "out/minikube-darwin-arm64 -p nospam-373000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-373000 pause" failed: exit status 83
--- PASS: TestErrorSpam/pause (0.12s)

TestErrorSpam/unpause (0.12s)
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-373000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-373000 unpause
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-373000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-373000 unpause: exit status 83 (39.68725ms)

-- stdout --
	* The control-plane node nospam-373000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p nospam-373000"

-- /stdout --
error_spam_test.go:161: "out/minikube-darwin-arm64 -p nospam-373000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-373000 unpause" failed: exit status 83
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-373000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-373000 unpause
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-373000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-373000 unpause: exit status 83 (40.906625ms)

-- stdout --
	* The control-plane node nospam-373000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p nospam-373000"

-- /stdout --
error_spam_test.go:161: "out/minikube-darwin-arm64 -p nospam-373000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-373000 unpause" failed: exit status 83
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-373000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-373000 unpause
error_spam_test.go:182: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-373000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-373000 unpause: exit status 83 (40.877208ms)

-- stdout --
	* The control-plane node nospam-373000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p nospam-373000"

-- /stdout --
error_spam_test.go:184: "out/minikube-darwin-arm64 -p nospam-373000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-373000 unpause" failed: exit status 83
--- PASS: TestErrorSpam/unpause (0.12s)

TestErrorSpam/stop (9.4s)
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-373000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-373000 stop
error_spam_test.go:159: (dbg) Done: out/minikube-darwin-arm64 -p nospam-373000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-373000 stop: (2.981615291s)
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-373000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-373000 stop
error_spam_test.go:159: (dbg) Done: out/minikube-darwin-arm64 -p nospam-373000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-373000 stop: (3.313063708s)
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-373000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-373000 stop
error_spam_test.go:182: (dbg) Done: out/minikube-darwin-arm64 -p nospam-373000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-373000 stop: (3.1060995s)
--- PASS: TestErrorSpam/stop (9.40s)

TestFunctional/serial/CopySyncFile (0s)
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1855: local sync path: /Users/jenkins/minikube-integration/19479-15750/.minikube/files/etc/test/nested/copy/16240/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/AuditLog (0s)
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/CacheCmd/cache/add_remote (1.72s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1049: (dbg) Run:  out/minikube-darwin-arm64 -p functional-916000 cache add registry.k8s.io/pause:3.1
functional_test.go:1049: (dbg) Run:  out/minikube-darwin-arm64 -p functional-916000 cache add registry.k8s.io/pause:3.3
functional_test.go:1049: (dbg) Run:  out/minikube-darwin-arm64 -p functional-916000 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (1.72s)

TestFunctional/serial/CacheCmd/cache/add_local (1.03s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1077: (dbg) Run:  docker build -t minikube-local-cache-test:functional-916000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalserialCacheCmdcacheadd_local3684369574/001
functional_test.go:1089: (dbg) Run:  out/minikube-darwin-arm64 -p functional-916000 cache add minikube-local-cache-test:functional-916000
functional_test.go:1094: (dbg) Run:  out/minikube-darwin-arm64 -p functional-916000 cache delete minikube-local-cache-test:functional-916000
functional_test.go:1083: (dbg) Run:  docker rmi minikube-local-cache-test:functional-916000
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.03s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1102: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

TestFunctional/serial/CacheCmd/cache/list (0.03s)
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1110: (dbg) Run:  out/minikube-darwin-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.03s)

TestFunctional/serial/CacheCmd/cache/delete (0.07s)
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1172: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1172: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.07s)

TestFunctional/parallel/ConfigCmd (0.22s)
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-arm64 -p functional-916000 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-arm64 -p functional-916000 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-916000 config get cpus: exit status 14 (30.519ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-arm64 -p functional-916000 config set cpus 2
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-arm64 -p functional-916000 config get cpus
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-arm64 -p functional-916000 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-arm64 -p functional-916000 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-916000 config get cpus: exit status 14 (36.292542ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.22s)

TestFunctional/parallel/DryRun (0.28s)
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:974: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-916000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 
functional_test.go:974: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-916000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 : exit status 23 (161.8035ms)

-- stdout --
	* [functional-916000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19479
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19479-15750/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19479-15750/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I0819 04:17:49.276231   16892 out.go:345] Setting OutFile to fd 1 ...
	I0819 04:17:49.276402   16892 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 04:17:49.276407   16892 out.go:358] Setting ErrFile to fd 2...
	I0819 04:17:49.276410   16892 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 04:17:49.276588   16892 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19479-15750/.minikube/bin
	I0819 04:17:49.277977   16892 out.go:352] Setting JSON to false
	I0819 04:17:49.297835   16892 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":8237,"bootTime":1724058032,"procs":475,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0819 04:17:49.297903   16892 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0819 04:17:49.303007   16892 out.go:177] * [functional-916000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0819 04:17:49.310866   16892 out.go:177]   - MINIKUBE_LOCATION=19479
	I0819 04:17:49.310894   16892 notify.go:220] Checking for updates...
	I0819 04:17:49.317729   16892 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19479-15750/kubeconfig
	I0819 04:17:49.321912   16892 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0819 04:17:49.324913   16892 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0819 04:17:49.326218   16892 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19479-15750/.minikube
	I0819 04:17:49.328956   16892 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0819 04:17:49.332219   16892 config.go:182] Loaded profile config "functional-916000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0819 04:17:49.332522   16892 driver.go:392] Setting default libvirt URI to qemu:///system
	I0819 04:17:49.336778   16892 out.go:177] * Using the qemu2 driver based on existing profile
	I0819 04:17:49.343865   16892 start.go:297] selected driver: qemu2
	I0819 04:17:49.343872   16892 start.go:901] validating driver "qemu2" against &{Name:functional-916000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:functional-916000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 04:17:49.343923   16892 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0819 04:17:49.350836   16892 out.go:201] 
	W0819 04:17:49.354887   16892 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0819 04:17:49.358892   16892 out.go:201] 

** /stderr **
functional_test.go:991: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-916000 --dry-run --alsologtostderr -v=1 --driver=qemu2 
--- PASS: TestFunctional/parallel/DryRun (0.28s)
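DryRun passes by under-requesting memory on purpose: the --memory 250MB run must be rejected in the validation phase (exit status 23, RSRC_INSUFFICIENT_REQ_MEMORY) before any VM work starts, while the second run without the flag must get past validation. An illustrative sketch of that preflight check (validateMemory is a hypothetical helper, not minikube's implementation; the 1800MB floor is quoted from the error text):

package main

import "fmt"

const minUsableMB = 1800 // floor quoted in the error message above

// validateMemory is a hypothetical stand-in for minikube's preflight
// memory validation: reject the request before any driver work begins.
func validateMemory(requestedMB int) error {
	if requestedMB < minUsableMB {
		return fmt.Errorf("requested memory allocation %dMiB is less than the usable minimum of %dMB",
			requestedMB, minUsableMB)
	}
	return nil
}

func main() {
	fmt.Println(validateMemory(250)) // mirrors the dry run's --memory 250MB
}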

TestFunctional/parallel/InternationalLanguage (0.11s)
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1020: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-916000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 
functional_test.go:1020: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-916000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 : exit status 23 (114.040625ms)

-- stdout --
	* [functional-916000] minikube v1.33.1 sur Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19479
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19479-15750/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19479-15750/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote qemu2 basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I0819 04:17:49.507516   16903 out.go:345] Setting OutFile to fd 1 ...
	I0819 04:17:49.507641   16903 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 04:17:49.507645   16903 out.go:358] Setting ErrFile to fd 2...
	I0819 04:17:49.507647   16903 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 04:17:49.507790   16903 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19479-15750/.minikube/bin
	I0819 04:17:49.509226   16903 out.go:352] Setting JSON to false
	I0819 04:17:49.525864   16903 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":8237,"bootTime":1724058032,"procs":475,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0819 04:17:49.525947   16903 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0819 04:17:49.530970   16903 out.go:177] * [functional-916000] minikube v1.33.1 sur Darwin 14.5 (arm64)
	I0819 04:17:49.537929   16903 out.go:177]   - MINIKUBE_LOCATION=19479
	I0819 04:17:49.538010   16903 notify.go:220] Checking for updates...
	I0819 04:17:49.545848   16903 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19479-15750/kubeconfig
	I0819 04:17:49.548903   16903 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0819 04:17:49.551905   16903 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0819 04:17:49.554928   16903 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19479-15750/.minikube
	I0819 04:17:49.557875   16903 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0819 04:17:49.561256   16903 config.go:182] Loaded profile config "functional-916000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0819 04:17:49.561548   16903 driver.go:392] Setting default libvirt URI to qemu:///system
	I0819 04:17:49.565838   16903 out.go:177] * Utilisation du pilote qemu2 basé sur le profil existant
	I0819 04:17:49.572937   16903 start.go:297] selected driver: qemu2
	I0819 04:17:49.572950   16903 start.go:901] validating driver "qemu2" against &{Name:functional-916000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kub
ernetesVersion:v1.31.0 ClusterName:functional-916000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mo
unt9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 04:17:49.573026   16903 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0819 04:17:49.579938   16903 out.go:201] 
	W0819 04:17:49.583886   16903 out.go:270] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0819 04:17:49.587821   16903 out.go:201] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.11s)

TestFunctional/parallel/AddonsCmd (0.09s)
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1690: (dbg) Run:  out/minikube-darwin-arm64 -p functional-916000 addons list
functional_test.go:1702: (dbg) Run:  out/minikube-darwin-arm64 -p functional-916000 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.09s)

TestFunctional/parallel/License (0.24s)
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2288: (dbg) Run:  out/minikube-darwin-arm64 license
--- PASS: TestFunctional/parallel/License (0.24s)

TestFunctional/parallel/Version/short (0.04s)
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2256: (dbg) Run:  out/minikube-darwin-arm64 -p functional-916000 version --short
--- PASS: TestFunctional/parallel/Version/short (0.04s)

TestFunctional/parallel/ImageCommands/Setup (1.68s)
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:342: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:342: (dbg) Done: docker pull kicbase/echo-server:1.0: (1.651610458s)
functional_test.go:347: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-916000
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.68s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-916000 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.07s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:392: (dbg) Run:  out/minikube-darwin-arm64 -p functional-916000 image rm kicbase/echo-server:functional-916000 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-darwin-arm64 -p functional-916000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.07s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.08s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:419: (dbg) Run:  docker rmi kicbase/echo-server:functional-916000
functional_test.go:424: (dbg) Run:  out/minikube-darwin-arm64 -p functional-916000 image save --daemon kicbase/echo-server:functional-916000 --alsologtostderr
functional_test.go:432: (dbg) Run:  docker image inspect kicbase/echo-server:functional-916000
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.08s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.09s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1270: (dbg) Run:  out/minikube-darwin-arm64 profile lis
functional_test.go:1275: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.09s)

TestFunctional/parallel/ProfileCmd/profile_list (0.08s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1310: (dbg) Run:  out/minikube-darwin-arm64 profile list
functional_test.go:1315: Took "49.753084ms" to run "out/minikube-darwin-arm64 profile list"
functional_test.go:1324: (dbg) Run:  out/minikube-darwin-arm64 profile list -l
functional_test.go:1329: Took "32.722834ms" to run "out/minikube-darwin-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.08s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.08s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1361: (dbg) Run:  out/minikube-darwin-arm64 profile list -o json
functional_test.go:1366: Took "47.0815ms" to run "out/minikube-darwin-arm64 profile list -o json"
functional_test.go:1374: (dbg) Run:  out/minikube-darwin-arm64 profile list -o json --light
functional_test.go:1379: Took "33.794333ms" to run "out/minikube-darwin-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.08s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (10.04s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:351: (dbg) Run:  dscacheutil -q host -a name nginx-svc.default.svc.cluster.local.
functional_test_tunnel_test.go:351: (dbg) Done: dscacheutil -q host -a name nginx-svc.default.svc.cluster.local.: (10.012880375s)
functional_test_tunnel_test.go:359: DNS resolution by dscacheutil for nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (10.04s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.16s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-darwin-arm64 -p functional-916000 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.16s)

TestFunctional/delete_echo-server_images (0.07s)
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-916000
--- PASS: TestFunctional/delete_echo-server_images (0.07s)

TestFunctional/delete_my-image_image (0.02s)
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:198: (dbg) Run:  docker rmi -f localhost/my-image:functional-916000
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.02s)
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:206: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-916000
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestJSONOutput/start/Audit (0s)
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Audit (0s)
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Audit (0s)
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (3.21s)
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 stop -p json-output-842000 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-darwin-arm64 stop -p json-output-842000 --output=json --user=testUser: (3.211878625s)
--- PASS: TestJSONOutput/stop/Command (3.21s)

TestJSONOutput/stop/Audit (0s)
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.2s)
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-darwin-arm64 start -p json-output-error-480000 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p json-output-error-480000 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (95.105041ms)

-- stdout --
	{"specversion":"1.0","id":"6955ab2c-dd1b-4e50-976e-316ecd4bf204","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-480000] minikube v1.33.1 on Darwin 14.5 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"606131a8-048a-4b99-9a7b-08e087691de8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19479"}}
	{"specversion":"1.0","id":"86750361-8548-4bde-8f1f-fa6496773402","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/19479-15750/kubeconfig"}}
	{"specversion":"1.0","id":"d7c47f64-63b5-47e5-a2e9-9d2345e9cd6c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"43cc556e-7520-400f-ae79-7fda181213fb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"2b3a80cd-5923-4db7-b012-77c8541b85d2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/19479-15750/.minikube"}}
	{"specversion":"1.0","id":"d2ac7175-74f9-439c-b881-acb549448426","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"c23ed1aa-b4dc-4dc8-bc5c-2d1f3bfb66c0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on darwin/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-480000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p json-output-error-480000
--- PASS: TestErrorJSONOutput (0.20s)
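Each line of the --output=json stream above is a self-contained CloudEvents-style record (specversion, id, source, type, data), so the stream can be post-processed one line at a time. A minimal sketch for extracting only the error events, assuming jq is available on the host (jq is not part of this test suite):

$ out/minikube-darwin-arm64 start -p json-output-error-480000 --memory=2200 --output=json --wait=true --driver=fail | jq -c 'select(.type == "io.k8s.sigs.minikube.error") | .data'

Against the output above, this would print the single DRV_UNSUPPORTED_OS record with exitcode 56.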

TestMainNoArgs (0.03s)
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-darwin-arm64
--- PASS: TestMainNoArgs (0.03s)

TestStoppedBinaryUpgrade/Setup (1.3s)
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (1.30s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.11s)
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-227000 --no-kubernetes --kubernetes-version=1.20 --driver=qemu2 
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-227000 --no-kubernetes --kubernetes-version=1.20 --driver=qemu2 : exit status 14 (108.737834ms)

-- stdout --
	* [NoKubernetes-227000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19479
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19479-15750/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19479-15750/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.11s)
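The MK_USAGE exit above reflects a flag constraint rather than a crash: --no-kubernetes is mutually exclusive with --kubernetes-version, and the error text suggests a version pinned in the global config trips the same check. A hedged sketch of the two ways around it, using only commands already shown in this log:

$ minikube config unset kubernetes-version
$ out/minikube-darwin-arm64 start -p NoKubernetes-227000 --no-kubernetes --driver=qemu2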

TestNoKubernetes/serial/VerifyK8sNotRunning (0.05s)
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-arm64 ssh -p NoKubernetes-227000 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p NoKubernetes-227000 "sudo systemctl is-active --quiet service kubelet": exit status 83 (45.646541ms)

-- stdout --
	* The control-plane node NoKubernetes-227000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p NoKubernetes-227000"

-- /stdout --
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.05s)

TestNoKubernetes/serial/ProfileList (31.38s)
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-darwin-arm64 profile list
no_kubernetes_test.go:169: (dbg) Done: out/minikube-darwin-arm64 profile list: (15.656762125s)
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-darwin-arm64 profile list --output=json
no_kubernetes_test.go:179: (dbg) Done: out/minikube-darwin-arm64 profile list --output=json: (15.727777834s)
--- PASS: TestNoKubernetes/serial/ProfileList (31.38s)

TestNoKubernetes/serial/Stop (2.76s)
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-darwin-arm64 stop -p NoKubernetes-227000
no_kubernetes_test.go:158: (dbg) Done: out/minikube-darwin-arm64 stop -p NoKubernetes-227000: (2.763049375s)
--- PASS: TestNoKubernetes/serial/Stop (2.76s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.04s)
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-arm64 ssh -p NoKubernetes-227000 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p NoKubernetes-227000 "sudo systemctl is-active --quiet service kubelet": exit status 83 (43.652ms)

-- stdout --
	* The control-plane node NoKubernetes-227000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p NoKubernetes-227000"

-- /stdout --
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.04s)

TestStoppedBinaryUpgrade/MinikubeLogs (0.64s)
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-darwin-arm64 logs -p stopped-upgrade-783000
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.64s)

TestStartStop/group/old-k8s-version/serial/Stop (3.76s)
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p old-k8s-version-916000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p old-k8s-version-916000 --alsologtostderr -v=3: (3.759263625s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (3.76s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.12s)
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-916000 -n old-k8s-version-916000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-916000 -n old-k8s-version-916000: exit status 7 (56.278709ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p old-k8s-version-916000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.12s)

TestStartStop/group/no-preload/serial/Stop (3.54s)
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p no-preload-037000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p no-preload-037000 --alsologtostderr -v=3: (3.544637833s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (3.54s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.12s)
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-037000 -n no-preload-037000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-037000 -n no-preload-037000: exit status 7 (54.132375ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p no-preload-037000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.12s)

TestStartStop/group/embed-certs/serial/Stop (2.62s)
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p embed-certs-718000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p embed-certs-718000 --alsologtostderr -v=3: (2.61548725s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (2.62s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (3.5s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p default-k8s-diff-port-664000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p default-k8s-diff-port-664000 --alsologtostderr -v=3: (3.503765208s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (3.50s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.12s)
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-718000 -n embed-certs-718000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-718000 -n embed-certs-718000: exit status 7 (51.734458ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p embed-certs-718000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.12s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.12s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-664000 -n default-k8s-diff-port-664000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-664000 -n default-k8s-diff-port-664000: exit status 7 (55.860875ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p default-k8s-diff-port-664000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.12s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.06s)
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p newest-cni-501000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.06s)

TestStartStop/group/newest-cni/serial/Stop (3.44s)
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p newest-cni-501000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p newest-cni-501000 --alsologtostderr -v=3: (3.435702s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (3.44s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.12s)
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-501000 -n newest-cni-501000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-501000 -n newest-cni-501000: exit status 7 (57.789667ms)

-- stdout --
	Stopped

                                                
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p newest-cni-501000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.12s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

Test skip (22/258)

TestDownloadOnly/v1.20.0/cached-images (0s)
=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

TestDownloadOnly/v1.20.0/binaries (0s)
=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

TestDownloadOnly/v1.31.0/cached-images (0s)
=== RUN   TestDownloadOnly/v1.31.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.31.0/cached-images (0.00s)

TestDownloadOnly/v1.31.0/binaries (0s)
=== RUN   TestDownloadOnly/v1.31.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.31.0/binaries (0.00s)

TestDownloadOnlyKic (0s)
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

TestDockerEnvContainerd (0s)
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker false darwin arm64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestKVMDriverInstallOrUpdate (0s)
=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:41: Skip if not linux.
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestFunctional/parallel/MySQL (0s)
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1787: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

TestFunctional/parallel/PodmanEnv (0s)
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:550: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/MountCmd/any-port (14.61s)
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-916000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port3121332809/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1724066234529156000" to /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port3121332809/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1724066234529156000" to /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port3121332809/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1724066234529156000" to /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port3121332809/001/test-1724066234529156000
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-916000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-916000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (56.786541ms)

-- stdout --
	* The control-plane node functional-916000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-916000"

-- /stdout --
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-916000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-916000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (83.281125ms)

-- stdout --
	* The control-plane node functional-916000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-916000"

-- /stdout --
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-916000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-916000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (86.644292ms)

-- stdout --
	* The control-plane node functional-916000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-916000"

-- /stdout --
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-916000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-916000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (86.568125ms)

-- stdout --
	* The control-plane node functional-916000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-916000"

-- /stdout --
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-916000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-916000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (88.268541ms)

-- stdout --
	* The control-plane node functional-916000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-916000"

-- /stdout --
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-916000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-916000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (87.938416ms)

-- stdout --
	* The control-plane node functional-916000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-916000"

-- /stdout --
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-916000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-916000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (82.262583ms)

-- stdout --
	* The control-plane node functional-916000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-916000"

-- /stdout --
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-916000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-916000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (87.152875ms)

-- stdout --
	* The control-plane node functional-916000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-916000"

-- /stdout --
functional_test_mount_test.go:123: skipping: mount did not appear, likely because macOS requires prompt to allow non-code signed binaries to listen on non-localhost port
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-darwin-arm64 -p functional-916000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:90: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-916000 ssh "sudo umount -f /mount-9p": exit status 83 (47.767333ms)

-- stdout --
	* The control-plane node functional-916000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-916000"

-- /stdout --
functional_test_mount_test.go:92: "out/minikube-darwin-arm64 -p functional-916000 ssh \"sudo umount -f /mount-9p\"": exit status 83
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-916000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port3121332809/001:/mount-9p --alsologtostderr -v=1] ...
--- SKIP: TestFunctional/parallel/MountCmd/any-port (14.61s)
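The skip above is deliberate: the repeated ssh probes for the 9p mount cannot succeed because macOS will not let the unsigned mount helper listen on a non-localhost port until a user clicks through the firewall prompt, which a headless CI agent cannot do. A hedged diagnostic sketch for checking whether the mount server ever started listening (assuming lsof is available on the macOS host; the port is the one passed explicitly in the specific-port variant below):

$ lsof -nP -iTCP:46464 -sTCP:LISTEN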

TestFunctional/parallel/MountCmd/specific-port (8.24s)
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-916000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdspecific-port1669804043/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-916000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-916000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (61.765375ms)

-- stdout --
	* The control-plane node functional-916000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-916000"

-- /stdout --
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-916000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-916000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (85.798459ms)

-- stdout --
	* The control-plane node functional-916000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-916000"

-- /stdout --
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-916000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-916000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (87.22925ms)

-- stdout --
	* The control-plane node functional-916000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-916000"

-- /stdout --
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-916000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-916000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (86.521125ms)

-- stdout --
	* The control-plane node functional-916000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-916000"

-- /stdout --
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-916000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-916000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (86.475333ms)

-- stdout --
	* The control-plane node functional-916000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-916000"

-- /stdout --
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-916000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-916000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (86.093959ms)

-- stdout --
	* The control-plane node functional-916000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-916000"

-- /stdout --
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-916000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-916000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (88.744417ms)

-- stdout --
	* The control-plane node functional-916000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-916000"

-- /stdout --
functional_test_mount_test.go:251: skipping: mount did not appear, likely because macOS requires prompt to allow non-code signed binaries to listen on non-localhost port
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-darwin-arm64 -p functional-916000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-916000 ssh "sudo umount -f /mount-9p": exit status 83 (45.746791ms)

-- stdout --
	* The control-plane node functional-916000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-916000"

-- /stdout --
functional_test_mount_test.go:232: "out/minikube-darwin-arm64 -p functional-916000 ssh \"sudo umount -f /mount-9p\"": exit status 83
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-916000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdspecific-port1669804043/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- SKIP: TestFunctional/parallel/MountCmd/specific-port (8.24s)
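
Note: this variant pins the 9p server to --port 46464 instead of letting minikube pick one, and it skips for the same firewall-prompt reason as any-port. One way to check from the host side whether a pinned port is actually accepting connections is a plain TCP dial; a sketch follows. The 192.168.105.1 address is an assumption about the host side of the QEMU network, not something taken from this log.

// Sketch: check whether the mount server is listening on the pinned port.
// The address is hypothetical; only the port (46464) comes from this log.
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	conn, err := net.DialTimeout("tcp", "192.168.105.1:46464", 2*time.Second)
	if err != nil {
		fmt.Println("mount server not reachable:", err)
		return
	}
	defer conn.Close()
	fmt.Println("mount server is accepting connections on 46464")
}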

TestFunctional/parallel/MountCmd/VerifyCleanup (11.83s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-916000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup1052219815/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-916000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup1052219815/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-916000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup1052219815/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-916000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-916000 ssh "findmnt -T" /mount1: exit status 83 (89.713834ms)

-- stdout --
	* The control-plane node functional-916000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-916000"

-- /stdout --
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-916000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-916000 ssh "findmnt -T" /mount1: exit status 83 (82.486125ms)

-- stdout --
	* The control-plane node functional-916000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-916000"

-- /stdout --
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-916000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-916000 ssh "findmnt -T" /mount1: exit status 83 (84.038833ms)

-- stdout --
	* The control-plane node functional-916000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-916000"

-- /stdout --
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-916000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-916000 ssh "findmnt -T" /mount1: exit status 83 (85.178875ms)

-- stdout --
	* The control-plane node functional-916000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-916000"

-- /stdout --
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-916000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-916000 ssh "findmnt -T" /mount1: exit status 83 (86.403959ms)

-- stdout --
	* The control-plane node functional-916000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-916000"

-- /stdout --
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-916000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-916000 ssh "findmnt -T" /mount1: exit status 83 (86.518375ms)

-- stdout --
	* The control-plane node functional-916000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-916000"

-- /stdout --
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-916000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-916000 ssh "findmnt -T" /mount1: exit status 83 (88.601542ms)

-- stdout --
	* The control-plane node functional-916000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-916000"

-- /stdout --
functional_test_mount_test.go:340: skipping: mount did not appear, likely because macOS requires prompt to allow non-code signed binaries to listen on non-localhost port
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-916000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup1052219815/001:/mount1 --alsologtostderr -v=1] ...
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-916000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup1052219815/001:/mount2 --alsologtostderr -v=1] ...
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-916000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup1052219815/001:/mount3 --alsologtostderr -v=1] ...
--- SKIP: TestFunctional/parallel/MountCmd/VerifyCleanup (11.83s)
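
Note: VerifyCleanup starts three mount daemons (/mount1, /mount2, /mount3) and then checks they can all be torn down. The guest-side part of that teardown, approximated as a sketch (binary path, profile, and mount points from this log; errors deliberately tolerated, since here the guest is already stopped and every command returns exit status 83):

// Sketch: force-unmount each guest path, mirroring the test teardown.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	for _, mount := range []string{"/mount1", "/mount2", "/mount3"} {
		err := exec.Command("out/minikube-darwin-arm64", "-p", "functional-916000",
			"ssh", "sudo umount -f "+mount).Run()
		fmt.Printf("umount %s: err=%v\n", mount, err) // non-nil while the guest is stopped
	}
}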

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)
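
Note: this is an opt-in test; gvisor_addon_test.go:34 skips unless the suite is invoked with --gvisor. A sketch of that gating pattern follows; the flag name is taken from the log message, but the real definition in the suite may differ.

// Sketch of the opt-in gate: the test skips unless the flag is passed
// to the test binary (e.g. go test -run TestGvisorAddon -args --gvisor).
package integration

import (
	"flag"
	"testing"
)

var gvisor = flag.Bool("gvisor", false, "run tests that require the gVisor addon")

func TestGvisorAddonGate(t *testing.T) {
	if !*gvisor {
		t.Skip("skipping test because --gvisor=false")
	}
	// ... the addon test proper would run here ...
}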

TestKicCustomNetwork (0s)

=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)
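
Note: this and the next several KIC tests (existing network, custom subnet, static IP) are gated the same way: they only make sense for container drivers, so on the QEMU driver they skip immediately. The shape of that gate, sketched below with a hypothetical helper (skipUnlessContainerDriver is not the suite's real function name):

// Sketch of a driver gate: skip unless the suite is driving docker/podman.
package integration

import "testing"

func skipUnlessContainerDriver(t *testing.T, driver string) {
	t.Helper()
	if driver != "docker" && driver != "podman" {
		t.Skipf("only runs with docker/podman driver, got %q", driver)
	}
}

func TestKicGateExample(t *testing.T) {
	skipUnlessContainerDriver(t, "qemu2") // this run used the QEMU driver, so it skips
	// ... test body would follow ...
}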

TestKicExistingNetwork (0s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

TestKicCustomSubnet (0s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

TestKicStaticIP (0s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)
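
Note: unlike the driver gates above, scheduled_stop_test.go:42 gates on the host OS, so on darwin it skips unconditionally. The check it amounts to is a one-line runtime.GOOS comparison (sketch; the test's real body is elided):

// Sketch of the OS gate used by the Windows-only scheduled-stop test.
package integration

import (
	"runtime"
	"testing"
)

func TestScheduledStopWindowsGate(t *testing.T) {
	if runtime.GOOS != "windows" {
		t.Skip("test only runs on windows")
	}
	// ... scheduled-stop assertions would follow ...
}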

TestInsufficientStorage (0s)

=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

TestMissingContainerUpgrade (0s)

=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

TestNetworkPlugins/group/cilium (2.37s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:626: 
----------------------- debugLogs start: cilium-714000 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-714000

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-714000

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-714000

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-714000

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-714000

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-714000

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-714000

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-714000

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-714000

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-714000

>>> host: /etc/nsswitch.conf:
* Profile "cilium-714000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-714000"

>>> host: /etc/hosts:
* Profile "cilium-714000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-714000"

>>> host: /etc/resolv.conf:
* Profile "cilium-714000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-714000"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-714000

>>> host: crictl pods:
* Profile "cilium-714000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-714000"

>>> host: crictl containers:
* Profile "cilium-714000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-714000"

>>> k8s: describe netcat deployment:
error: context "cilium-714000" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-714000" does not exist

>>> k8s: netcat logs:
error: context "cilium-714000" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-714000" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-714000" does not exist

>>> k8s: coredns logs:
error: context "cilium-714000" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-714000" does not exist

>>> k8s: api server logs:
error: context "cilium-714000" does not exist

>>> host: /etc/cni:
* Profile "cilium-714000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-714000"

>>> host: ip a s:
* Profile "cilium-714000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-714000"

>>> host: ip r s:
* Profile "cilium-714000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-714000"

>>> host: iptables-save:
* Profile "cilium-714000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-714000"

>>> host: iptables table nat:
* Profile "cilium-714000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-714000"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-714000

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-714000

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-714000" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-714000" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-714000

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-714000

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-714000" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-714000" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-714000" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-714000" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-714000" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-714000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-714000"

>>> host: kubelet daemon config:
* Profile "cilium-714000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-714000"

>>> k8s: kubelet logs:
* Profile "cilium-714000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-714000"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-714000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-714000"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-714000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-714000"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-714000

>>> host: docker daemon status:
* Profile "cilium-714000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-714000"

>>> host: docker daemon config:
* Profile "cilium-714000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-714000"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-714000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-714000"

>>> host: docker system info:
* Profile "cilium-714000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-714000"

>>> host: cri-docker daemon status:
* Profile "cilium-714000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-714000"

>>> host: cri-docker daemon config:
* Profile "cilium-714000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-714000"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-714000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-714000"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-714000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-714000"

>>> host: cri-dockerd version:
* Profile "cilium-714000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-714000"

>>> host: containerd daemon status:
* Profile "cilium-714000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-714000"

>>> host: containerd daemon config:
* Profile "cilium-714000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-714000"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-714000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-714000"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-714000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-714000"

>>> host: containerd config dump:
* Profile "cilium-714000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-714000"

>>> host: crio daemon status:
* Profile "cilium-714000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-714000"

>>> host: crio daemon config:
* Profile "cilium-714000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-714000"

>>> host: /etc/crio:
* Profile "cilium-714000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-714000"

>>> host: crio config:
* Profile "cilium-714000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-714000"

----------------------- debugLogs end: cilium-714000 [took: 2.271165917s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-714000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cilium-714000
--- SKIP: TestNetworkPlugins/group/cilium (2.37s)
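
Note: every error in the debugLogs dump above has the same root cause: the cilium-714000 profile was never started, so neither a kubeconfig context nor a minikube profile exists for it (the kubectl config entry shows an empty config). A quick way to confirm which contexts do exist before reading such a dump is plain kubectl, sketched here:

// Sketch: list known kubeconfig contexts; cilium-714000 would be absent here.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	out, err := exec.Command("kubectl", "config", "get-contexts", "-o", "name").CombinedOutput()
	if err != nil {
		fmt.Println("kubectl failed:", err)
		return
	}
	fmt.Printf("known contexts:\n%s", out)
}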

TestStartStop/group/disable-driver-mounts (0.16s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-970000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p disable-driver-mounts-970000
--- SKIP: TestStartStop/group/disable-driver-mounts (0.16s)
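
Note: even a skipped group leaves a profile entry behind, which is why helpers_test.go:175 still runs a delete. That cleanup amounts to the following sketch (binary path and profile name from this log; not the suite's actual helper):

// Sketch: delete the leftover profile, as the helpers do after a skip.
package main

import (
	"log"
	"os/exec"
)

func main() {
	out, err := exec.Command("out/minikube-darwin-arm64", "delete",
		"-p", "disable-driver-mounts-970000").CombinedOutput()
	if err != nil {
		log.Fatalf("delete failed: %v\n%s", err, out)
	}
	log.Printf("profile deleted:\n%s", out)
}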
