Test Report: QEMU_macOS 19423

                    
74b5ac7e1cfb7233a98e35daf2ce49e3acb00be2:2024-08-19:35861

Failed tests (156/258)

Order  Failed test  Duration (s)
3 TestDownloadOnly/v1.20.0/json-events 12.68
7 TestDownloadOnly/v1.20.0/kubectl 0
22 TestOffline 10.09
27 TestAddons/Setup 10.91
28 TestCertOptions 10.2
29 TestCertExpiration 195.35
30 TestDockerFlags 10.24
31 TestForceSystemdFlag 10.29
32 TestForceSystemdEnv 11.56
38 TestErrorSpam/setup 9.95
47 TestFunctional/serial/StartWithProxy 9.89
49 TestFunctional/serial/SoftStart 5.26
50 TestFunctional/serial/KubeContext 0.06
51 TestFunctional/serial/KubectlGetPods 0.06
58 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.04
59 TestFunctional/serial/CacheCmd/cache/cache_reload 0.16
61 TestFunctional/serial/MinikubeKubectlCmd 0.78
62 TestFunctional/serial/MinikubeKubectlCmdDirectly 1.05
63 TestFunctional/serial/ExtraConfig 5.26
64 TestFunctional/serial/ComponentHealth 0.06
65 TestFunctional/serial/LogsCmd 0.08
66 TestFunctional/serial/LogsFileCmd 0.07
67 TestFunctional/serial/InvalidService 0.03
70 TestFunctional/parallel/DashboardCmd 0.2
73 TestFunctional/parallel/StatusCmd 0.17
77 TestFunctional/parallel/ServiceCmdConnect 0.14
79 TestFunctional/parallel/PersistentVolumeClaim 0.04
81 TestFunctional/parallel/SSHCmd 0.11
82 TestFunctional/parallel/CpCmd 0.27
84 TestFunctional/parallel/FileSync 0.08
85 TestFunctional/parallel/CertSync 0.29
89 TestFunctional/parallel/NodeLabels 0.06
91 TestFunctional/parallel/NonActiveRuntimeDisabled 0.04
94 TestFunctional/parallel/DockerEnv/bash 0.05
95 TestFunctional/parallel/UpdateContextCmd/no_changes 0.04
96 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.04
97 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.04
99 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.07
102 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 0
103 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 101.54
104 TestFunctional/parallel/ServiceCmd/DeployApp 0.03
105 TestFunctional/parallel/ServiceCmd/List 0.04
106 TestFunctional/parallel/ServiceCmd/JSONOutput 0.04
107 TestFunctional/parallel/ServiceCmd/HTTPS 0.04
108 TestFunctional/parallel/ServiceCmd/Format 0.04
109 TestFunctional/parallel/ServiceCmd/URL 0.04
117 TestFunctional/parallel/Version/components 0.04
118 TestFunctional/parallel/ImageCommands/ImageListShort 0.04
119 TestFunctional/parallel/ImageCommands/ImageListTable 0.04
120 TestFunctional/parallel/ImageCommands/ImageListJson 0.04
121 TestFunctional/parallel/ImageCommands/ImageListYaml 0.04
122 TestFunctional/parallel/ImageCommands/ImageBuild 0.11
124 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 0.31
125 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.28
126 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.17
127 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.04
129 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.07
131 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 15.06
133 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 35.85
141 TestMultiControlPlane/serial/StartCluster 9.95
142 TestMultiControlPlane/serial/DeployApp 66.36
143 TestMultiControlPlane/serial/PingHostFromPods 0.09
144 TestMultiControlPlane/serial/AddWorkerNode 0.07
145 TestMultiControlPlane/serial/NodeLabels 0.06
146 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.08
147 TestMultiControlPlane/serial/CopyFile 0.06
148 TestMultiControlPlane/serial/StopSecondaryNode 0.11
149 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.08
150 TestMultiControlPlane/serial/RestartSecondaryNode 52.59
151 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.08
152 TestMultiControlPlane/serial/RestartClusterKeepsNodes 8.93
153 TestMultiControlPlane/serial/DeleteSecondaryNode 0.1
154 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.08
155 TestMultiControlPlane/serial/StopCluster 3.52
156 TestMultiControlPlane/serial/RestartCluster 5.25
157 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.08
158 TestMultiControlPlane/serial/AddSecondaryNode 0.07
159 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.08
162 TestImageBuild/serial/Setup 9.86
165 TestJSONOutput/start/Command 9.82
171 TestJSONOutput/pause/Command 0.08
177 TestJSONOutput/unpause/Command 0.05
194 TestMinikubeProfile 10.19
197 TestMountStart/serial/StartWithMountFirst 10.14
200 TestMultiNode/serial/FreshStart2Nodes 9.95
201 TestMultiNode/serial/DeployApp2Nodes 119.12
202 TestMultiNode/serial/PingHostFrom2Pods 0.09
203 TestMultiNode/serial/AddNode 0.07
204 TestMultiNode/serial/MultiNodeLabels 0.06
205 TestMultiNode/serial/ProfileList 0.08
206 TestMultiNode/serial/CopyFile 0.06
207 TestMultiNode/serial/StopNode 0.14
208 TestMultiNode/serial/StartAfterStop 56.95
209 TestMultiNode/serial/RestartKeepsNodes 7.37
210 TestMultiNode/serial/DeleteNode 0.1
211 TestMultiNode/serial/StopMultiNode 3.03
212 TestMultiNode/serial/RestartMultiNode 5.25
213 TestMultiNode/serial/ValidateNameConflict 21.83
217 TestPreload 10.41
219 TestScheduledStopUnix 10.22
220 TestSkaffold 13.4
223 TestRunningBinaryUpgrade 613.08
225 TestKubernetesUpgrade 17.68
238 TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current 1.95
239 TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current 1.67
241 TestStoppedBinaryUpgrade/Upgrade 575.5
243 TestPause/serial/Start 10
253 TestNoKubernetes/serial/StartWithK8s 9.85
254 TestNoKubernetes/serial/StartWithStopK8s 5.31
255 TestNoKubernetes/serial/Start 5.31
259 TestNoKubernetes/serial/StartNoArgs 5.32
261 TestNetworkPlugins/group/auto/Start 9.85
262 TestNetworkPlugins/group/custom-flannel/Start 9.82
263 TestNetworkPlugins/group/false/Start 9.79
264 TestNetworkPlugins/group/calico/Start 9.77
265 TestNetworkPlugins/group/kindnet/Start 9.86
266 TestNetworkPlugins/group/flannel/Start 9.74
267 TestNetworkPlugins/group/enable-default-cni/Start 9.7
268 TestNetworkPlugins/group/bridge/Start 10.05
270 TestNetworkPlugins/group/kubenet/Start 9.93
272 TestStartStop/group/old-k8s-version/serial/FirstStart 9.94
273 TestStartStop/group/old-k8s-version/serial/DeployApp 0.09
274 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 0.11
277 TestStartStop/group/old-k8s-version/serial/SecondStart 5.27
279 TestStartStop/group/no-preload/serial/FirstStart 10.11
280 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 0.03
281 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 0.06
282 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.07
283 TestStartStop/group/old-k8s-version/serial/Pause 0.1
285 TestStartStop/group/embed-certs/serial/FirstStart 10.1
286 TestStartStop/group/no-preload/serial/DeployApp 0.09
287 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.11
290 TestStartStop/group/no-preload/serial/SecondStart 6.56
291 TestStartStop/group/embed-certs/serial/DeployApp 0.09
292 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.11
295 TestStartStop/group/embed-certs/serial/SecondStart 5.27
296 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 0.03
297 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 0.06
298 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.07
299 TestStartStop/group/no-preload/serial/Pause 0.1
301 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 9.83
302 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 0.03
303 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 0.06
304 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.07
305 TestStartStop/group/embed-certs/serial/Pause 0.1
307 TestStartStop/group/newest-cni/serial/FirstStart 9.92
308 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 0.09
309 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.11
315 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 5.27
317 TestStartStop/group/newest-cni/serial/SecondStart 5.26
318 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 0.03
319 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 0.06
320 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.07
321 TestStartStop/group/default-k8s-diff-port/serial/Pause 0.1
324 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.07
325 TestStartStop/group/newest-cni/serial/Pause 0.1
TestDownloadOnly/v1.20.0/json-events (12.68s)

=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -o=json --download-only -p download-only-927000 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=qemu2 
aaa_download_only_test.go:81: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -o=json --download-only -p download-only-927000 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=qemu2 : exit status 40 (12.68245225s)

-- stdout --
	{"specversion":"1.0","id":"23fead06-bfff-4413-a0b5-a3b33e318456","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[download-only-927000] minikube v1.33.1 on Darwin 14.5 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"7eb63ae9-a8d4-4921-bc18-1ef7b54e4b81","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19423"}}
	{"specversion":"1.0","id":"4cd73a6a-1481-4574-b2a3-058f270b5cd5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/19423-17178/kubeconfig"}}
	{"specversion":"1.0","id":"7667ebde-072b-4043-94cf-dc96dd48d103","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"aa875a4f-ae2d-493e-b51b-19732e098f23","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"093d954b-d436-498f-95fc-82a41e778f03","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/19423-17178/.minikube"}}
	{"specversion":"1.0","id":"01e52a4d-3468-437b-b63b-c2fa2713d1fd","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.warning","datacontenttype":"application/json","data":{"message":"minikube skips various validations when --force is supplied; this may lead to unexpected behavior"}}
	{"specversion":"1.0","id":"57d0b5bd-ebd2-4435-b824-8ef8abb2a1f5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the qemu2 driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"de0b60ec-8698-4c94-b9d5-9c7bd69f2824","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Automatically selected the socket_vmnet network"}}
	{"specversion":"1.0","id":"7acda5e2-c5a6-46f0-b836-6a3c20843105","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Downloading VM boot image ...","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"28edd23a-7aff-4006-aa00-92a49acbc670","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"download-only-927000\" primary control-plane node in \"download-only-927000\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"6e444119-9262-4d52-8943-956e7bb63e38","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Downloading Kubernetes v1.20.0 preload ...","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"25b3d94f-1828-4240-9b66-5aa7b5373c73","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"40","issues":"","message":"Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256: getter: \u0026{Ctx:context.Background Src:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/19423-17178/.minikube/cache/darwin/arm64/v1.20.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x104a039a0 0x104a039a0 0x104a039a0 0x104a039a0 0x104a039a0 0x104a039a0 0x104a039a0] Decompressors:map[bz2:0x140003e1920 gz:0x140003e1928 tar:0x140003e18d0 tar.bz2:0x140003e18e0 tar.gz:0x140003e18f0 tar.xz:0x140003e1900 tar.zst:0x140003e1910 tbz2:0x140003e18e0 tgz:0x1
40003e18f0 txz:0x140003e1900 tzst:0x140003e1910 xz:0x140003e1930 zip:0x140003e1940 zst:0x140003e1938] Getters:map[file:0x1400176a610 http:0x140001722d0 https:0x14000172320] Dir:false ProgressListener:\u003cnil\u003e Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404","name":"INET_CACHE_KUBECTL","url":""}}
	{"specversion":"1.0","id":"4bdc928f-166e-446f-bb4a-92ba8edc59c2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│
│\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

-- /stdout --
** stderr ** 
	I0819 11:31:15.901494   17656 out.go:345] Setting OutFile to fd 1 ...
	I0819 11:31:15.901657   17656 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 11:31:15.901661   17656 out.go:358] Setting ErrFile to fd 2...
	I0819 11:31:15.901663   17656 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 11:31:15.901792   17656 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19423-17178/.minikube/bin
	W0819 11:31:15.901892   17656 root.go:314] Error reading config file at /Users/jenkins/minikube-integration/19423-17178/.minikube/config/config.json: open /Users/jenkins/minikube-integration/19423-17178/.minikube/config/config.json: no such file or directory
	I0819 11:31:15.903242   17656 out.go:352] Setting JSON to true
	I0819 11:31:15.921312   17656 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":7242,"bootTime":1724085033,"procs":481,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0819 11:31:15.921386   17656 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0819 11:31:15.926530   17656 out.go:97] [download-only-927000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0819 11:31:15.926651   17656 notify.go:220] Checking for updates...
	W0819 11:31:15.926704   17656 preload.go:293] Failed to list preload files: open /Users/jenkins/minikube-integration/19423-17178/.minikube/cache/preloaded-tarball: no such file or directory
	I0819 11:31:15.930937   17656 out.go:169] MINIKUBE_LOCATION=19423
	I0819 11:31:15.942542   17656 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/19423-17178/kubeconfig
	I0819 11:31:15.946504   17656 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0819 11:31:15.950504   17656 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0819 11:31:15.953563   17656 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/19423-17178/.minikube
	W0819 11:31:15.959487   17656 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0819 11:31:15.959680   17656 driver.go:394] Setting default libvirt URI to qemu:///system
	I0819 11:31:15.963518   17656 out.go:97] Using the qemu2 driver based on user configuration
	I0819 11:31:15.963537   17656 start.go:297] selected driver: qemu2
	I0819 11:31:15.963551   17656 start.go:901] validating driver "qemu2" against <nil>
	I0819 11:31:15.963623   17656 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0819 11:31:15.966520   17656 out.go:169] Automatically selected the socket_vmnet network
	I0819 11:31:15.972768   17656 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0819 11:31:15.972882   17656 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0819 11:31:15.972953   17656 cni.go:84] Creating CNI manager for ""
	I0819 11:31:15.972972   17656 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0819 11:31:15.973029   17656 start.go:340] cluster config:
	{Name:download-only-927000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-927000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 11:31:15.977217   17656 iso.go:125] acquiring lock: {Name:mk19d2f9dcacbbe3e95275f970a96b2d84f09461 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 11:31:15.982040   17656 out.go:97] Downloading VM boot image ...
	I0819 11:31:15.982067   17656 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso.sha256 -> /Users/jenkins/minikube-integration/19423-17178/.minikube/cache/iso/arm64/minikube-v1.33.1-1723740674-19452-arm64.iso
	I0819 11:31:21.440537   17656 out.go:97] Starting "download-only-927000" primary control-plane node in "download-only-927000" cluster
	I0819 11:31:21.440557   17656 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0819 11:31:21.502017   17656 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0819 11:31:21.502037   17656 cache.go:56] Caching tarball of preloaded images
	I0819 11:31:21.502445   17656 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0819 11:31:21.507279   17656 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0819 11:31:21.507286   17656 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0819 11:31:21.595304   17656 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4?checksum=md5:1a3e8f9b29e6affec63d76d0d3000942 -> /Users/jenkins/minikube-integration/19423-17178/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0819 11:31:27.305607   17656 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0819 11:31:27.305771   17656 preload.go:254] verifying checksum of /Users/jenkins/minikube-integration/19423-17178/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0819 11:31:28.016452   17656 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0819 11:31:28.016651   17656 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19423-17178/.minikube/profiles/download-only-927000/config.json ...
	I0819 11:31:28.016667   17656 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19423-17178/.minikube/profiles/download-only-927000/config.json: {Name:mk1b90f843dc74d3542d212ada55937598e4262b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 11:31:28.017094   17656 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0819 11:31:28.017282   17656 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 -> /Users/jenkins/minikube-integration/19423-17178/.minikube/cache/darwin/arm64/v1.20.0/kubectl
	I0819 11:31:28.504505   17656 out.go:193] 
	W0819 11:31:28.510764   17656 out_reason.go:110] Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256: getter: &{Ctx:context.Background Src:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/19423-17178/.minikube/cache/darwin/arm64/v1.20.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x104a039a0 0x104a039a0 0x104a039a0 0x104a039a0 0x104a039a0 0x104a039a0 0x104a039a0] Decompressors:map[bz2:0x140003e1920 gz:0x140003e1928 tar:0x140003e18d0 tar.bz2:0x140003e18e0 tar.gz:0x140003e18f0 tar.xz:0x140003e1900 tar.zst:0x140003e1910 tbz2:0x140003e18e0 tgz:0x140003e18f0 txz:0x140003e1900 tzst:0x140003e1910 xz:0x140003e1930 zip:0x140003e1940 zst:0x140003e1938] Getters:map[file:0x1400176a610 http:0x140001722d0 https:0x14000172320] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404
	W0819 11:31:28.510807   17656 out_reason.go:110] 
	W0819 11:31:28.521708   17656 out.go:283] ╭───────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                           │
	│    If the above advice does not help, please let us know:                                 │
	│    https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                           │
	│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────╯
	I0819 11:31:28.526505   17656 out.go:193] 

** /stderr **
aaa_download_only_test.go:83: failed to download only. args: ["start" "-o=json" "--download-only" "-p" "download-only-927000" "--force" "--alsologtostderr" "--kubernetes-version=v1.20.0" "--container-runtime=docker" "--driver=qemu2" ""] exit status 40
--- FAIL: TestDownloadOnly/v1.20.0/json-events (12.68s)
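The exit-40 failure above reduces to a single HTTP 404: dl.k8s.io has no kubectl checksum file under release/v1.20.0/bin/darwin/arm64/, since v1.20.0 predates published darwin/arm64 kubectl binaries. A minimal, hypothetical Go sketch (not part of the minikube test suite) that probes the exact URL quoted in the INET_CACHE_KUBECTL error:

package main

import (
	"fmt"
	"net/http"
)

func main() {
	// Checksum URL copied verbatim from the error message above.
	const url = "https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256"
	resp, err := http.Head(url)
	if err != nil {
		fmt.Println("request failed:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println(url, "->", resp.Status) // expected here: 404 Not Found
}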

TestDownloadOnly/v1.20.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:175: expected the file for binary exist at "/Users/jenkins/minikube-integration/19423-17178/.minikube/cache/darwin/arm64/v1.20.0/kubectl" but got error stat /Users/jenkins/minikube-integration/19423-17178/.minikube/cache/darwin/arm64/v1.20.0/kubectl: no such file or directory
--- FAIL: TestDownloadOnly/v1.20.0/kubectl (0.00s)
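This subtest fails as a direct consequence of the json-events failure above: it only checks that the kubectl binary the earlier download should have cached exists on disk. A rough sketch of that existence check, assuming the assertion at aaa_download_only_test.go:175 boils down to an os.Stat call:

package main

import (
	"fmt"
	"os"
)

func main() {
	// Cache path copied from the failure message above; the file was never
	// written because the checksum download returned 404.
	path := "/Users/jenkins/minikube-integration/19423-17178/.minikube/cache/darwin/arm64/v1.20.0/kubectl"
	if _, err := os.Stat(path); err != nil {
		fmt.Printf("expected the file for binary to exist at %q but got error %v\n", path, err)
	}
}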

TestOffline (10.09s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-darwin-arm64 start -p offline-docker-875000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2 
aab_offline_test.go:55: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p offline-docker-875000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2 : exit status 80 (9.940616584s)

-- stdout --
	* [offline-docker-875000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19423
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19423-17178/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19423-17178/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "offline-docker-875000" primary control-plane node in "offline-docker-875000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "offline-docker-875000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0819 11:42:44.290142   19130 out.go:345] Setting OutFile to fd 1 ...
	I0819 11:42:44.290292   19130 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 11:42:44.290295   19130 out.go:358] Setting ErrFile to fd 2...
	I0819 11:42:44.290298   19130 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 11:42:44.290423   19130 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19423-17178/.minikube/bin
	I0819 11:42:44.291801   19130 out.go:352] Setting JSON to false
	I0819 11:42:44.309923   19130 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":7931,"bootTime":1724085033,"procs":505,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0819 11:42:44.310010   19130 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0819 11:42:44.313831   19130 out.go:177] * [offline-docker-875000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0819 11:42:44.321842   19130 out.go:177]   - MINIKUBE_LOCATION=19423
	I0819 11:42:44.321844   19130 notify.go:220] Checking for updates...
	I0819 11:42:44.328681   19130 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19423-17178/kubeconfig
	I0819 11:42:44.331789   19130 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0819 11:42:44.334780   19130 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0819 11:42:44.337849   19130 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19423-17178/.minikube
	I0819 11:42:44.340764   19130 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0819 11:42:44.344115   19130 config.go:182] Loaded profile config "multinode-587000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0819 11:42:44.344184   19130 driver.go:394] Setting default libvirt URI to qemu:///system
	I0819 11:42:44.348745   19130 out.go:177] * Using the qemu2 driver based on user configuration
	I0819 11:42:44.355808   19130 start.go:297] selected driver: qemu2
	I0819 11:42:44.355818   19130 start.go:901] validating driver "qemu2" against <nil>
	I0819 11:42:44.355825   19130 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0819 11:42:44.357752   19130 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0819 11:42:44.360692   19130 out.go:177] * Automatically selected the socket_vmnet network
	I0819 11:42:44.363816   19130 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0819 11:42:44.363833   19130 cni.go:84] Creating CNI manager for ""
	I0819 11:42:44.363841   19130 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0819 11:42:44.363847   19130 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0819 11:42:44.363877   19130 start.go:340] cluster config:
	{Name:offline-docker-875000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:offline-docker-875000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 11:42:44.367473   19130 iso.go:125] acquiring lock: {Name:mk19d2f9dcacbbe3e95275f970a96b2d84f09461 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 11:42:44.372726   19130 out.go:177] * Starting "offline-docker-875000" primary control-plane node in "offline-docker-875000" cluster
	I0819 11:42:44.376758   19130 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0819 11:42:44.376787   19130 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19423-17178/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0819 11:42:44.376794   19130 cache.go:56] Caching tarball of preloaded images
	I0819 11:42:44.376869   19130 preload.go:172] Found /Users/jenkins/minikube-integration/19423-17178/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0819 11:42:44.376875   19130 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0819 11:42:44.376940   19130 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19423-17178/.minikube/profiles/offline-docker-875000/config.json ...
	I0819 11:42:44.376951   19130 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19423-17178/.minikube/profiles/offline-docker-875000/config.json: {Name:mk225eeef5d333fa2fadff079f5f1b632bef7ee1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 11:42:44.377248   19130 start.go:360] acquireMachinesLock for offline-docker-875000: {Name:mk51682daf9d132f21e1aba1b32ed96e7f05425f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 11:42:44.377282   19130 start.go:364] duration metric: took 25.083µs to acquireMachinesLock for "offline-docker-875000"
	I0819 11:42:44.377293   19130 start.go:93] Provisioning new machine with config: &{Name:offline-docker-875000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:offline-docker-875000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0819 11:42:44.377321   19130 start.go:125] createHost starting for "" (driver="qemu2")
	I0819 11:42:44.381718   19130 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0819 11:42:44.397742   19130 start.go:159] libmachine.API.Create for "offline-docker-875000" (driver="qemu2")
	I0819 11:42:44.397793   19130 client.go:168] LocalClient.Create starting
	I0819 11:42:44.397871   19130 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19423-17178/.minikube/certs/ca.pem
	I0819 11:42:44.397901   19130 main.go:141] libmachine: Decoding PEM data...
	I0819 11:42:44.397909   19130 main.go:141] libmachine: Parsing certificate...
	I0819 11:42:44.397956   19130 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19423-17178/.minikube/certs/cert.pem
	I0819 11:42:44.397979   19130 main.go:141] libmachine: Decoding PEM data...
	I0819 11:42:44.397993   19130 main.go:141] libmachine: Parsing certificate...
	I0819 11:42:44.398367   19130 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19423-17178/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19423-17178/.minikube/cache/iso/arm64/minikube-v1.33.1-1723740674-19452-arm64.iso...
	I0819 11:42:44.559082   19130 main.go:141] libmachine: Creating SSH key...
	I0819 11:42:44.700477   19130 main.go:141] libmachine: Creating Disk image...
	I0819 11:42:44.700486   19130 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0819 11:42:44.700706   19130 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/offline-docker-875000/disk.qcow2.raw /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/offline-docker-875000/disk.qcow2
	I0819 11:42:44.718096   19130 main.go:141] libmachine: STDOUT: 
	I0819 11:42:44.718122   19130 main.go:141] libmachine: STDERR: 
	I0819 11:42:44.718192   19130 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/offline-docker-875000/disk.qcow2 +20000M
	I0819 11:42:44.727023   19130 main.go:141] libmachine: STDOUT: Image resized.
	
	I0819 11:42:44.727044   19130 main.go:141] libmachine: STDERR: 
	I0819 11:42:44.727064   19130 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/offline-docker-875000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/offline-docker-875000/disk.qcow2
	I0819 11:42:44.727068   19130 main.go:141] libmachine: Starting QEMU VM...
	I0819 11:42:44.727083   19130 qemu.go:418] Using hvf for hardware acceleration
	I0819 11:42:44.727133   19130 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/offline-docker-875000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19423-17178/.minikube/machines/offline-docker-875000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/offline-docker-875000/qemu.pid -device virtio-net-pci,netdev=net0,mac=76:ce:ee:33:31:57 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/offline-docker-875000/disk.qcow2
	I0819 11:42:44.729026   19130 main.go:141] libmachine: STDOUT: 
	I0819 11:42:44.729046   19130 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0819 11:42:44.729072   19130 client.go:171] duration metric: took 331.275166ms to LocalClient.Create
	I0819 11:42:46.731262   19130 start.go:128] duration metric: took 2.353934417s to createHost
	I0819 11:42:46.731292   19130 start.go:83] releasing machines lock for "offline-docker-875000", held for 2.354016416s
	W0819 11:42:46.731309   19130 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0819 11:42:46.740166   19130 out.go:177] * Deleting "offline-docker-875000" in qemu2 ...
	W0819 11:42:46.753037   19130 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0819 11:42:46.753050   19130 start.go:729] Will try again in 5 seconds ...
	I0819 11:42:51.755113   19130 start.go:360] acquireMachinesLock for offline-docker-875000: {Name:mk51682daf9d132f21e1aba1b32ed96e7f05425f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 11:42:51.755244   19130 start.go:364] duration metric: took 101.25µs to acquireMachinesLock for "offline-docker-875000"
	I0819 11:42:51.755277   19130 start.go:93] Provisioning new machine with config: &{Name:offline-docker-875000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:offline-docker-875000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0819 11:42:51.755337   19130 start.go:125] createHost starting for "" (driver="qemu2")
	I0819 11:42:51.771794   19130 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0819 11:42:51.787872   19130 start.go:159] libmachine.API.Create for "offline-docker-875000" (driver="qemu2")
	I0819 11:42:51.787900   19130 client.go:168] LocalClient.Create starting
	I0819 11:42:51.787968   19130 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19423-17178/.minikube/certs/ca.pem
	I0819 11:42:51.788010   19130 main.go:141] libmachine: Decoding PEM data...
	I0819 11:42:51.788019   19130 main.go:141] libmachine: Parsing certificate...
	I0819 11:42:51.788053   19130 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19423-17178/.minikube/certs/cert.pem
	I0819 11:42:51.788075   19130 main.go:141] libmachine: Decoding PEM data...
	I0819 11:42:51.788080   19130 main.go:141] libmachine: Parsing certificate...
	I0819 11:42:51.788369   19130 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19423-17178/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19423-17178/.minikube/cache/iso/arm64/minikube-v1.33.1-1723740674-19452-arm64.iso...
	I0819 11:42:51.945334   19130 main.go:141] libmachine: Creating SSH key...
	I0819 11:42:52.134377   19130 main.go:141] libmachine: Creating Disk image...
	I0819 11:42:52.134387   19130 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0819 11:42:52.134609   19130 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/offline-docker-875000/disk.qcow2.raw /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/offline-docker-875000/disk.qcow2
	I0819 11:42:52.144968   19130 main.go:141] libmachine: STDOUT: 
	I0819 11:42:52.144994   19130 main.go:141] libmachine: STDERR: 
	I0819 11:42:52.145079   19130 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/offline-docker-875000/disk.qcow2 +20000M
	I0819 11:42:52.153954   19130 main.go:141] libmachine: STDOUT: Image resized.
	
	I0819 11:42:52.153973   19130 main.go:141] libmachine: STDERR: 
	I0819 11:42:52.153986   19130 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/offline-docker-875000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/offline-docker-875000/disk.qcow2
	I0819 11:42:52.153993   19130 main.go:141] libmachine: Starting QEMU VM...
	I0819 11:42:52.154014   19130 qemu.go:418] Using hvf for hardware acceleration
	I0819 11:42:52.154234   19130 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/offline-docker-875000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19423-17178/.minikube/machines/offline-docker-875000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/offline-docker-875000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ee:36:d0:a6:98:ca -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/offline-docker-875000/disk.qcow2
	I0819 11:42:52.156412   19130 main.go:141] libmachine: STDOUT: 
	I0819 11:42:52.156428   19130 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0819 11:42:52.156439   19130 client.go:171] duration metric: took 368.537333ms to LocalClient.Create
	I0819 11:42:54.158638   19130 start.go:128] duration metric: took 2.403280375s to createHost
	I0819 11:42:54.158742   19130 start.go:83] releasing machines lock for "offline-docker-875000", held for 2.403498292s
	W0819 11:42:54.159072   19130 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p offline-docker-875000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p offline-docker-875000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0819 11:42:54.169702   19130 out.go:201] 
	W0819 11:42:54.173751   19130 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0819 11:42:54.173910   19130 out.go:270] * 
	* 
	W0819 11:42:54.176976   19130 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0819 11:42:54.185714   19130 out.go:201] 

** /stderr **
aab_offline_test.go:58: out/minikube-darwin-arm64 start -p offline-docker-875000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2  failed: exit status 80
panic.go:626: *** TestOffline FAILED at 2024-08-19 11:42:54.201575 -0700 PDT m=+698.386826876
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p offline-docker-875000 -n offline-docker-875000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p offline-docker-875000 -n offline-docker-875000: exit status 7 (67.541ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "offline-docker-875000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "offline-docker-875000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p offline-docker-875000
--- FAIL: TestOffline (10.09s)
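Unlike the download failures, TestOffline (and most of the other roughly ten-second failures in the table) dies before a VM ever boots: socket_vmnet_client cannot reach the socket_vmnet daemon at /var/run/socket_vmnet. A minimal, hypothetical Go sketch for checking that daemon from the host, assuming the default socket path shown in the log:

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Dial the same unix socket the qemu2 driver hands to socket_vmnet_client.
	conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
	if err != nil {
		// "connection refused" here matches the log: socket_vmnet is not
		// running (or not listening) on the Jenkins host.
		fmt.Println("socket_vmnet unreachable:", err)
		return
	}
	conn.Close()
	fmt.Println("socket_vmnet is accepting connections")
}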

TestAddons/Setup (10.91s)

=== RUN   TestAddons/Setup
addons_test.go:110: (dbg) Run:  out/minikube-darwin-arm64 start -p addons-698000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=qemu2  --addons=ingress --addons=ingress-dns
addons_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p addons-698000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=qemu2  --addons=ingress --addons=ingress-dns: exit status 80 (10.908653583s)

-- stdout --
	* [addons-698000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19423
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19423-17178/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19423-17178/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "addons-698000" primary control-plane node in "addons-698000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "addons-698000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0819 11:31:37.261227   17771 out.go:345] Setting OutFile to fd 1 ...
	I0819 11:31:37.261347   17771 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 11:31:37.261350   17771 out.go:358] Setting ErrFile to fd 2...
	I0819 11:31:37.261353   17771 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 11:31:37.261469   17771 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19423-17178/.minikube/bin
	I0819 11:31:37.262718   17771 out.go:352] Setting JSON to false
	I0819 11:31:37.280132   17771 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":7264,"bootTime":1724085033,"procs":517,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0819 11:31:37.280208   17771 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0819 11:31:37.286439   17771 out.go:177] * [addons-698000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0819 11:31:37.294962   17771 notify.go:220] Checking for updates...
	I0819 11:31:37.302055   17771 out.go:177]   - MINIKUBE_LOCATION=19423
	I0819 11:31:37.311778   17771 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19423-17178/kubeconfig
	I0819 11:31:37.319501   17771 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0819 11:31:37.327420   17771 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0819 11:31:37.336385   17771 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19423-17178/.minikube
	I0819 11:31:37.343434   17771 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0819 11:31:37.346611   17771 driver.go:394] Setting default libvirt URI to qemu:///system
	I0819 11:31:37.351151   17771 out.go:177] * Using the qemu2 driver based on user configuration
	I0819 11:31:37.359800   17771 start.go:297] selected driver: qemu2
	I0819 11:31:37.359807   17771 start.go:901] validating driver "qemu2" against <nil>
	I0819 11:31:37.359813   17771 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0819 11:31:37.362572   17771 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0819 11:31:37.367874   17771 out.go:177] * Automatically selected the socket_vmnet network
	I0819 11:31:37.372290   17771 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0819 11:31:37.372312   17771 cni.go:84] Creating CNI manager for ""
	I0819 11:31:37.372321   17771 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0819 11:31:37.372331   17771 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0819 11:31:37.372378   17771 start.go:340] cluster config:
	{Name:addons-698000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-698000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:
docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_c
lient SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 11:31:37.376673   17771 iso.go:125] acquiring lock: {Name:mk19d2f9dcacbbe3e95275f970a96b2d84f09461 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 11:31:37.383459   17771 out.go:177] * Starting "addons-698000" primary control-plane node in "addons-698000" cluster
	I0819 11:31:37.389244   17771 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0819 11:31:37.389269   17771 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19423-17178/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0819 11:31:37.389282   17771 cache.go:56] Caching tarball of preloaded images
	I0819 11:31:37.389384   17771 preload.go:172] Found /Users/jenkins/minikube-integration/19423-17178/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0819 11:31:37.389392   17771 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0819 11:31:37.389675   17771 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19423-17178/.minikube/profiles/addons-698000/config.json ...
	I0819 11:31:37.389689   17771 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19423-17178/.minikube/profiles/addons-698000/config.json: {Name:mk44856b67fdb5bc622762dbf50aa26eaaa9ab72 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 11:31:37.389995   17771 start.go:360] acquireMachinesLock for addons-698000: {Name:mk51682daf9d132f21e1aba1b32ed96e7f05425f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 11:31:37.390083   17771 start.go:364] duration metric: took 80.667µs to acquireMachinesLock for "addons-698000"
	I0819 11:31:37.390101   17771 start.go:93] Provisioning new machine with config: &{Name:addons-698000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kube
rnetesVersion:v1.31.0 ClusterName:addons-698000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Mou
ntPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0819 11:31:37.390141   17771 start.go:125] createHost starting for "" (driver="qemu2")
	I0819 11:31:37.399774   17771 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0819 11:31:37.423621   17771 start.go:159] libmachine.API.Create for "addons-698000" (driver="qemu2")
	I0819 11:31:37.423657   17771 client.go:168] LocalClient.Create starting
	I0819 11:31:37.423818   17771 main.go:141] libmachine: Creating CA: /Users/jenkins/minikube-integration/19423-17178/.minikube/certs/ca.pem
	I0819 11:31:37.770303   17771 main.go:141] libmachine: Creating client certificate: /Users/jenkins/minikube-integration/19423-17178/.minikube/certs/cert.pem
	I0819 11:31:37.819637   17771 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19423-17178/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19423-17178/.minikube/cache/iso/arm64/minikube-v1.33.1-1723740674-19452-arm64.iso...
	I0819 11:31:38.517917   17771 main.go:141] libmachine: Creating SSH key...
	I0819 11:31:38.582038   17771 main.go:141] libmachine: Creating Disk image...
	I0819 11:31:38.582045   17771 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0819 11:31:38.582687   17771 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/addons-698000/disk.qcow2.raw /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/addons-698000/disk.qcow2
	I0819 11:31:38.592176   17771 main.go:141] libmachine: STDOUT: 
	I0819 11:31:38.592197   17771 main.go:141] libmachine: STDERR: 
	I0819 11:31:38.592242   17771 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/addons-698000/disk.qcow2 +20000M
	I0819 11:31:38.600170   17771 main.go:141] libmachine: STDOUT: Image resized.
	
	I0819 11:31:38.600183   17771 main.go:141] libmachine: STDERR: 
	I0819 11:31:38.600195   17771 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/addons-698000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/addons-698000/disk.qcow2
	I0819 11:31:38.600199   17771 main.go:141] libmachine: Starting QEMU VM...
	I0819 11:31:38.600237   17771 qemu.go:418] Using hvf for hardware acceleration
	I0819 11:31:38.600265   17771 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/addons-698000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19423-17178/.minikube/machines/addons-698000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/addons-698000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ce:05:d0:bc:bc:25 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/addons-698000/disk.qcow2
	I0819 11:31:38.601801   17771 main.go:141] libmachine: STDOUT: 
	I0819 11:31:38.601818   17771 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0819 11:31:38.601836   17771 client.go:171] duration metric: took 1.178179792s to LocalClient.Create
	I0819 11:31:40.604000   17771 start.go:128] duration metric: took 3.21385125s to createHost
	I0819 11:31:40.604054   17771 start.go:83] releasing machines lock for "addons-698000", held for 3.213970875s
	W0819 11:31:40.604111   17771 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0819 11:31:40.644528   17771 out.go:177] * Deleting "addons-698000" in qemu2 ...
	W0819 11:31:40.692045   17771 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0819 11:31:40.692089   17771 start.go:729] Will try again in 5 seconds ...
	I0819 11:31:45.694282   17771 start.go:360] acquireMachinesLock for addons-698000: {Name:mk51682daf9d132f21e1aba1b32ed96e7f05425f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 11:31:45.694726   17771 start.go:364] duration metric: took 359.375µs to acquireMachinesLock for "addons-698000"
	I0819 11:31:45.694859   17771 start.go:93] Provisioning new machine with config: &{Name:addons-698000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kube
rnetesVersion:v1.31.0 ClusterName:addons-698000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Mou
ntPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0819 11:31:45.695201   17771 start.go:125] createHost starting for "" (driver="qemu2")
	I0819 11:31:45.731440   17771 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0819 11:31:45.779661   17771 start.go:159] libmachine.API.Create for "addons-698000" (driver="qemu2")
	I0819 11:31:45.779706   17771 client.go:168] LocalClient.Create starting
	I0819 11:31:45.779861   17771 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19423-17178/.minikube/certs/ca.pem
	I0819 11:31:45.779927   17771 main.go:141] libmachine: Decoding PEM data...
	I0819 11:31:45.779958   17771 main.go:141] libmachine: Parsing certificate...
	I0819 11:31:45.780054   17771 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19423-17178/.minikube/certs/cert.pem
	I0819 11:31:45.780102   17771 main.go:141] libmachine: Decoding PEM data...
	I0819 11:31:45.780122   17771 main.go:141] libmachine: Parsing certificate...
	I0819 11:31:45.780631   17771 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19423-17178/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19423-17178/.minikube/cache/iso/arm64/minikube-v1.33.1-1723740674-19452-arm64.iso...
	I0819 11:31:45.981219   17771 main.go:141] libmachine: Creating SSH key...
	I0819 11:31:46.062519   17771 main.go:141] libmachine: Creating Disk image...
	I0819 11:31:46.062524   17771 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0819 11:31:46.062750   17771 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/addons-698000/disk.qcow2.raw /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/addons-698000/disk.qcow2
	I0819 11:31:46.071894   17771 main.go:141] libmachine: STDOUT: 
	I0819 11:31:46.071919   17771 main.go:141] libmachine: STDERR: 
	I0819 11:31:46.071977   17771 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/addons-698000/disk.qcow2 +20000M
	I0819 11:31:46.079961   17771 main.go:141] libmachine: STDOUT: Image resized.
	
	I0819 11:31:46.079983   17771 main.go:141] libmachine: STDERR: 
	I0819 11:31:46.079997   17771 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/addons-698000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/addons-698000/disk.qcow2
	I0819 11:31:46.080001   17771 main.go:141] libmachine: Starting QEMU VM...
	I0819 11:31:46.080012   17771 qemu.go:418] Using hvf for hardware acceleration
	I0819 11:31:46.080042   17771 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/addons-698000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19423-17178/.minikube/machines/addons-698000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/addons-698000/qemu.pid -device virtio-net-pci,netdev=net0,mac=e2:eb:ac:c2:a3:29 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/addons-698000/disk.qcow2
	I0819 11:31:46.081613   17771 main.go:141] libmachine: STDOUT: 
	I0819 11:31:46.081628   17771 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0819 11:31:46.081642   17771 client.go:171] duration metric: took 301.92975ms to LocalClient.Create
	I0819 11:31:48.083753   17771 start.go:128] duration metric: took 2.388513959s to createHost
	I0819 11:31:48.083879   17771 start.go:83] releasing machines lock for "addons-698000", held for 2.389079292s
	W0819 11:31:48.084193   17771 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p addons-698000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p addons-698000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0819 11:31:48.112330   17771 out.go:201] 
	W0819 11:31:48.118511   17771 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0819 11:31:48.118583   17771 out.go:270] * 
	* 
	W0819 11:31:48.120281   17771 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0819 11:31:48.127439   17771 out.go:201] 

** /stderr **
addons_test.go:112: out/minikube-darwin-arm64 start -p addons-698000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=qemu2  --addons=ingress --addons=ingress-dns failed: exit status 80
--- FAIL: TestAddons/Setup (10.91s)
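
The stderr trace above shows where the start sequence breaks: the qemu-img convert and resize steps complete cleanly, and the failure occurs only when libmachine launches QEMU through socket_vmnet_client, which connects to the daemon's UNIX socket and passes the connected descriptor on to QEMU (the fd=3 in the -netdev argument). That connection step can be exercised in isolation with the same client binary and socket path from the log; "true" below is a hypothetical stand-in for the real QEMU command line:

	# If the daemon answers, socket_vmnet_client runs the given command with
	# the connected socket as an inherited file descriptor; if not, it prints
	# the same Failed to connect to "/var/run/socket_vmnet": Connection refused.
	/opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet true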

TestCertOptions (10.2s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-options-587000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 
cert_options_test.go:49: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-options-587000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 : exit status 80 (9.936724708s)

-- stdout --
	* [cert-options-587000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19423
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19423-17178/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19423-17178/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "cert-options-587000" primary control-plane node in "cert-options-587000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "cert-options-587000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-options-587000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:51: failed to start minikube with args: "out/minikube-darwin-arm64 start -p cert-options-587000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 " : exit status 80
cert_options_test.go:60: (dbg) Run:  out/minikube-darwin-arm64 -p cert-options-587000 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:60: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p cert-options-587000 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt": exit status 83 (78.254083ms)

-- stdout --
	* The control-plane node cert-options-587000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p cert-options-587000"

-- /stdout --
cert_options_test.go:62: failed to read apiserver cert inside minikube. args "out/minikube-darwin-arm64 -p cert-options-587000 ssh \"openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt\"": exit status 83
cert_options_test.go:69: apiserver cert does not include 127.0.0.1 in SAN.
cert_options_test.go:69: apiserver cert does not include 192.168.15.15 in SAN.
cert_options_test.go:69: apiserver cert does not include localhost in SAN.
cert_options_test.go:69: apiserver cert does not include www.google.com in SAN.
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-587000 config view
cert_options_test.go:93: Kubeconfig apiserver server port incorrect. Output of 'kubectl config view' = "\n-- stdout --\n\tapiVersion: v1\n\tclusters: null\n\tcontexts: null\n\tcurrent-context: \"\"\n\tkind: Config\n\tpreferences: {}\n\tusers: null\n\n-- /stdout --"
cert_options_test.go:100: (dbg) Run:  out/minikube-darwin-arm64 ssh -p cert-options-587000 -- "sudo cat /etc/kubernetes/admin.conf"
cert_options_test.go:100: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p cert-options-587000 -- "sudo cat /etc/kubernetes/admin.conf": exit status 83 (44.047084ms)

-- stdout --
	* The control-plane node cert-options-587000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p cert-options-587000"

-- /stdout --
cert_options_test.go:102: failed to SSH to minikube with args: "out/minikube-darwin-arm64 ssh -p cert-options-587000 -- \"sudo cat /etc/kubernetes/admin.conf\"" : exit status 83
cert_options_test.go:106: Internal minikube kubeconfig (admin.conf) does not contain the right api port.
-- stdout --
	* The control-plane node cert-options-587000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p cert-options-587000"

-- /stdout --
cert_options_test.go:109: *** TestCertOptions FAILED at 2024-08-19 11:43:26.245272 -0700 PDT m=+730.430675835
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p cert-options-587000 -n cert-options-587000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p cert-options-587000 -n cert-options-587000: exit status 7 (30.681167ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "cert-options-587000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "cert-options-587000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cert-options-587000
--- FAIL: TestCertOptions (10.20s)
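
Because the VM never booted, everything after the initial start in TestCertOptions fails in cascade: the ssh probes exit 83 (host stopped), no certificate can be read, and the SAN checks for 127.0.0.1, 192.168.15.15, localhost, and www.google.com all report the entries as missing. On a healthy cluster the verification reduces to the test's own openssl invocation; a sketch of the manual equivalent, with a grep filter added here for readability:

	# Show the Subject Alternative Name block of the apiserver certificate.
	out/minikube-darwin-arm64 -p cert-options-587000 ssh \
	  "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt" \
	  | grep -A1 "Subject Alternative Name"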

TestCertExpiration (195.35s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-expiration-386000 --memory=2048 --cert-expiration=3m --driver=qemu2 
cert_options_test.go:123: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-expiration-386000 --memory=2048 --cert-expiration=3m --driver=qemu2 : exit status 80 (9.973365458s)

-- stdout --
	* [cert-expiration-386000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19423
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19423-17178/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19423-17178/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "cert-expiration-386000" primary control-plane node in "cert-expiration-386000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "cert-expiration-386000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-386000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:125: failed to start minikube with args: "out/minikube-darwin-arm64 start -p cert-expiration-386000 --memory=2048 --cert-expiration=3m --driver=qemu2 " : exit status 80
cert_options_test.go:131: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-expiration-386000 --memory=2048 --cert-expiration=8760h --driver=qemu2 
cert_options_test.go:131: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-expiration-386000 --memory=2048 --cert-expiration=8760h --driver=qemu2 : exit status 80 (5.234433333s)

-- stdout --
	* [cert-expiration-386000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19423
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19423-17178/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19423-17178/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "cert-expiration-386000" primary control-plane node in "cert-expiration-386000" cluster
	* Restarting existing qemu2 VM for "cert-expiration-386000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "cert-expiration-386000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-386000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:133: failed to start minikube after cert expiration: "out/minikube-darwin-arm64 start -p cert-expiration-386000 --memory=2048 --cert-expiration=8760h --driver=qemu2 " : exit status 80
cert_options_test.go:136: minikube start output did not warn about expired certs: 
-- stdout --
	* [cert-expiration-386000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19423
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19423-17178/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19423-17178/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "cert-expiration-386000" primary control-plane node in "cert-expiration-386000" cluster
	* Restarting existing qemu2 VM for "cert-expiration-386000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "cert-expiration-386000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-386000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:138: *** TestCertExpiration FAILED at 2024-08-19 11:46:26.29074 -0700 PDT m=+910.476999751
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p cert-expiration-386000 -n cert-expiration-386000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p cert-expiration-386000 -n cert-expiration-386000: exit status 7 (63.762583ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "cert-expiration-386000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "cert-expiration-386000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cert-expiration-386000
--- FAIL: TestCertExpiration (195.35s)
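
Note the 195s duration against the roughly 10s of the other failures: TestCertExpiration deliberately waits out the 3-minute certificate lifetime between its two start attempts, so most of the wall time here is the expiration wait rather than the QEMU retries. A sketch of the flow the test exercises, reusing the exact flags from the log; on a working socket_vmnet setup the second start would be expected to warn about expired certificates instead of exiting with GUEST_PROVISION:

	# First start issues cluster certificates that expire after 3 minutes.
	out/minikube-darwin-arm64 start -p cert-expiration-386000 --memory=2048 --cert-expiration=3m --driver=qemu2
	sleep 180   # wait out the 3m certificate lifetime
	# Second start should detect the expired certificates and regenerate them.
	out/minikube-darwin-arm64 start -p cert-expiration-386000 --memory=2048 --cert-expiration=8760h --driver=qemu2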

TestDockerFlags (10.24s)

=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags

=== CONT  TestDockerFlags
docker_test.go:51: (dbg) Run:  out/minikube-darwin-arm64 start -p docker-flags-446000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 
docker_test.go:51: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p docker-flags-446000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (10.016690959s)

-- stdout --
	* [docker-flags-446000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19423
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19423-17178/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19423-17178/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "docker-flags-446000" primary control-plane node in "docker-flags-446000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "docker-flags-446000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0819 11:43:05.938677   19322 out.go:345] Setting OutFile to fd 1 ...
	I0819 11:43:05.938794   19322 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 11:43:05.938798   19322 out.go:358] Setting ErrFile to fd 2...
	I0819 11:43:05.938807   19322 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 11:43:05.938934   19322 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19423-17178/.minikube/bin
	I0819 11:43:05.939980   19322 out.go:352] Setting JSON to false
	I0819 11:43:05.956172   19322 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":7952,"bootTime":1724085033,"procs":507,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0819 11:43:05.956242   19322 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0819 11:43:05.961409   19322 out.go:177] * [docker-flags-446000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0819 11:43:05.968205   19322 out.go:177]   - MINIKUBE_LOCATION=19423
	I0819 11:43:05.968265   19322 notify.go:220] Checking for updates...
	I0819 11:43:05.976377   19322 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19423-17178/kubeconfig
	I0819 11:43:05.977777   19322 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0819 11:43:05.980400   19322 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0819 11:43:05.983378   19322 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19423-17178/.minikube
	I0819 11:43:05.986348   19322 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0819 11:43:05.989731   19322 config.go:182] Loaded profile config "force-systemd-flag-995000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0819 11:43:05.989803   19322 config.go:182] Loaded profile config "multinode-587000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0819 11:43:05.989860   19322 driver.go:394] Setting default libvirt URI to qemu:///system
	I0819 11:43:05.994308   19322 out.go:177] * Using the qemu2 driver based on user configuration
	I0819 11:43:06.001282   19322 start.go:297] selected driver: qemu2
	I0819 11:43:06.001290   19322 start.go:901] validating driver "qemu2" against <nil>
	I0819 11:43:06.001297   19322 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0819 11:43:06.003477   19322 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0819 11:43:06.006445   19322 out.go:177] * Automatically selected the socket_vmnet network
	I0819 11:43:06.009494   19322 start_flags.go:942] Waiting for no components: map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false]
	I0819 11:43:06.009554   19322 cni.go:84] Creating CNI manager for ""
	I0819 11:43:06.009562   19322 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0819 11:43:06.009565   19322 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0819 11:43:06.009605   19322 start.go:340] cluster config:
	{Name:docker-flags-446000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:docker-flags-446000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[]
DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMn
etClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 11:43:06.013200   19322 iso.go:125] acquiring lock: {Name:mk19d2f9dcacbbe3e95275f970a96b2d84f09461 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 11:43:06.021336   19322 out.go:177] * Starting "docker-flags-446000" primary control-plane node in "docker-flags-446000" cluster
	I0819 11:43:06.024351   19322 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0819 11:43:06.024372   19322 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19423-17178/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0819 11:43:06.024385   19322 cache.go:56] Caching tarball of preloaded images
	I0819 11:43:06.024461   19322 preload.go:172] Found /Users/jenkins/minikube-integration/19423-17178/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0819 11:43:06.024467   19322 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0819 11:43:06.024557   19322 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19423-17178/.minikube/profiles/docker-flags-446000/config.json ...
	I0819 11:43:06.024569   19322 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19423-17178/.minikube/profiles/docker-flags-446000/config.json: {Name:mkba2cde34d8bedfef8546741375080b188fe72c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 11:43:06.024857   19322 start.go:360] acquireMachinesLock for docker-flags-446000: {Name:mk51682daf9d132f21e1aba1b32ed96e7f05425f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 11:43:06.024900   19322 start.go:364] duration metric: took 29.875µs to acquireMachinesLock for "docker-flags-446000"
	I0819 11:43:06.024912   19322 start.go:93] Provisioning new machine with config: &{Name:docker-flags-446000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey
: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:docker-flags-446000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:dock
er MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0819 11:43:06.024940   19322 start.go:125] createHost starting for "" (driver="qemu2")
	I0819 11:43:06.032235   19322 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0819 11:43:06.050131   19322 start.go:159] libmachine.API.Create for "docker-flags-446000" (driver="qemu2")
	I0819 11:43:06.050152   19322 client.go:168] LocalClient.Create starting
	I0819 11:43:06.050202   19322 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19423-17178/.minikube/certs/ca.pem
	I0819 11:43:06.050231   19322 main.go:141] libmachine: Decoding PEM data...
	I0819 11:43:06.050240   19322 main.go:141] libmachine: Parsing certificate...
	I0819 11:43:06.050274   19322 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19423-17178/.minikube/certs/cert.pem
	I0819 11:43:06.050304   19322 main.go:141] libmachine: Decoding PEM data...
	I0819 11:43:06.050311   19322 main.go:141] libmachine: Parsing certificate...
	I0819 11:43:06.050659   19322 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19423-17178/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19423-17178/.minikube/cache/iso/arm64/minikube-v1.33.1-1723740674-19452-arm64.iso...
	I0819 11:43:06.209440   19322 main.go:141] libmachine: Creating SSH key...
	I0819 11:43:06.377354   19322 main.go:141] libmachine: Creating Disk image...
	I0819 11:43:06.377365   19322 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0819 11:43:06.377569   19322 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/docker-flags-446000/disk.qcow2.raw /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/docker-flags-446000/disk.qcow2
	I0819 11:43:06.387084   19322 main.go:141] libmachine: STDOUT: 
	I0819 11:43:06.387105   19322 main.go:141] libmachine: STDERR: 
	I0819 11:43:06.387147   19322 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/docker-flags-446000/disk.qcow2 +20000M
	I0819 11:43:06.395008   19322 main.go:141] libmachine: STDOUT: Image resized.
	
	I0819 11:43:06.395023   19322 main.go:141] libmachine: STDERR: 
	I0819 11:43:06.395033   19322 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/docker-flags-446000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/docker-flags-446000/disk.qcow2
	I0819 11:43:06.395039   19322 main.go:141] libmachine: Starting QEMU VM...
	I0819 11:43:06.395054   19322 qemu.go:418] Using hvf for hardware acceleration
	I0819 11:43:06.395088   19322 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/docker-flags-446000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19423-17178/.minikube/machines/docker-flags-446000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/docker-flags-446000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ea:7c:da:56:19:17 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/docker-flags-446000/disk.qcow2
	I0819 11:43:06.396668   19322 main.go:141] libmachine: STDOUT: 
	I0819 11:43:06.396693   19322 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0819 11:43:06.396710   19322 client.go:171] duration metric: took 346.556666ms to LocalClient.Create
	I0819 11:43:08.398874   19322 start.go:128] duration metric: took 2.373928541s to createHost
	I0819 11:43:08.398928   19322 start.go:83] releasing machines lock for "docker-flags-446000", held for 2.374029083s
	W0819 11:43:08.399015   19322 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0819 11:43:08.417128   19322 out.go:177] * Deleting "docker-flags-446000" in qemu2 ...
	W0819 11:43:08.439535   19322 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0819 11:43:08.439553   19322 start.go:729] Will try again in 5 seconds ...
	I0819 11:43:13.441751   19322 start.go:360] acquireMachinesLock for docker-flags-446000: {Name:mk51682daf9d132f21e1aba1b32ed96e7f05425f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 11:43:13.523208   19322 start.go:364] duration metric: took 81.32025ms to acquireMachinesLock for "docker-flags-446000"
	I0819 11:43:13.523350   19322 start.go:93] Provisioning new machine with config: &{Name:docker-flags-446000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:docker-flags-446000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0819 11:43:13.523665   19322 start.go:125] createHost starting for "" (driver="qemu2")
	I0819 11:43:13.532426   19322 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0819 11:43:13.580786   19322 start.go:159] libmachine.API.Create for "docker-flags-446000" (driver="qemu2")
	I0819 11:43:13.580840   19322 client.go:168] LocalClient.Create starting
	I0819 11:43:13.580976   19322 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19423-17178/.minikube/certs/ca.pem
	I0819 11:43:13.581034   19322 main.go:141] libmachine: Decoding PEM data...
	I0819 11:43:13.581049   19322 main.go:141] libmachine: Parsing certificate...
	I0819 11:43:13.581137   19322 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19423-17178/.minikube/certs/cert.pem
	I0819 11:43:13.581183   19322 main.go:141] libmachine: Decoding PEM data...
	I0819 11:43:13.581198   19322 main.go:141] libmachine: Parsing certificate...
	I0819 11:43:13.581851   19322 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19423-17178/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19423-17178/.minikube/cache/iso/arm64/minikube-v1.33.1-1723740674-19452-arm64.iso...
	I0819 11:43:13.776940   19322 main.go:141] libmachine: Creating SSH key...
	I0819 11:43:13.859420   19322 main.go:141] libmachine: Creating Disk image...
	I0819 11:43:13.859425   19322 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0819 11:43:13.859587   19322 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/docker-flags-446000/disk.qcow2.raw /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/docker-flags-446000/disk.qcow2
	I0819 11:43:13.868762   19322 main.go:141] libmachine: STDOUT: 
	I0819 11:43:13.868781   19322 main.go:141] libmachine: STDERR: 
	I0819 11:43:13.868835   19322 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/docker-flags-446000/disk.qcow2 +20000M
	I0819 11:43:13.876569   19322 main.go:141] libmachine: STDOUT: Image resized.
	
	I0819 11:43:13.876584   19322 main.go:141] libmachine: STDERR: 
	I0819 11:43:13.876593   19322 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/docker-flags-446000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/docker-flags-446000/disk.qcow2
	I0819 11:43:13.876598   19322 main.go:141] libmachine: Starting QEMU VM...
	I0819 11:43:13.876609   19322 qemu.go:418] Using hvf for hardware acceleration
	I0819 11:43:13.876641   19322 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/docker-flags-446000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19423-17178/.minikube/machines/docker-flags-446000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/docker-flags-446000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ce:51:16:f9:80:fc -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/docker-flags-446000/disk.qcow2
	I0819 11:43:13.878214   19322 main.go:141] libmachine: STDOUT: 
	I0819 11:43:13.878242   19322 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0819 11:43:13.878255   19322 client.go:171] duration metric: took 297.411042ms to LocalClient.Create
	I0819 11:43:15.880466   19322 start.go:128] duration metric: took 2.356758667s to createHost
	I0819 11:43:15.880510   19322 start.go:83] releasing machines lock for "docker-flags-446000", held for 2.357284584s
	W0819 11:43:15.880906   19322 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p docker-flags-446000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p docker-flags-446000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0819 11:43:15.895525   19322 out.go:201] 
	W0819 11:43:15.906687   19322 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0819 11:43:15.906726   19322 out.go:270] * 
	* 
	W0819 11:43:15.909278   19322 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0819 11:43:15.915490   19322 out.go:201] 

** /stderr **
docker_test.go:53: failed to start minikube with args: "out/minikube-darwin-arm64 start -p docker-flags-446000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:56: (dbg) Run:  out/minikube-darwin-arm64 -p docker-flags-446000 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:56: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p docker-flags-446000 ssh "sudo systemctl show docker --property=Environment --no-pager": exit status 83 (72.297083ms)

-- stdout --
	* The control-plane node docker-flags-446000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p docker-flags-446000"

-- /stdout --
docker_test.go:58: failed to 'systemctl show docker' inside minikube. args "out/minikube-darwin-arm64 -p docker-flags-446000 ssh \"sudo systemctl show docker --property=Environment --no-pager\"": exit status 83
docker_test.go:63: expected env key/value "FOO=BAR" to be passed to minikube's docker and be included in: *"* The control-plane node docker-flags-446000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p docker-flags-446000\"\n"*.
docker_test.go:63: expected env key/value "BAZ=BAT" to be passed to minikube's docker and be included in: *"* The control-plane node docker-flags-446000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p docker-flags-446000\"\n"*.
docker_test.go:67: (dbg) Run:  out/minikube-darwin-arm64 -p docker-flags-446000 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
docker_test.go:67: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p docker-flags-446000 ssh "sudo systemctl show docker --property=ExecStart --no-pager": exit status 83 (47.719584ms)

-- stdout --
	* The control-plane node docker-flags-446000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p docker-flags-446000"

-- /stdout --
docker_test.go:69: failed on the second 'systemctl show docker' inside minikube. args "out/minikube-darwin-arm64 -p docker-flags-446000 ssh \"sudo systemctl show docker --property=ExecStart --no-pager\"": exit status 83
docker_test.go:73: expected "out/minikube-darwin-arm64 -p docker-flags-446000 ssh \"sudo systemctl show docker --property=ExecStart --no-pager\"" output to have include *--debug* . output: "* The control-plane node docker-flags-446000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p docker-flags-446000\"\n"
panic.go:626: *** TestDockerFlags FAILED at 2024-08-19 11:43:16.049573 -0700 PDT m=+720.234928626
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p docker-flags-446000 -n docker-flags-446000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p docker-flags-446000 -n docker-flags-446000: exit status 7 (28.560875ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "docker-flags-446000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "docker-flags-446000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p docker-flags-446000
--- FAIL: TestDockerFlags (10.24s)
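Every failed start in this group dies at the same step: socket_vmnet_client cannot reach the unix socket at /var/run/socket_vmnet ("Connection refused"), so QEMU is never launched and the profile is left in state=Stopped; the --docker-env/--docker-opt assertions then fail only as a consequence. A minimal probe, sketched below in Go (illustrative only, not part of the test suite), can confirm from the same host whether the socket_vmnet daemon is listening on that path:

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		// Dial the same unix socket that socket_vmnet_client uses in the
		// log above; a "connection refused" error reproduces the failure.
		conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
		if err != nil {
			fmt.Println("socket_vmnet not reachable:", err)
			return
		}
		defer conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}

If the probe fails, the socket_vmnet daemon on the CI host is not running (or is listening on a different path) and needs to be restarted before the qemu2-driver tests can pass.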

TestForceSystemdFlag (10.29s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-darwin-arm64 start -p force-systemd-flag-995000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 
docker_test.go:91: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p force-systemd-flag-995000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (10.0970425s)

-- stdout --
	* [force-systemd-flag-995000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19423
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19423-17178/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19423-17178/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "force-systemd-flag-995000" primary control-plane node in "force-systemd-flag-995000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "force-systemd-flag-995000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0819 11:43:00.818264   19301 out.go:345] Setting OutFile to fd 1 ...
	I0819 11:43:00.818382   19301 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 11:43:00.818385   19301 out.go:358] Setting ErrFile to fd 2...
	I0819 11:43:00.818388   19301 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 11:43:00.818513   19301 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19423-17178/.minikube/bin
	I0819 11:43:00.819588   19301 out.go:352] Setting JSON to false
	I0819 11:43:00.835781   19301 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":7947,"bootTime":1724085033,"procs":508,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0819 11:43:00.835861   19301 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0819 11:43:00.842363   19301 out.go:177] * [force-systemd-flag-995000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0819 11:43:00.850524   19301 out.go:177]   - MINIKUBE_LOCATION=19423
	I0819 11:43:00.850569   19301 notify.go:220] Checking for updates...
	I0819 11:43:00.859426   19301 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19423-17178/kubeconfig
	I0819 11:43:00.863522   19301 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0819 11:43:00.866537   19301 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0819 11:43:00.869534   19301 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19423-17178/.minikube
	I0819 11:43:00.876530   19301 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0819 11:43:00.880802   19301 config.go:182] Loaded profile config "force-systemd-env-214000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0819 11:43:00.880881   19301 config.go:182] Loaded profile config "multinode-587000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0819 11:43:00.880925   19301 driver.go:394] Setting default libvirt URI to qemu:///system
	I0819 11:43:00.885453   19301 out.go:177] * Using the qemu2 driver based on user configuration
	I0819 11:43:00.893421   19301 start.go:297] selected driver: qemu2
	I0819 11:43:00.893428   19301 start.go:901] validating driver "qemu2" against <nil>
	I0819 11:43:00.893434   19301 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0819 11:43:00.895766   19301 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0819 11:43:00.898499   19301 out.go:177] * Automatically selected the socket_vmnet network
	I0819 11:43:00.901644   19301 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0819 11:43:00.901659   19301 cni.go:84] Creating CNI manager for ""
	I0819 11:43:00.901668   19301 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0819 11:43:00.901673   19301 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0819 11:43:00.901709   19301 start.go:340] cluster config:
	{Name:force-systemd-flag-995000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:force-systemd-flag-995000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 11:43:00.905732   19301 iso.go:125] acquiring lock: {Name:mk19d2f9dcacbbe3e95275f970a96b2d84f09461 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 11:43:00.913457   19301 out.go:177] * Starting "force-systemd-flag-995000" primary control-plane node in "force-systemd-flag-995000" cluster
	I0819 11:43:00.917356   19301 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0819 11:43:00.917370   19301 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19423-17178/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0819 11:43:00.917378   19301 cache.go:56] Caching tarball of preloaded images
	I0819 11:43:00.917437   19301 preload.go:172] Found /Users/jenkins/minikube-integration/19423-17178/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0819 11:43:00.917443   19301 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0819 11:43:00.917509   19301 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19423-17178/.minikube/profiles/force-systemd-flag-995000/config.json ...
	I0819 11:43:00.917520   19301 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19423-17178/.minikube/profiles/force-systemd-flag-995000/config.json: {Name:mk5ec35c50b08d0db4023cef589f1d42a738fda8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 11:43:00.917886   19301 start.go:360] acquireMachinesLock for force-systemd-flag-995000: {Name:mk51682daf9d132f21e1aba1b32ed96e7f05425f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 11:43:00.917926   19301 start.go:364] duration metric: took 30.333µs to acquireMachinesLock for "force-systemd-flag-995000"
	I0819 11:43:00.917940   19301 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-995000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:force-systemd-flag-995000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0819 11:43:00.917968   19301 start.go:125] createHost starting for "" (driver="qemu2")
	I0819 11:43:00.925298   19301 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0819 11:43:00.944365   19301 start.go:159] libmachine.API.Create for "force-systemd-flag-995000" (driver="qemu2")
	I0819 11:43:00.944392   19301 client.go:168] LocalClient.Create starting
	I0819 11:43:00.944466   19301 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19423-17178/.minikube/certs/ca.pem
	I0819 11:43:00.944503   19301 main.go:141] libmachine: Decoding PEM data...
	I0819 11:43:00.944514   19301 main.go:141] libmachine: Parsing certificate...
	I0819 11:43:00.944554   19301 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19423-17178/.minikube/certs/cert.pem
	I0819 11:43:00.944579   19301 main.go:141] libmachine: Decoding PEM data...
	I0819 11:43:00.944589   19301 main.go:141] libmachine: Parsing certificate...
	I0819 11:43:00.945075   19301 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19423-17178/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19423-17178/.minikube/cache/iso/arm64/minikube-v1.33.1-1723740674-19452-arm64.iso...
	I0819 11:43:01.095797   19301 main.go:141] libmachine: Creating SSH key...
	I0819 11:43:01.184386   19301 main.go:141] libmachine: Creating Disk image...
	I0819 11:43:01.184393   19301 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0819 11:43:01.184565   19301 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/force-systemd-flag-995000/disk.qcow2.raw /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/force-systemd-flag-995000/disk.qcow2
	I0819 11:43:01.193749   19301 main.go:141] libmachine: STDOUT: 
	I0819 11:43:01.193771   19301 main.go:141] libmachine: STDERR: 
	I0819 11:43:01.193819   19301 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/force-systemd-flag-995000/disk.qcow2 +20000M
	I0819 11:43:01.201804   19301 main.go:141] libmachine: STDOUT: Image resized.
	
	I0819 11:43:01.201822   19301 main.go:141] libmachine: STDERR: 
	I0819 11:43:01.201844   19301 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/force-systemd-flag-995000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/force-systemd-flag-995000/disk.qcow2
	I0819 11:43:01.201848   19301 main.go:141] libmachine: Starting QEMU VM...
	I0819 11:43:01.201861   19301 qemu.go:418] Using hvf for hardware acceleration
	I0819 11:43:01.201891   19301 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/force-systemd-flag-995000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19423-17178/.minikube/machines/force-systemd-flag-995000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/force-systemd-flag-995000/qemu.pid -device virtio-net-pci,netdev=net0,mac=96:8b:0e:01:d0:95 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/force-systemd-flag-995000/disk.qcow2
	I0819 11:43:01.203466   19301 main.go:141] libmachine: STDOUT: 
	I0819 11:43:01.203482   19301 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0819 11:43:01.203501   19301 client.go:171] duration metric: took 259.105958ms to LocalClient.Create
	I0819 11:43:03.205731   19301 start.go:128] duration metric: took 2.287753125s to createHost
	I0819 11:43:03.205795   19301 start.go:83] releasing machines lock for "force-systemd-flag-995000", held for 2.287869958s
	W0819 11:43:03.205851   19301 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0819 11:43:03.231231   19301 out.go:177] * Deleting "force-systemd-flag-995000" in qemu2 ...
	W0819 11:43:03.254288   19301 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0819 11:43:03.254311   19301 start.go:729] Will try again in 5 seconds ...
	I0819 11:43:08.256487   19301 start.go:360] acquireMachinesLock for force-systemd-flag-995000: {Name:mk51682daf9d132f21e1aba1b32ed96e7f05425f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 11:43:08.399070   19301 start.go:364] duration metric: took 142.419833ms to acquireMachinesLock for "force-systemd-flag-995000"
	I0819 11:43:08.399226   19301 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-995000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:force-systemd-flag-995000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0819 11:43:08.399497   19301 start.go:125] createHost starting for "" (driver="qemu2")
	I0819 11:43:08.406214   19301 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0819 11:43:08.455152   19301 start.go:159] libmachine.API.Create for "force-systemd-flag-995000" (driver="qemu2")
	I0819 11:43:08.455195   19301 client.go:168] LocalClient.Create starting
	I0819 11:43:08.455308   19301 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19423-17178/.minikube/certs/ca.pem
	I0819 11:43:08.455396   19301 main.go:141] libmachine: Decoding PEM data...
	I0819 11:43:08.455414   19301 main.go:141] libmachine: Parsing certificate...
	I0819 11:43:08.455473   19301 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19423-17178/.minikube/certs/cert.pem
	I0819 11:43:08.455516   19301 main.go:141] libmachine: Decoding PEM data...
	I0819 11:43:08.455529   19301 main.go:141] libmachine: Parsing certificate...
	I0819 11:43:08.456177   19301 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19423-17178/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19423-17178/.minikube/cache/iso/arm64/minikube-v1.33.1-1723740674-19452-arm64.iso...
	I0819 11:43:08.710599   19301 main.go:141] libmachine: Creating SSH key...
	I0819 11:43:08.826958   19301 main.go:141] libmachine: Creating Disk image...
	I0819 11:43:08.826964   19301 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0819 11:43:08.827136   19301 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/force-systemd-flag-995000/disk.qcow2.raw /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/force-systemd-flag-995000/disk.qcow2
	I0819 11:43:08.836295   19301 main.go:141] libmachine: STDOUT: 
	I0819 11:43:08.836318   19301 main.go:141] libmachine: STDERR: 
	I0819 11:43:08.836384   19301 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/force-systemd-flag-995000/disk.qcow2 +20000M
	I0819 11:43:08.844209   19301 main.go:141] libmachine: STDOUT: Image resized.
	
	I0819 11:43:08.844229   19301 main.go:141] libmachine: STDERR: 
	I0819 11:43:08.844246   19301 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/force-systemd-flag-995000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/force-systemd-flag-995000/disk.qcow2
	I0819 11:43:08.844250   19301 main.go:141] libmachine: Starting QEMU VM...
	I0819 11:43:08.844259   19301 qemu.go:418] Using hvf for hardware acceleration
	I0819 11:43:08.844291   19301 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/force-systemd-flag-995000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19423-17178/.minikube/machines/force-systemd-flag-995000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/force-systemd-flag-995000/qemu.pid -device virtio-net-pci,netdev=net0,mac=56:5a:0d:0e:fb:d7 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/force-systemd-flag-995000/disk.qcow2
	I0819 11:43:08.845882   19301 main.go:141] libmachine: STDOUT: 
	I0819 11:43:08.845904   19301 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0819 11:43:08.845917   19301 client.go:171] duration metric: took 390.720166ms to LocalClient.Create
	I0819 11:43:10.847342   19301 start.go:128] duration metric: took 2.447808583s to createHost
	I0819 11:43:10.847394   19301 start.go:83] releasing machines lock for "force-systemd-flag-995000", held for 2.4482945s
	W0819 11:43:10.847690   19301 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p force-systemd-flag-995000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p force-systemd-flag-995000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0819 11:43:10.857800   19301 out.go:201] 
	W0819 11:43:10.862944   19301 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0819 11:43:10.862971   19301 out.go:270] * 
	* 
	W0819 11:43:10.865517   19301 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0819 11:43:10.873723   19301 out.go:201] 

** /stderr **
docker_test.go:93: failed to start minikube with args: "out/minikube-darwin-arm64 start -p force-systemd-flag-995000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:110: (dbg) Run:  out/minikube-darwin-arm64 -p force-systemd-flag-995000 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p force-systemd-flag-995000 ssh "docker info --format {{.CgroupDriver}}": exit status 83 (79.068833ms)

-- stdout --
	* The control-plane node force-systemd-flag-995000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p force-systemd-flag-995000"

-- /stdout --
docker_test.go:112: failed to get docker cgroup driver. args "out/minikube-darwin-arm64 -p force-systemd-flag-995000 ssh \"docker info --format {{.CgroupDriver}}\"": exit status 83
docker_test.go:106: *** TestForceSystemdFlag FAILED at 2024-08-19 11:43:10.970426 -0700 PDT m=+715.155757001
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-flag-995000 -n force-systemd-flag-995000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-flag-995000 -n force-systemd-flag-995000: exit status 7 (33.762208ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "force-systemd-flag-995000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "force-systemd-flag-995000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p force-systemd-flag-995000
--- FAIL: TestForceSystemdFlag (10.29s)
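Note that the disk preparation succeeds on every attempt above: both qemu-img invocations (convert to qcow2, then resize by +20000M) return empty STDERR, and only the subsequent VM launch through socket_vmnet fails. The two steps can be reproduced standalone with a sketch like the following, where the file names are placeholders rather than the Jenkins paths from the log:

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// Mirror the driver's two disk steps: convert the raw image to
		// qcow2, then grow the qcow2 image by 20000M.
		steps := [][]string{
			{"convert", "-f", "raw", "-O", "qcow2", "disk.qcow2.raw", "disk.qcow2"},
			{"resize", "disk.qcow2", "+20000M"},
		}
		for _, args := range steps {
			out, err := exec.Command("qemu-img", args...).CombinedOutput()
			fmt.Printf("qemu-img %v: %s\n", args, out)
			if err != nil {
				fmt.Println("error:", err)
				return
			}
		}
	}

This isolates the failure to the networking layer: TestForceSystemdFlag (like TestDockerFlags above) never reaches the cgroup-driver check it is actually meant to exercise.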

TestForceSystemdEnv (11.56s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-darwin-arm64 start -p force-systemd-env-214000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 
docker_test.go:155: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p force-systemd-env-214000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (11.369909167s)

-- stdout --
	* [force-systemd-env-214000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19423
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19423-17178/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19423-17178/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=true
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "force-systemd-env-214000" primary control-plane node in "force-systemd-env-214000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "force-systemd-env-214000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0819 11:42:54.377737   19267 out.go:345] Setting OutFile to fd 1 ...
	I0819 11:42:54.377862   19267 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 11:42:54.377870   19267 out.go:358] Setting ErrFile to fd 2...
	I0819 11:42:54.377872   19267 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 11:42:54.378014   19267 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19423-17178/.minikube/bin
	I0819 11:42:54.379087   19267 out.go:352] Setting JSON to false
	I0819 11:42:54.395426   19267 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":7941,"bootTime":1724085033,"procs":507,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0819 11:42:54.395509   19267 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0819 11:42:54.401390   19267 out.go:177] * [force-systemd-env-214000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0819 11:42:54.408278   19267 out.go:177]   - MINIKUBE_LOCATION=19423
	I0819 11:42:54.408382   19267 notify.go:220] Checking for updates...
	I0819 11:42:54.415252   19267 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19423-17178/kubeconfig
	I0819 11:42:54.418304   19267 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0819 11:42:54.421317   19267 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0819 11:42:54.424223   19267 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19423-17178/.minikube
	I0819 11:42:54.427280   19267 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=true
	I0819 11:42:54.430647   19267 config.go:182] Loaded profile config "multinode-587000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0819 11:42:54.430697   19267 driver.go:394] Setting default libvirt URI to qemu:///system
	I0819 11:42:54.435214   19267 out.go:177] * Using the qemu2 driver based on user configuration
	I0819 11:42:54.442231   19267 start.go:297] selected driver: qemu2
	I0819 11:42:54.442237   19267 start.go:901] validating driver "qemu2" against <nil>
	I0819 11:42:54.442243   19267 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0819 11:42:54.444638   19267 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0819 11:42:54.447284   19267 out.go:177] * Automatically selected the socket_vmnet network
	I0819 11:42:54.450393   19267 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0819 11:42:54.450414   19267 cni.go:84] Creating CNI manager for ""
	I0819 11:42:54.450430   19267 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0819 11:42:54.450445   19267 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0819 11:42:54.450482   19267 start.go:340] cluster config:
	{Name:force-systemd-env-214000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:force-systemd-env-214000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 11:42:54.454213   19267 iso.go:125] acquiring lock: {Name:mk19d2f9dcacbbe3e95275f970a96b2d84f09461 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 11:42:54.460200   19267 out.go:177] * Starting "force-systemd-env-214000" primary control-plane node in "force-systemd-env-214000" cluster
	I0819 11:42:54.464369   19267 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0819 11:42:54.464388   19267 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19423-17178/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0819 11:42:54.464396   19267 cache.go:56] Caching tarball of preloaded images
	I0819 11:42:54.464456   19267 preload.go:172] Found /Users/jenkins/minikube-integration/19423-17178/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0819 11:42:54.464480   19267 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0819 11:42:54.464546   19267 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19423-17178/.minikube/profiles/force-systemd-env-214000/config.json ...
	I0819 11:42:54.464563   19267 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19423-17178/.minikube/profiles/force-systemd-env-214000/config.json: {Name:mk2f7c009a9bad1ff2eb5c0ebb904bb6d67f890a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 11:42:54.464782   19267 start.go:360] acquireMachinesLock for force-systemd-env-214000: {Name:mk51682daf9d132f21e1aba1b32ed96e7f05425f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 11:42:54.464835   19267 start.go:364] duration metric: took 43.125µs to acquireMachinesLock for "force-systemd-env-214000"
	I0819 11:42:54.464846   19267 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-214000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:force-systemd-env-214000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0819 11:42:54.464871   19267 start.go:125] createHost starting for "" (driver="qemu2")
	I0819 11:42:54.473280   19267 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0819 11:42:54.490965   19267 start.go:159] libmachine.API.Create for "force-systemd-env-214000" (driver="qemu2")
	I0819 11:42:54.490986   19267 client.go:168] LocalClient.Create starting
	I0819 11:42:54.491051   19267 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19423-17178/.minikube/certs/ca.pem
	I0819 11:42:54.491081   19267 main.go:141] libmachine: Decoding PEM data...
	I0819 11:42:54.491089   19267 main.go:141] libmachine: Parsing certificate...
	I0819 11:42:54.491125   19267 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19423-17178/.minikube/certs/cert.pem
	I0819 11:42:54.491150   19267 main.go:141] libmachine: Decoding PEM data...
	I0819 11:42:54.491158   19267 main.go:141] libmachine: Parsing certificate...
	I0819 11:42:54.491526   19267 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19423-17178/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19423-17178/.minikube/cache/iso/arm64/minikube-v1.33.1-1723740674-19452-arm64.iso...
	I0819 11:42:54.644032   19267 main.go:141] libmachine: Creating SSH key...
	I0819 11:42:54.710512   19267 main.go:141] libmachine: Creating Disk image...
	I0819 11:42:54.710518   19267 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0819 11:42:54.710703   19267 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/force-systemd-env-214000/disk.qcow2.raw /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/force-systemd-env-214000/disk.qcow2
	I0819 11:42:54.719922   19267 main.go:141] libmachine: STDOUT: 
	I0819 11:42:54.719941   19267 main.go:141] libmachine: STDERR: 
	I0819 11:42:54.720003   19267 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/force-systemd-env-214000/disk.qcow2 +20000M
	I0819 11:42:54.727887   19267 main.go:141] libmachine: STDOUT: Image resized.
	
	I0819 11:42:54.727903   19267 main.go:141] libmachine: STDERR: 
	I0819 11:42:54.727918   19267 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/force-systemd-env-214000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/force-systemd-env-214000/disk.qcow2
	I0819 11:42:54.727925   19267 main.go:141] libmachine: Starting QEMU VM...
	I0819 11:42:54.727936   19267 qemu.go:418] Using hvf for hardware acceleration
	I0819 11:42:54.727968   19267 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/force-systemd-env-214000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19423-17178/.minikube/machines/force-systemd-env-214000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/force-systemd-env-214000/qemu.pid -device virtio-net-pci,netdev=net0,mac=9e:bb:f0:ac:05:6f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/force-systemd-env-214000/disk.qcow2
	I0819 11:42:54.729617   19267 main.go:141] libmachine: STDOUT: 
	I0819 11:42:54.729635   19267 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0819 11:42:54.729654   19267 client.go:171] duration metric: took 238.664792ms to LocalClient.Create
	I0819 11:42:56.731757   19267 start.go:128] duration metric: took 2.266882958s to createHost
	I0819 11:42:56.731774   19267 start.go:83] releasing machines lock for "force-systemd-env-214000", held for 2.266945917s
	W0819 11:42:56.731790   19267 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0819 11:42:56.739297   19267 out.go:177] * Deleting "force-systemd-env-214000" in qemu2 ...
	W0819 11:42:56.748963   19267 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0819 11:42:56.748969   19267 start.go:729] Will try again in 5 seconds ...
	I0819 11:43:01.751186   19267 start.go:360] acquireMachinesLock for force-systemd-env-214000: {Name:mk51682daf9d132f21e1aba1b32ed96e7f05425f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 11:43:03.206011   19267 start.go:364] duration metric: took 1.454667833s to acquireMachinesLock for "force-systemd-env-214000"
	I0819 11:43:03.206155   19267 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-214000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesC
onfig:{KubernetesVersion:v1.31.0 ClusterName:force-systemd-env-214000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror:
DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0819 11:43:03.206415   19267 start.go:125] createHost starting for "" (driver="qemu2")
	I0819 11:43:03.221320   19267 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0819 11:43:03.271422   19267 start.go:159] libmachine.API.Create for "force-systemd-env-214000" (driver="qemu2")
	I0819 11:43:03.271474   19267 client.go:168] LocalClient.Create starting
	I0819 11:43:03.271577   19267 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19423-17178/.minikube/certs/ca.pem
	I0819 11:43:03.271638   19267 main.go:141] libmachine: Decoding PEM data...
	I0819 11:43:03.271655   19267 main.go:141] libmachine: Parsing certificate...
	I0819 11:43:03.271710   19267 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19423-17178/.minikube/certs/cert.pem
	I0819 11:43:03.271754   19267 main.go:141] libmachine: Decoding PEM data...
	I0819 11:43:03.271768   19267 main.go:141] libmachine: Parsing certificate...
	I0819 11:43:03.272406   19267 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19423-17178/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19423-17178/.minikube/cache/iso/arm64/minikube-v1.33.1-1723740674-19452-arm64.iso...
	I0819 11:43:03.464952   19267 main.go:141] libmachine: Creating SSH key...
	I0819 11:43:03.653164   19267 main.go:141] libmachine: Creating Disk image...
	I0819 11:43:03.653170   19267 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0819 11:43:03.653381   19267 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/force-systemd-env-214000/disk.qcow2.raw /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/force-systemd-env-214000/disk.qcow2
	I0819 11:43:03.663124   19267 main.go:141] libmachine: STDOUT: 
	I0819 11:43:03.663144   19267 main.go:141] libmachine: STDERR: 
	I0819 11:43:03.663197   19267 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/force-systemd-env-214000/disk.qcow2 +20000M
	I0819 11:43:03.671222   19267 main.go:141] libmachine: STDOUT: Image resized.
	
	I0819 11:43:03.671238   19267 main.go:141] libmachine: STDERR: 
	I0819 11:43:03.671261   19267 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/force-systemd-env-214000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/force-systemd-env-214000/disk.qcow2
	I0819 11:43:03.671265   19267 main.go:141] libmachine: Starting QEMU VM...
	I0819 11:43:03.671277   19267 qemu.go:418] Using hvf for hardware acceleration
	I0819 11:43:03.671316   19267 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/force-systemd-env-214000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19423-17178/.minikube/machines/force-systemd-env-214000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/force-systemd-env-214000/qemu.pid -device virtio-net-pci,netdev=net0,mac=0a:41:eb:03:2c:b5 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/force-systemd-env-214000/disk.qcow2
	I0819 11:43:03.672883   19267 main.go:141] libmachine: STDOUT: 
	I0819 11:43:03.672899   19267 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0819 11:43:03.672912   19267 client.go:171] duration metric: took 401.434875ms to LocalClient.Create
	I0819 11:43:05.675178   19267 start.go:128] duration metric: took 2.468724375s to createHost
	I0819 11:43:05.675240   19267 start.go:83] releasing machines lock for "force-systemd-env-214000", held for 2.469181125s
	W0819 11:43:05.675615   19267 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p force-systemd-env-214000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p force-systemd-env-214000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0819 11:43:05.684327   19267 out.go:201] 
	W0819 11:43:05.693294   19267 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0819 11:43:05.693360   19267 out.go:270] * 
	* 
	W0819 11:43:05.695678   19267 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0819 11:43:05.704158   19267 out.go:201] 

** /stderr **
docker_test.go:157: failed to start minikube with args: "out/minikube-darwin-arm64 start -p force-systemd-env-214000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:110: (dbg) Run:  out/minikube-darwin-arm64 -p force-systemd-env-214000 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p force-systemd-env-214000 ssh "docker info --format {{.CgroupDriver}}": exit status 83 (79.096875ms)

-- stdout --
	* The control-plane node force-systemd-env-214000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p force-systemd-env-214000"

-- /stdout --
docker_test.go:112: failed to get docker cgroup driver. args "out/minikube-darwin-arm64 -p force-systemd-env-214000 ssh \"docker info --format {{.CgroupDriver}}\"": exit status 83
docker_test.go:166: *** TestForceSystemdEnv FAILED at 2024-08-19 11:43:05.80064 -0700 PDT m=+709.985946626
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-env-214000 -n force-systemd-env-214000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-env-214000 -n force-systemd-env-214000: exit status 7 (33.107667ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "force-systemd-env-214000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "force-systemd-env-214000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p force-systemd-env-214000
--- FAIL: TestForceSystemdEnv (11.56s)
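Every failure in this run traces back to the same line in the QEMU start-up: socket_vmnet_client cannot reach the socket_vmnet daemon at /var/run/socket_vmnet, so no VM ever gets a network and libmachine gives up. A minimal host-side triage, as a sketch (paths are taken from the failing command logged above; `true` is only a stand-in for the qemu-system-aarch64 invocation):

	ls -l /var/run/socket_vmnet        # does the unix socket exist at all?
	pgrep -fl socket_vmnet             # is the socket_vmnet daemon running?
	# reproduce the exact failure without booting a VM:
	/opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet true

If the last command prints 'Failed to connect to "/var/run/socket_vmnet": Connection refused', restarting the daemon (however it is supervised on this host) should clear the whole class of failures below.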

TestErrorSpam/setup (9.95s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -p nospam-555000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-555000 --driver=qemu2 
error_spam_test.go:81: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p nospam-555000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-555000 --driver=qemu2 : exit status 80 (9.945907125s)

-- stdout --
	* [nospam-555000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19423
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19423-17178/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19423-17178/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "nospam-555000" primary control-plane node in "nospam-555000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2250MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "nospam-555000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2250MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p nospam-555000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
error_spam_test.go:83: "out/minikube-darwin-arm64 start -p nospam-555000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-555000 --driver=qemu2 " failed: exit status 80
error_spam_test.go:96: unexpected stderr: "! StartHost failed, but will try again: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"
error_spam_test.go:96: unexpected stderr: "* Failed to start qemu2 VM. Running \"minikube delete -p nospam-555000\" may fix it: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"
error_spam_test.go:96: unexpected stderr: "X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"
error_spam_test.go:96: unexpected stderr: "* "
error_spam_test.go:96: unexpected stderr: "╭─────────────────────────────────────────────────────────────────────────────────────────────╮"
error_spam_test.go:96: unexpected stderr: "│                                                                                             │"
error_spam_test.go:96: unexpected stderr: "│    * If the above advice does not help, please let us know:                                 │"
error_spam_test.go:96: unexpected stderr: "│      https://github.com/kubernetes/minikube/issues/new/choose                               │"
error_spam_test.go:96: unexpected stderr: "│                                                                                             │"
error_spam_test.go:96: unexpected stderr: "│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │"
error_spam_test.go:96: unexpected stderr: "│                                                                                             │"
error_spam_test.go:96: unexpected stderr: "╰─────────────────────────────────────────────────────────────────────────────────────────────╯"
error_spam_test.go:110: minikube stdout:
* [nospam-555000] minikube v1.33.1 on Darwin 14.5 (arm64)
- MINIKUBE_LOCATION=19423
- KUBECONFIG=/Users/jenkins/minikube-integration/19423-17178/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_HOME=/Users/jenkins/minikube-integration/19423-17178/.minikube
- MINIKUBE_FORCE_SYSTEMD=
* Using the qemu2 driver based on user configuration
* Automatically selected the socket_vmnet network
* Starting "nospam-555000" primary control-plane node in "nospam-555000" cluster
* Creating qemu2 VM (CPUs=2, Memory=2250MB, Disk=20000MB) ...
OUTPUT: 
ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused

* Deleting "nospam-555000" in qemu2 ...
* Creating qemu2 VM (CPUs=2, Memory=2250MB, Disk=20000MB) ...
OUTPUT: 
ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused

error_spam_test.go:111: minikube stderr:
! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
* Failed to start qemu2 VM. Running "minikube delete -p nospam-555000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
* 
╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                             │
│    * If the above advice does not help, please let us know:                                 │
│      https://github.com/kubernetes/minikube/issues/new/choose                               │
│                                                                                             │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
│                                                                                             │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
error_spam_test.go:121: missing kubeadm init sub-step "Generating certificates and keys ..."
error_spam_test.go:121: missing kubeadm init sub-step "Booting up control plane ..."
error_spam_test.go:121: missing kubeadm init sub-step "Configuring RBAC rules ..."
--- FAIL: TestErrorSpam/setup (9.95s)
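The trailing "missing kubeadm init sub-step" complaints are follow-on noise: error_spam_test also requires that a clean start log the standard kubeadm phases, and kubeadm never ran because no VM booted. The check amounts to something like this sketch (start.log is a hypothetical capture of the start output, not a file the test writes):

	for step in "Generating certificates and keys ..." \
	            "Booting up control plane ..." \
	            "Configuring RBAC rules ..."; do
	    grep -qF "$step" start.log || echo "missing kubeadm init sub-step \"$step\""
	done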

TestFunctional/serial/StartWithProxy (9.89s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2234: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-944000 --memory=4000 --apiserver-port=8441 --wait=all --driver=qemu2 
functional_test.go:2234: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-944000 --memory=4000 --apiserver-port=8441 --wait=all --driver=qemu2 : exit status 80 (9.813710917s)

-- stdout --
	* [functional-944000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19423
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19423-17178/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19423-17178/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "functional-944000" primary control-plane node in "functional-944000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "functional-944000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! Local proxy ignored: not passing HTTP_PROXY=localhost:52949 to docker env.
	! Local proxy ignored: not passing HTTP_PROXY=localhost:52949 to docker env.
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! Local proxy ignored: not passing HTTP_PROXY=localhost:52949 to docker env.
	* Failed to start qemu2 VM. Running "minikube delete -p functional-944000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2236: failed minikube start. args "out/minikube-darwin-arm64 start -p functional-944000 --memory=4000 --apiserver-port=8441 --wait=all --driver=qemu2 ": exit status 80
functional_test.go:2241: start stdout=* [functional-944000] minikube v1.33.1 on Darwin 14.5 (arm64)
- MINIKUBE_LOCATION=19423
- KUBECONFIG=/Users/jenkins/minikube-integration/19423-17178/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_HOME=/Users/jenkins/minikube-integration/19423-17178/.minikube
- MINIKUBE_FORCE_SYSTEMD=
* Using the qemu2 driver based on user configuration
* Automatically selected the socket_vmnet network
* Starting "functional-944000" primary control-plane node in "functional-944000" cluster
* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
OUTPUT: 
ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused

* Deleting "functional-944000" in qemu2 ...
* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
OUTPUT: 
ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused

, want: *Found network options:*
functional_test.go:2246: start stderr=! Local proxy ignored: not passing HTTP_PROXY=localhost:52949 to docker env.
! Local proxy ignored: not passing HTTP_PROXY=localhost:52949 to docker env.
! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
! Local proxy ignored: not passing HTTP_PROXY=localhost:52949 to docker env.
* Failed to start qemu2 VM. Running "minikube delete -p functional-944000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
* 
╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                             │
│    * If the above advice does not help, please let us know:                                 │
│      https://github.com/kubernetes/minikube/issues/new/choose                               │
│                                                                                             │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
│                                                                                             │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
, want: *You appear to be using a proxy*
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-944000 -n functional-944000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-944000 -n functional-944000: exit status 7 (69.991833ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-944000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/StartWithProxy (9.89s)
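StartWithProxy sets HTTP_PROXY to a throwaway local proxy before starting, then expects minikube to report it ("Found network options" / "You appear to be using a proxy"). The "! Local proxy ignored" warnings show the detection working as far as it can: a localhost proxy is unreachable from inside the guest, so it is deliberately not passed to the docker env. Roughly what the harness drives, assuming a proxy is listening on the logged port 52949:

	HTTP_PROXY=localhost:52949 out/minikube-darwin-arm64 start -p functional-944000 \
	    --memory=4000 --apiserver-port=8441 --wait=all --driver=qemu2

The assertion never gets a chance to pass here, because the start aborts on the socket_vmnet failure before the proxy notices it checks for are ever printed.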

TestFunctional/serial/SoftStart (5.26s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:659: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-944000 --alsologtostderr -v=8
functional_test.go:659: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-944000 --alsologtostderr -v=8: exit status 80 (5.185528583s)

-- stdout --
	* [functional-944000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19423
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19423-17178/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19423-17178/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "functional-944000" primary control-plane node in "functional-944000" cluster
	* Restarting existing qemu2 VM for "functional-944000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "functional-944000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0819 11:32:14.909142   17928 out.go:345] Setting OutFile to fd 1 ...
	I0819 11:32:14.909257   17928 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 11:32:14.909263   17928 out.go:358] Setting ErrFile to fd 2...
	I0819 11:32:14.909267   17928 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 11:32:14.909390   17928 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19423-17178/.minikube/bin
	I0819 11:32:14.910313   17928 out.go:352] Setting JSON to false
	I0819 11:32:14.926888   17928 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":7301,"bootTime":1724085033,"procs":515,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0819 11:32:14.926968   17928 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0819 11:32:14.931617   17928 out.go:177] * [functional-944000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0819 11:32:14.937529   17928 out.go:177]   - MINIKUBE_LOCATION=19423
	I0819 11:32:14.937594   17928 notify.go:220] Checking for updates...
	I0819 11:32:14.944485   17928 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19423-17178/kubeconfig
	I0819 11:32:14.947452   17928 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0819 11:32:14.950489   17928 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0819 11:32:14.951877   17928 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19423-17178/.minikube
	I0819 11:32:14.955510   17928 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0819 11:32:14.958794   17928 config.go:182] Loaded profile config "functional-944000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0819 11:32:14.958846   17928 driver.go:394] Setting default libvirt URI to qemu:///system
	I0819 11:32:14.963293   17928 out.go:177] * Using the qemu2 driver based on existing profile
	I0819 11:32:14.970514   17928 start.go:297] selected driver: qemu2
	I0819 11:32:14.970521   17928 start.go:901] validating driver "qemu2" against &{Name:functional-944000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kub
ernetesVersion:v1.31.0 ClusterName:functional-944000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[
] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 11:32:14.970566   17928 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0819 11:32:14.972841   17928 cni.go:84] Creating CNI manager for ""
	I0819 11:32:14.972864   17928 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0819 11:32:14.972905   17928 start.go:340] cluster config:
	{Name:functional-944000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:functional-944000 Namespace:default APIServe
rHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:f
alse DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 11:32:14.976543   17928 iso.go:125] acquiring lock: {Name:mk19d2f9dcacbbe3e95275f970a96b2d84f09461 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 11:32:14.985428   17928 out.go:177] * Starting "functional-944000" primary control-plane node in "functional-944000" cluster
	I0819 11:32:14.989514   17928 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0819 11:32:14.989531   17928 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19423-17178/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0819 11:32:14.989539   17928 cache.go:56] Caching tarball of preloaded images
	I0819 11:32:14.989600   17928 preload.go:172] Found /Users/jenkins/minikube-integration/19423-17178/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0819 11:32:14.989609   17928 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0819 11:32:14.989673   17928 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19423-17178/.minikube/profiles/functional-944000/config.json ...
	I0819 11:32:14.990117   17928 start.go:360] acquireMachinesLock for functional-944000: {Name:mk51682daf9d132f21e1aba1b32ed96e7f05425f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 11:32:14.990148   17928 start.go:364] duration metric: took 24.792µs to acquireMachinesLock for "functional-944000"
	I0819 11:32:14.990157   17928 start.go:96] Skipping create...Using existing machine configuration
	I0819 11:32:14.990163   17928 fix.go:54] fixHost starting: 
	I0819 11:32:14.990282   17928 fix.go:112] recreateIfNeeded on functional-944000: state=Stopped err=<nil>
	W0819 11:32:14.990290   17928 fix.go:138] unexpected machine state, will restart: <nil>
	I0819 11:32:14.998432   17928 out.go:177] * Restarting existing qemu2 VM for "functional-944000" ...
	I0819 11:32:15.002409   17928 qemu.go:418] Using hvf for hardware acceleration
	I0819 11:32:15.002447   17928 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/functional-944000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19423-17178/.minikube/machines/functional-944000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/functional-944000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ba:58:f8:78:c1:c5 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/functional-944000/disk.qcow2
	I0819 11:32:15.004418   17928 main.go:141] libmachine: STDOUT: 
	I0819 11:32:15.004439   17928 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0819 11:32:15.004466   17928 fix.go:56] duration metric: took 14.303042ms for fixHost
	I0819 11:32:15.004472   17928 start.go:83] releasing machines lock for "functional-944000", held for 14.3195ms
	W0819 11:32:15.004478   17928 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0819 11:32:15.004529   17928 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0819 11:32:15.004534   17928 start.go:729] Will try again in 5 seconds ...
	I0819 11:32:20.006664   17928 start.go:360] acquireMachinesLock for functional-944000: {Name:mk51682daf9d132f21e1aba1b32ed96e7f05425f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 11:32:20.007046   17928 start.go:364] duration metric: took 315.791µs to acquireMachinesLock for "functional-944000"
	I0819 11:32:20.007172   17928 start.go:96] Skipping create...Using existing machine configuration
	I0819 11:32:20.007193   17928 fix.go:54] fixHost starting: 
	I0819 11:32:20.007926   17928 fix.go:112] recreateIfNeeded on functional-944000: state=Stopped err=<nil>
	W0819 11:32:20.007953   17928 fix.go:138] unexpected machine state, will restart: <nil>
	I0819 11:32:20.013414   17928 out.go:177] * Restarting existing qemu2 VM for "functional-944000" ...
	I0819 11:32:20.021271   17928 qemu.go:418] Using hvf for hardware acceleration
	I0819 11:32:20.021441   17928 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/functional-944000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19423-17178/.minikube/machines/functional-944000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/functional-944000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ba:58:f8:78:c1:c5 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/functional-944000/disk.qcow2
	I0819 11:32:20.030605   17928 main.go:141] libmachine: STDOUT: 
	I0819 11:32:20.030738   17928 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0819 11:32:20.030832   17928 fix.go:56] duration metric: took 23.643ms for fixHost
	I0819 11:32:20.030857   17928 start.go:83] releasing machines lock for "functional-944000", held for 23.790375ms
	W0819 11:32:20.031059   17928 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p functional-944000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p functional-944000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0819 11:32:20.035883   17928 out.go:201] 
	W0819 11:32:20.040416   17928 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0819 11:32:20.040439   17928 out.go:270] * 
	* 
	W0819 11:32:20.043085   17928 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0819 11:32:20.051290   17928 out.go:201] 

** /stderr **
functional_test.go:661: failed to soft start minikube. args "out/minikube-darwin-arm64 start -p functional-944000 --alsologtostderr -v=8": exit status 80
functional_test.go:663: soft start took 5.186997084s for "functional-944000" cluster.
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-944000 -n functional-944000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-944000 -n functional-944000: exit status 7 (69.846292ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-944000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/SoftStart (5.26s)

TestFunctional/serial/KubeContext (0.06s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:681: (dbg) Run:  kubectl config current-context
functional_test.go:681: (dbg) Non-zero exit: kubectl config current-context: exit status 1 (32.211125ms)

** stderr ** 
	error: current-context is not set

** /stderr **
functional_test.go:683: failed to get current-context. args "kubectl config current-context" : exit status 1
functional_test.go:687: expected current-context = "functional-944000", but got *""*
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-944000 -n functional-944000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-944000 -n functional-944000: exit status 7 (30.894ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-944000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/KubeContext (0.06s)
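The context check fails for a prosaic reason: minikube only writes a functional-944000 entry (and sets current-context) in the kubeconfig at /Users/jenkins/minikube-integration/19423-17178/kubeconfig after a host actually starts. On a healthy run the same state can be inspected by hand with standard kubectl commands:

	kubectl config get-contexts                  # contexts minikube has written
	kubectl config use-context functional-944000 # select the profile's context
	kubectl config current-context               # should print functional-944000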

TestFunctional/serial/KubectlGetPods (0.06s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:696: (dbg) Run:  kubectl --context functional-944000 get po -A
functional_test.go:696: (dbg) Non-zero exit: kubectl --context functional-944000 get po -A: exit status 1 (26.513041ms)

** stderr ** 
	Error in configuration: context was not found for specified context: functional-944000

** /stderr **
functional_test.go:698: failed to get kubectl pods: args "kubectl --context functional-944000 get po -A" : exit status 1
functional_test.go:702: expected stderr to be empty but got *"Error in configuration: context was not found for specified context: functional-944000\n"*: args "kubectl --context functional-944000 get po -A"
functional_test.go:705: expected stdout to include *kube-system* but got *""*. args: "kubectl --context functional-944000 get po -A"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-944000 -n functional-944000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-944000 -n functional-944000: exit status 7 (30.613666ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-944000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/KubectlGetPods (0.06s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.04s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1124: (dbg) Run:  out/minikube-darwin-arm64 -p functional-944000 ssh sudo crictl images
functional_test.go:1124: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-944000 ssh sudo crictl images: exit status 83 (40.916334ms)

-- stdout --
	* The control-plane node functional-944000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-944000"

-- /stdout --
functional_test.go:1126: failed to get images by "out/minikube-darwin-arm64 -p functional-944000 ssh sudo crictl images" ssh exit status 83
functional_test.go:1130: expected sha for pause:3.3 "3d18732f8686c" to be in the output but got *
-- stdout --
	* The control-plane node functional-944000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-944000"

-- /stdout --*
--- FAIL: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.04s)

TestFunctional/serial/CacheCmd/cache/cache_reload (0.16s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1147: (dbg) Run:  out/minikube-darwin-arm64 -p functional-944000 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1147: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-944000 ssh sudo docker rmi registry.k8s.io/pause:latest: exit status 83 (42.545166ms)

-- stdout --
	* The control-plane node functional-944000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-944000"

-- /stdout --
functional_test.go:1150: failed to manually delete image "out/minikube-darwin-arm64 -p functional-944000 ssh sudo docker rmi registry.k8s.io/pause:latest" : exit status 83
functional_test.go:1153: (dbg) Run:  out/minikube-darwin-arm64 -p functional-944000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-944000 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 83 (43.088584ms)

-- stdout --
	* The control-plane node functional-944000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-944000"

-- /stdout --
functional_test.go:1158: (dbg) Run:  out/minikube-darwin-arm64 -p functional-944000 cache reload
functional_test.go:1163: (dbg) Run:  out/minikube-darwin-arm64 -p functional-944000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1163: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-944000 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 83 (40.840542ms)

-- stdout --
	* The control-plane node functional-944000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-944000"

-- /stdout --
functional_test.go:1165: expected "out/minikube-darwin-arm64 -p functional-944000 ssh sudo crictl inspecti registry.k8s.io/pause:latest" to run successfully but got error: exit status 83
--- FAIL: TestFunctional/serial/CacheCmd/cache/cache_reload (0.16s)
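For reference, the sequence this test automates, spelled out as the shell steps it runs (all commands appear verbatim above; comments note the expected result on a running node):

	# remove the cached image from inside the node
	out/minikube-darwin-arm64 -p functional-944000 ssh sudo docker rmi registry.k8s.io/pause:latest
	# confirm it is gone (this inspecti is expected to fail)
	out/minikube-darwin-arm64 -p functional-944000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
	# push the host-side cache back into the node
	out/minikube-darwin-arm64 -p functional-944000 cache reload
	# now the image should be present again
	out/minikube-darwin-arm64 -p functional-944000 ssh sudo crictl inspecti registry.k8s.io/pause:latest

Here every ssh step exits 83 instead, because there is no running host to ssh into.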

TestFunctional/serial/MinikubeKubectlCmd (0.78s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:716: (dbg) Run:  out/minikube-darwin-arm64 -p functional-944000 kubectl -- --context functional-944000 get pods
functional_test.go:716: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-944000 kubectl -- --context functional-944000 get pods: exit status 1 (746.606833ms)

** stderr ** 
	Error in configuration: 
	* context was not found for specified context: functional-944000
	* no server found for cluster "functional-944000"

** /stderr **
functional_test.go:719: failed to get pods. args "out/minikube-darwin-arm64 -p functional-944000 kubectl -- --context functional-944000 get pods": exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-944000 -n functional-944000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-944000 -n functional-944000: exit status 7 (32.849959ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-944000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/MinikubeKubectlCmd (0.78s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (1.05s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:741: (dbg) Run:  out/kubectl --context functional-944000 get pods
functional_test.go:741: (dbg) Non-zero exit: out/kubectl --context functional-944000 get pods: exit status 1 (1.01994s)

** stderr ** 
	Error in configuration: 
	* context was not found for specified context: functional-944000
	* no server found for cluster "functional-944000"

** /stderr **
functional_test.go:744: failed to run kubectl directly. args "out/kubectl --context functional-944000 get pods": exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-944000 -n functional-944000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-944000 -n functional-944000: exit status 7 (31.140959ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-944000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/MinikubeKubectlCmdDirectly (1.05s)
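Note: MinikubeKubectlCmd and MinikubeKubectlCmdDirectly fail identically because the earlier start never completed, so no "functional-944000" context was ever written to the kubeconfig. A minimal sketch of that diagnosis follows; "kubectl config get-contexts -o name" is a standard kubectl invocation, everything else in the snippet is a hypothetical helper, not suite code:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Lists one context name per line from the active kubeconfig
	// (honors the KUBECONFIG environment variable, as the test run does).
	out, err := exec.Command("kubectl", "config", "get-contexts", "-o", "name").Output()
	if err != nil {
		fmt.Println("could not list contexts:", err)
		return
	}
	for _, ctx := range strings.Fields(string(out)) {
		if ctx == "functional-944000" {
			fmt.Println("context exists; the failure lies elsewhere")
			return
		}
	}
	fmt.Println("context \"functional-944000\" is missing: start never wrote it")
}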
TestFunctional/serial/ExtraConfig (5.26s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:757: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-944000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:757: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-944000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: exit status 80 (5.186060375s)

-- stdout --
	* [functional-944000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19423
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19423-17178/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19423-17178/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "functional-944000" primary control-plane node in "functional-944000" cluster
	* Restarting existing qemu2 VM for "functional-944000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "functional-944000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p functional-944000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:759: failed to restart minikube. args "out/minikube-darwin-arm64 start -p functional-944000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all": exit status 80
functional_test.go:761: restart took 5.186653833s for "functional-944000" cluster.
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-944000 -n functional-944000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-944000 -n functional-944000: exit status 7 (69.613709ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-944000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/ExtraConfig (5.26s)
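Note: this restart dies on the root cause behind most failures in this report: the qemu2 driver hands the VM's network to the socket_vmnet daemon (see the socket_vmnet_client invocation in the Last Start log later in this report), and the connect to /var/run/socket_vmnet is refused, i.e. nothing is listening on that socket on the CI host. A minimal connectivity probe, assuming only the socket path shown in this run's config (SocketVMnetPath):

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Dial the unix socket the qemu2 driver depends on; "connection refused"
	// here reproduces the driver error without starting a VM.
	conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
	if err != nil {
		fmt.Println("socket_vmnet unreachable:", err)
		return
	}
	defer conn.Close()
	fmt.Println("socket_vmnet is accepting connections")
}

If this probe fails, the fix belongs on the host (restarting the socket_vmnet service, however it is managed on this agent), not in the profile; running "minikube delete -p functional-944000", as the advice box suggests, would not bring the daemon back.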
TestFunctional/serial/ComponentHealth (0.06s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:810: (dbg) Run:  kubectl --context functional-944000 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:810: (dbg) Non-zero exit: kubectl --context functional-944000 get po -l tier=control-plane -n kube-system -o=json: exit status 1 (28.830833ms)

** stderr ** 
	error: context "functional-944000" does not exist

** /stderr **
functional_test.go:812: failed to get components. args "kubectl --context functional-944000 get po -l tier=control-plane -n kube-system -o=json": exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-944000 -n functional-944000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-944000 -n functional-944000: exit status 7 (31.3555ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-944000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/ComponentHealth (0.06s)

TestFunctional/serial/LogsCmd (0.08s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1236: (dbg) Run:  out/minikube-darwin-arm64 -p functional-944000 logs
functional_test.go:1236: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-944000 logs: exit status 83 (78.71675ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                   Args                                   |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only                                                  | download-only-927000 | jenkins | v1.33.1 | 19 Aug 24 11:31 PDT |                     |
	|         | -p download-only-927000                                                  |                      |         |         |                     |                     |
	|         | --force --alsologtostderr                                                |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                                             |                      |         |         |                     |                     |
	|         | --container-runtime=docker                                               |                      |         |         |                     |                     |
	|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
	| delete  | --all                                                                    | minikube             | jenkins | v1.33.1 | 19 Aug 24 11:31 PDT | 19 Aug 24 11:31 PDT |
	| delete  | -p download-only-927000                                                  | download-only-927000 | jenkins | v1.33.1 | 19 Aug 24 11:31 PDT | 19 Aug 24 11:31 PDT |
	| start   | -o=json --download-only                                                  | download-only-333000 | jenkins | v1.33.1 | 19 Aug 24 11:31 PDT |                     |
	|         | -p download-only-333000                                                  |                      |         |         |                     |                     |
	|         | --force --alsologtostderr                                                |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                                             |                      |         |         |                     |                     |
	|         | --container-runtime=docker                                               |                      |         |         |                     |                     |
	|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
	| delete  | --all                                                                    | minikube             | jenkins | v1.33.1 | 19 Aug 24 11:31 PDT | 19 Aug 24 11:31 PDT |
	| delete  | -p download-only-333000                                                  | download-only-333000 | jenkins | v1.33.1 | 19 Aug 24 11:31 PDT | 19 Aug 24 11:31 PDT |
	| delete  | -p download-only-927000                                                  | download-only-927000 | jenkins | v1.33.1 | 19 Aug 24 11:31 PDT | 19 Aug 24 11:31 PDT |
	| delete  | -p download-only-333000                                                  | download-only-333000 | jenkins | v1.33.1 | 19 Aug 24 11:31 PDT | 19 Aug 24 11:31 PDT |
	| start   | --download-only -p                                                       | binary-mirror-275000 | jenkins | v1.33.1 | 19 Aug 24 11:31 PDT |                     |
	|         | binary-mirror-275000                                                     |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                        |                      |         |         |                     |                     |
	|         | --binary-mirror                                                          |                      |         |         |                     |                     |
	|         | http://127.0.0.1:52924                                                   |                      |         |         |                     |                     |
	|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-275000                                                  | binary-mirror-275000 | jenkins | v1.33.1 | 19 Aug 24 11:31 PDT | 19 Aug 24 11:31 PDT |
	| addons  | enable dashboard -p                                                      | addons-698000        | jenkins | v1.33.1 | 19 Aug 24 11:31 PDT |                     |
	|         | addons-698000                                                            |                      |         |         |                     |                     |
	| addons  | disable dashboard -p                                                     | addons-698000        | jenkins | v1.33.1 | 19 Aug 24 11:31 PDT |                     |
	|         | addons-698000                                                            |                      |         |         |                     |                     |
	| start   | -p addons-698000 --wait=true                                             | addons-698000        | jenkins | v1.33.1 | 19 Aug 24 11:31 PDT |                     |
	|         | --memory=4000 --alsologtostderr                                          |                      |         |         |                     |                     |
	|         | --addons=registry                                                        |                      |         |         |                     |                     |
	|         | --addons=metrics-server                                                  |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                 |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                             |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                                                        |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                   |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                     |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                            |                      |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                           |                      |         |         |                     |                     |
	|         | --driver=qemu2  --addons=ingress                                         |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                                                     |                      |         |         |                     |                     |
	| delete  | -p addons-698000                                                         | addons-698000        | jenkins | v1.33.1 | 19 Aug 24 11:31 PDT | 19 Aug 24 11:31 PDT |
	| start   | -p nospam-555000 -n=1 --memory=2250 --wait=false                         | nospam-555000        | jenkins | v1.33.1 | 19 Aug 24 11:31 PDT |                     |
	|         | --log_dir=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-555000 |                      |         |         |                     |                     |
	|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
	| start   | nospam-555000 --log_dir                                                  | nospam-555000        | jenkins | v1.33.1 | 19 Aug 24 11:31 PDT |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-555000           |                      |         |         |                     |                     |
	|         | start --dry-run                                                          |                      |         |         |                     |                     |
	| start   | nospam-555000 --log_dir                                                  | nospam-555000        | jenkins | v1.33.1 | 19 Aug 24 11:31 PDT |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-555000           |                      |         |         |                     |                     |
	|         | start --dry-run                                                          |                      |         |         |                     |                     |
	| start   | nospam-555000 --log_dir                                                  | nospam-555000        | jenkins | v1.33.1 | 19 Aug 24 11:31 PDT |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-555000           |                      |         |         |                     |                     |
	|         | start --dry-run                                                          |                      |         |         |                     |                     |
	| pause   | nospam-555000 --log_dir                                                  | nospam-555000        | jenkins | v1.33.1 | 19 Aug 24 11:31 PDT |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-555000           |                      |         |         |                     |                     |
	|         | pause                                                                    |                      |         |         |                     |                     |
	| pause   | nospam-555000 --log_dir                                                  | nospam-555000        | jenkins | v1.33.1 | 19 Aug 24 11:31 PDT |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-555000           |                      |         |         |                     |                     |
	|         | pause                                                                    |                      |         |         |                     |                     |
	| pause   | nospam-555000 --log_dir                                                  | nospam-555000        | jenkins | v1.33.1 | 19 Aug 24 11:31 PDT |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-555000           |                      |         |         |                     |                     |
	|         | pause                                                                    |                      |         |         |                     |                     |
	| unpause | nospam-555000 --log_dir                                                  | nospam-555000        | jenkins | v1.33.1 | 19 Aug 24 11:31 PDT |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-555000           |                      |         |         |                     |                     |
	|         | unpause                                                                  |                      |         |         |                     |                     |
	| unpause | nospam-555000 --log_dir                                                  | nospam-555000        | jenkins | v1.33.1 | 19 Aug 24 11:31 PDT |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-555000           |                      |         |         |                     |                     |
	|         | unpause                                                                  |                      |         |         |                     |                     |
	| unpause | nospam-555000 --log_dir                                                  | nospam-555000        | jenkins | v1.33.1 | 19 Aug 24 11:31 PDT |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-555000           |                      |         |         |                     |                     |
	|         | unpause                                                                  |                      |         |         |                     |                     |
	| stop    | nospam-555000 --log_dir                                                  | nospam-555000        | jenkins | v1.33.1 | 19 Aug 24 11:31 PDT | 19 Aug 24 11:32 PDT |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-555000           |                      |         |         |                     |                     |
	|         | stop                                                                     |                      |         |         |                     |                     |
	| stop    | nospam-555000 --log_dir                                                  | nospam-555000        | jenkins | v1.33.1 | 19 Aug 24 11:32 PDT | 19 Aug 24 11:32 PDT |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-555000           |                      |         |         |                     |                     |
	|         | stop                                                                     |                      |         |         |                     |                     |
	| stop    | nospam-555000 --log_dir                                                  | nospam-555000        | jenkins | v1.33.1 | 19 Aug 24 11:32 PDT | 19 Aug 24 11:32 PDT |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-555000           |                      |         |         |                     |                     |
	|         | stop                                                                     |                      |         |         |                     |                     |
	| delete  | -p nospam-555000                                                         | nospam-555000        | jenkins | v1.33.1 | 19 Aug 24 11:32 PDT | 19 Aug 24 11:32 PDT |
	| start   | -p functional-944000                                                     | functional-944000    | jenkins | v1.33.1 | 19 Aug 24 11:32 PDT |                     |
	|         | --memory=4000                                                            |                      |         |         |                     |                     |
	|         | --apiserver-port=8441                                                    |                      |         |         |                     |                     |
	|         | --wait=all --driver=qemu2                                                |                      |         |         |                     |                     |
	| start   | -p functional-944000                                                     | functional-944000    | jenkins | v1.33.1 | 19 Aug 24 11:32 PDT |                     |
	|         | --alsologtostderr -v=8                                                   |                      |         |         |                     |                     |
	| cache   | functional-944000 cache add                                              | functional-944000    | jenkins | v1.33.1 | 19 Aug 24 11:32 PDT | 19 Aug 24 11:32 PDT |
	|         | registry.k8s.io/pause:3.1                                                |                      |         |         |                     |                     |
	| cache   | functional-944000 cache add                                              | functional-944000    | jenkins | v1.33.1 | 19 Aug 24 11:32 PDT | 19 Aug 24 11:32 PDT |
	|         | registry.k8s.io/pause:3.3                                                |                      |         |         |                     |                     |
	| cache   | functional-944000 cache add                                              | functional-944000    | jenkins | v1.33.1 | 19 Aug 24 11:32 PDT | 19 Aug 24 11:32 PDT |
	|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
	| cache   | functional-944000 cache add                                              | functional-944000    | jenkins | v1.33.1 | 19 Aug 24 11:32 PDT | 19 Aug 24 11:32 PDT |
	|         | minikube-local-cache-test:functional-944000                              |                      |         |         |                     |                     |
	| cache   | functional-944000 cache delete                                           | functional-944000    | jenkins | v1.33.1 | 19 Aug 24 11:32 PDT | 19 Aug 24 11:32 PDT |
	|         | minikube-local-cache-test:functional-944000                              |                      |         |         |                     |                     |
	| cache   | delete                                                                   | minikube             | jenkins | v1.33.1 | 19 Aug 24 11:32 PDT | 19 Aug 24 11:32 PDT |
	|         | registry.k8s.io/pause:3.3                                                |                      |         |         |                     |                     |
	| cache   | list                                                                     | minikube             | jenkins | v1.33.1 | 19 Aug 24 11:32 PDT | 19 Aug 24 11:32 PDT |
	| ssh     | functional-944000 ssh sudo                                               | functional-944000    | jenkins | v1.33.1 | 19 Aug 24 11:32 PDT |                     |
	|         | crictl images                                                            |                      |         |         |                     |                     |
	| ssh     | functional-944000                                                        | functional-944000    | jenkins | v1.33.1 | 19 Aug 24 11:32 PDT |                     |
	|         | ssh sudo docker rmi                                                      |                      |         |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
	| ssh     | functional-944000 ssh                                                    | functional-944000    | jenkins | v1.33.1 | 19 Aug 24 11:32 PDT |                     |
	|         | sudo crictl inspecti                                                     |                      |         |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
	| cache   | functional-944000 cache reload                                           | functional-944000    | jenkins | v1.33.1 | 19 Aug 24 11:32 PDT | 19 Aug 24 11:32 PDT |
	| ssh     | functional-944000 ssh                                                    | functional-944000    | jenkins | v1.33.1 | 19 Aug 24 11:32 PDT |                     |
	|         | sudo crictl inspecti                                                     |                      |         |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
	| cache   | delete                                                                   | minikube             | jenkins | v1.33.1 | 19 Aug 24 11:32 PDT | 19 Aug 24 11:32 PDT |
	|         | registry.k8s.io/pause:3.1                                                |                      |         |         |                     |                     |
	| cache   | delete                                                                   | minikube             | jenkins | v1.33.1 | 19 Aug 24 11:32 PDT | 19 Aug 24 11:32 PDT |
	|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
	| kubectl | functional-944000 kubectl --                                             | functional-944000    | jenkins | v1.33.1 | 19 Aug 24 11:32 PDT |                     |
	|         | --context functional-944000                                              |                      |         |         |                     |                     |
	|         | get pods                                                                 |                      |         |         |                     |                     |
	| start   | -p functional-944000                                                     | functional-944000    | jenkins | v1.33.1 | 19 Aug 24 11:32 PDT |                     |
	|         | --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision |                      |         |         |                     |                     |
	|         | --wait=all                                                               |                      |         |         |                     |                     |
	|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/19 11:32:25
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.22.5 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0819 11:32:25.423266   18003 out.go:345] Setting OutFile to fd 1 ...
	I0819 11:32:25.423395   18003 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 11:32:25.423397   18003 out.go:358] Setting ErrFile to fd 2...
	I0819 11:32:25.423399   18003 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 11:32:25.423524   18003 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19423-17178/.minikube/bin
	I0819 11:32:25.424586   18003 out.go:352] Setting JSON to false
	I0819 11:32:25.440755   18003 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":7312,"bootTime":1724085033,"procs":512,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0819 11:32:25.440821   18003 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0819 11:32:25.446697   18003 out.go:177] * [functional-944000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0819 11:32:25.455742   18003 out.go:177]   - MINIKUBE_LOCATION=19423
	I0819 11:32:25.455786   18003 notify.go:220] Checking for updates...
	I0819 11:32:25.464545   18003 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19423-17178/kubeconfig
	I0819 11:32:25.467630   18003 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0819 11:32:25.470612   18003 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0819 11:32:25.473647   18003 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19423-17178/.minikube
	I0819 11:32:25.476579   18003 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0819 11:32:25.479906   18003 config.go:182] Loaded profile config "functional-944000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0819 11:32:25.479953   18003 driver.go:394] Setting default libvirt URI to qemu:///system
	I0819 11:32:25.483472   18003 out.go:177] * Using the qemu2 driver based on existing profile
	I0819 11:32:25.490600   18003 start.go:297] selected driver: qemu2
	I0819 11:32:25.490606   18003 start.go:901] validating driver "qemu2" against &{Name:functional-944000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:functional-944000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 11:32:25.490674   18003 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0819 11:32:25.492893   18003 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0819 11:32:25.492930   18003 cni.go:84] Creating CNI manager for ""
	I0819 11:32:25.492937   18003 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0819 11:32:25.492975   18003 start.go:340] cluster config:
	{Name:functional-944000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:functional-944000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 11:32:25.496608   18003 iso.go:125] acquiring lock: {Name:mk19d2f9dcacbbe3e95275f970a96b2d84f09461 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 11:32:25.504527   18003 out.go:177] * Starting "functional-944000" primary control-plane node in "functional-944000" cluster
	I0819 11:32:25.508575   18003 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0819 11:32:25.508591   18003 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19423-17178/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0819 11:32:25.508603   18003 cache.go:56] Caching tarball of preloaded images
	I0819 11:32:25.508670   18003 preload.go:172] Found /Users/jenkins/minikube-integration/19423-17178/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0819 11:32:25.508674   18003 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0819 11:32:25.508739   18003 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19423-17178/.minikube/profiles/functional-944000/config.json ...
	I0819 11:32:25.509225   18003 start.go:360] acquireMachinesLock for functional-944000: {Name:mk51682daf9d132f21e1aba1b32ed96e7f05425f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 11:32:25.509261   18003 start.go:364] duration metric: took 31.834µs to acquireMachinesLock for "functional-944000"
	I0819 11:32:25.509270   18003 start.go:96] Skipping create...Using existing machine configuration
	I0819 11:32:25.509273   18003 fix.go:54] fixHost starting: 
	I0819 11:32:25.509405   18003 fix.go:112] recreateIfNeeded on functional-944000: state=Stopped err=<nil>
	W0819 11:32:25.509412   18003 fix.go:138] unexpected machine state, will restart: <nil>
	I0819 11:32:25.516591   18003 out.go:177] * Restarting existing qemu2 VM for "functional-944000" ...
	I0819 11:32:25.520523   18003 qemu.go:418] Using hvf for hardware acceleration
	I0819 11:32:25.520567   18003 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/functional-944000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19423-17178/.minikube/machines/functional-944000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/functional-944000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ba:58:f8:78:c1:c5 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/functional-944000/disk.qcow2
	I0819 11:32:25.522606   18003 main.go:141] libmachine: STDOUT: 
	I0819 11:32:25.522626   18003 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0819 11:32:25.522657   18003 fix.go:56] duration metric: took 13.38325ms for fixHost
	I0819 11:32:25.522660   18003 start.go:83] releasing machines lock for "functional-944000", held for 13.395292ms
	W0819 11:32:25.522666   18003 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0819 11:32:25.522697   18003 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0819 11:32:25.522702   18003 start.go:729] Will try again in 5 seconds ...
	I0819 11:32:30.524893   18003 start.go:360] acquireMachinesLock for functional-944000: {Name:mk51682daf9d132f21e1aba1b32ed96e7f05425f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 11:32:30.525279   18003 start.go:364] duration metric: took 296.584µs to acquireMachinesLock for "functional-944000"
	I0819 11:32:30.525424   18003 start.go:96] Skipping create...Using existing machine configuration
	I0819 11:32:30.525435   18003 fix.go:54] fixHost starting: 
	I0819 11:32:30.526138   18003 fix.go:112] recreateIfNeeded on functional-944000: state=Stopped err=<nil>
	W0819 11:32:30.526157   18003 fix.go:138] unexpected machine state, will restart: <nil>
	I0819 11:32:30.531582   18003 out.go:177] * Restarting existing qemu2 VM for "functional-944000" ...
	I0819 11:32:30.535418   18003 qemu.go:418] Using hvf for hardware acceleration
	I0819 11:32:30.535642   18003 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/functional-944000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19423-17178/.minikube/machines/functional-944000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/functional-944000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ba:58:f8:78:c1:c5 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/functional-944000/disk.qcow2
	I0819 11:32:30.544275   18003 main.go:141] libmachine: STDOUT: 
	I0819 11:32:30.544397   18003 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0819 11:32:30.544493   18003 fix.go:56] duration metric: took 19.055041ms for fixHost
	I0819 11:32:30.544506   18003 start.go:83] releasing machines lock for "functional-944000", held for 19.210625ms
	W0819 11:32:30.544703   18003 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p functional-944000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0819 11:32:30.553574   18003 out.go:201] 
	W0819 11:32:30.557647   18003 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0819 11:32:30.557677   18003 out.go:270] * 
	W0819 11:32:30.560567   18003 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0819 11:32:30.567564   18003 out.go:201] 
	
	
	* The control-plane node functional-944000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-944000"

-- /stdout --
functional_test.go:1238: out/minikube-darwin-arm64 -p functional-944000 logs failed: exit status 83
functional_test.go:1228: expected minikube logs to include word: -"Linux"- but got 
***
==> Audit <==
|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
| Command |                                   Args                                   |       Profile        |  User   | Version |     Start Time      |      End Time       |
|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
| start   | -o=json --download-only                                                  | download-only-927000 | jenkins | v1.33.1 | 19 Aug 24 11:31 PDT |                     |
|         | -p download-only-927000                                                  |                      |         |         |                     |                     |
|         | --force --alsologtostderr                                                |                      |         |         |                     |                     |
|         | --kubernetes-version=v1.20.0                                             |                      |         |         |                     |                     |
|         | --container-runtime=docker                                               |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| delete  | --all                                                                    | minikube             | jenkins | v1.33.1 | 19 Aug 24 11:31 PDT | 19 Aug 24 11:31 PDT |
| delete  | -p download-only-927000                                                  | download-only-927000 | jenkins | v1.33.1 | 19 Aug 24 11:31 PDT | 19 Aug 24 11:31 PDT |
| start   | -o=json --download-only                                                  | download-only-333000 | jenkins | v1.33.1 | 19 Aug 24 11:31 PDT |                     |
|         | -p download-only-333000                                                  |                      |         |         |                     |                     |
|         | --force --alsologtostderr                                                |                      |         |         |                     |                     |
|         | --kubernetes-version=v1.31.0                                             |                      |         |         |                     |                     |
|         | --container-runtime=docker                                               |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| delete  | --all                                                                    | minikube             | jenkins | v1.33.1 | 19 Aug 24 11:31 PDT | 19 Aug 24 11:31 PDT |
| delete  | -p download-only-333000                                                  | download-only-333000 | jenkins | v1.33.1 | 19 Aug 24 11:31 PDT | 19 Aug 24 11:31 PDT |
| delete  | -p download-only-927000                                                  | download-only-927000 | jenkins | v1.33.1 | 19 Aug 24 11:31 PDT | 19 Aug 24 11:31 PDT |
| delete  | -p download-only-333000                                                  | download-only-333000 | jenkins | v1.33.1 | 19 Aug 24 11:31 PDT | 19 Aug 24 11:31 PDT |
| start   | --download-only -p                                                       | binary-mirror-275000 | jenkins | v1.33.1 | 19 Aug 24 11:31 PDT |                     |
|         | binary-mirror-275000                                                     |                      |         |         |                     |                     |
|         | --alsologtostderr                                                        |                      |         |         |                     |                     |
|         | --binary-mirror                                                          |                      |         |         |                     |                     |
|         | http://127.0.0.1:52924                                                   |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| delete  | -p binary-mirror-275000                                                  | binary-mirror-275000 | jenkins | v1.33.1 | 19 Aug 24 11:31 PDT | 19 Aug 24 11:31 PDT |
| addons  | enable dashboard -p                                                      | addons-698000        | jenkins | v1.33.1 | 19 Aug 24 11:31 PDT |                     |
|         | addons-698000                                                            |                      |         |         |                     |                     |
| addons  | disable dashboard -p                                                     | addons-698000        | jenkins | v1.33.1 | 19 Aug 24 11:31 PDT |                     |
|         | addons-698000                                                            |                      |         |         |                     |                     |
| start   | -p addons-698000 --wait=true                                             | addons-698000        | jenkins | v1.33.1 | 19 Aug 24 11:31 PDT |                     |
|         | --memory=4000 --alsologtostderr                                          |                      |         |         |                     |                     |
|         | --addons=registry                                                        |                      |         |         |                     |                     |
|         | --addons=metrics-server                                                  |                      |         |         |                     |                     |
|         | --addons=volumesnapshots                                                 |                      |         |         |                     |                     |
|         | --addons=csi-hostpath-driver                                             |                      |         |         |                     |                     |
|         | --addons=gcp-auth                                                        |                      |         |         |                     |                     |
|         | --addons=cloud-spanner                                                   |                      |         |         |                     |                     |
|         | --addons=inspektor-gadget                                                |                      |         |         |                     |                     |
|         | --addons=storage-provisioner-rancher                                     |                      |         |         |                     |                     |
|         | --addons=nvidia-device-plugin                                            |                      |         |         |                     |                     |
|         | --addons=yakd --addons=volcano                                           |                      |         |         |                     |                     |
|         | --driver=qemu2  --addons=ingress                                         |                      |         |         |                     |                     |
|         | --addons=ingress-dns                                                     |                      |         |         |                     |                     |
| delete  | -p addons-698000                                                         | addons-698000        | jenkins | v1.33.1 | 19 Aug 24 11:31 PDT | 19 Aug 24 11:31 PDT |
| start   | -p nospam-555000 -n=1 --memory=2250 --wait=false                         | nospam-555000        | jenkins | v1.33.1 | 19 Aug 24 11:31 PDT |                     |
|         | --log_dir=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-555000 |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| start   | nospam-555000 --log_dir                                                  | nospam-555000        | jenkins | v1.33.1 | 19 Aug 24 11:31 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-555000           |                      |         |         |                     |                     |
|         | start --dry-run                                                          |                      |         |         |                     |                     |
| start   | nospam-555000 --log_dir                                                  | nospam-555000        | jenkins | v1.33.1 | 19 Aug 24 11:31 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-555000           |                      |         |         |                     |                     |
|         | start --dry-run                                                          |                      |         |         |                     |                     |
| start   | nospam-555000 --log_dir                                                  | nospam-555000        | jenkins | v1.33.1 | 19 Aug 24 11:31 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-555000           |                      |         |         |                     |                     |
|         | start --dry-run                                                          |                      |         |         |                     |                     |
| pause   | nospam-555000 --log_dir                                                  | nospam-555000        | jenkins | v1.33.1 | 19 Aug 24 11:31 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-555000           |                      |         |         |                     |                     |
|         | pause                                                                    |                      |         |         |                     |                     |
| pause   | nospam-555000 --log_dir                                                  | nospam-555000        | jenkins | v1.33.1 | 19 Aug 24 11:31 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-555000           |                      |         |         |                     |                     |
|         | pause                                                                    |                      |         |         |                     |                     |
| pause   | nospam-555000 --log_dir                                                  | nospam-555000        | jenkins | v1.33.1 | 19 Aug 24 11:31 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-555000           |                      |         |         |                     |                     |
|         | pause                                                                    |                      |         |         |                     |                     |
| unpause | nospam-555000 --log_dir                                                  | nospam-555000        | jenkins | v1.33.1 | 19 Aug 24 11:31 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-555000           |                      |         |         |                     |                     |
|         | unpause                                                                  |                      |         |         |                     |                     |
| unpause | nospam-555000 --log_dir                                                  | nospam-555000        | jenkins | v1.33.1 | 19 Aug 24 11:31 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-555000           |                      |         |         |                     |                     |
|         | unpause                                                                  |                      |         |         |                     |                     |
| unpause | nospam-555000 --log_dir                                                  | nospam-555000        | jenkins | v1.33.1 | 19 Aug 24 11:31 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-555000           |                      |         |         |                     |                     |
|         | unpause                                                                  |                      |         |         |                     |                     |
| stop    | nospam-555000 --log_dir                                                  | nospam-555000        | jenkins | v1.33.1 | 19 Aug 24 11:31 PDT | 19 Aug 24 11:32 PDT |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-555000           |                      |         |         |                     |                     |
|         | stop                                                                     |                      |         |         |                     |                     |
| stop    | nospam-555000 --log_dir                                                  | nospam-555000        | jenkins | v1.33.1 | 19 Aug 24 11:32 PDT | 19 Aug 24 11:32 PDT |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-555000           |                      |         |         |                     |                     |
|         | stop                                                                     |                      |         |         |                     |                     |
| stop    | nospam-555000 --log_dir                                                  | nospam-555000        | jenkins | v1.33.1 | 19 Aug 24 11:32 PDT | 19 Aug 24 11:32 PDT |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-555000           |                      |         |         |                     |                     |
|         | stop                                                                     |                      |         |         |                     |                     |
| delete  | -p nospam-555000                                                         | nospam-555000        | jenkins | v1.33.1 | 19 Aug 24 11:32 PDT | 19 Aug 24 11:32 PDT |
| start   | -p functional-944000                                                     | functional-944000    | jenkins | v1.33.1 | 19 Aug 24 11:32 PDT |                     |
|         | --memory=4000                                                            |                      |         |         |                     |                     |
|         | --apiserver-port=8441                                                    |                      |         |         |                     |                     |
|         | --wait=all --driver=qemu2                                                |                      |         |         |                     |                     |
| start   | -p functional-944000                                                     | functional-944000    | jenkins | v1.33.1 | 19 Aug 24 11:32 PDT |                     |
|         | --alsologtostderr -v=8                                                   |                      |         |         |                     |                     |
| cache   | functional-944000 cache add                                              | functional-944000    | jenkins | v1.33.1 | 19 Aug 24 11:32 PDT | 19 Aug 24 11:32 PDT |
|         | registry.k8s.io/pause:3.1                                                |                      |         |         |                     |                     |
| cache   | functional-944000 cache add                                              | functional-944000    | jenkins | v1.33.1 | 19 Aug 24 11:32 PDT | 19 Aug 24 11:32 PDT |
|         | registry.k8s.io/pause:3.3                                                |                      |         |         |                     |                     |
| cache   | functional-944000 cache add                                              | functional-944000    | jenkins | v1.33.1 | 19 Aug 24 11:32 PDT | 19 Aug 24 11:32 PDT |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| cache   | functional-944000 cache add                                              | functional-944000    | jenkins | v1.33.1 | 19 Aug 24 11:32 PDT | 19 Aug 24 11:32 PDT |
|         | minikube-local-cache-test:functional-944000                              |                      |         |         |                     |                     |
| cache   | functional-944000 cache delete                                           | functional-944000    | jenkins | v1.33.1 | 19 Aug 24 11:32 PDT | 19 Aug 24 11:32 PDT |
|         | minikube-local-cache-test:functional-944000                              |                      |         |         |                     |                     |
| cache   | delete                                                                   | minikube             | jenkins | v1.33.1 | 19 Aug 24 11:32 PDT | 19 Aug 24 11:32 PDT |
|         | registry.k8s.io/pause:3.3                                                |                      |         |         |                     |                     |
| cache   | list                                                                     | minikube             | jenkins | v1.33.1 | 19 Aug 24 11:32 PDT | 19 Aug 24 11:32 PDT |
| ssh     | functional-944000 ssh sudo                                               | functional-944000    | jenkins | v1.33.1 | 19 Aug 24 11:32 PDT |                     |
|         | crictl images                                                            |                      |         |         |                     |                     |
| ssh     | functional-944000                                                        | functional-944000    | jenkins | v1.33.1 | 19 Aug 24 11:32 PDT |                     |
|         | ssh sudo docker rmi                                                      |                      |         |         |                     |                     |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| ssh     | functional-944000 ssh                                                    | functional-944000    | jenkins | v1.33.1 | 19 Aug 24 11:32 PDT |                     |
|         | sudo crictl inspecti                                                     |                      |         |         |                     |                     |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| cache   | functional-944000 cache reload                                           | functional-944000    | jenkins | v1.33.1 | 19 Aug 24 11:32 PDT | 19 Aug 24 11:32 PDT |
| ssh     | functional-944000 ssh                                                    | functional-944000    | jenkins | v1.33.1 | 19 Aug 24 11:32 PDT |                     |
|         | sudo crictl inspecti                                                     |                      |         |         |                     |                     |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| cache   | delete                                                                   | minikube             | jenkins | v1.33.1 | 19 Aug 24 11:32 PDT | 19 Aug 24 11:32 PDT |
|         | registry.k8s.io/pause:3.1                                                |                      |         |         |                     |                     |
| cache   | delete                                                                   | minikube             | jenkins | v1.33.1 | 19 Aug 24 11:32 PDT | 19 Aug 24 11:32 PDT |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| kubectl | functional-944000 kubectl --                                             | functional-944000    | jenkins | v1.33.1 | 19 Aug 24 11:32 PDT |                     |
|         | --context functional-944000                                              |                      |         |         |                     |                     |
|         | get pods                                                                 |                      |         |         |                     |                     |
| start   | -p functional-944000                                                     | functional-944000    | jenkins | v1.33.1 | 19 Aug 24 11:32 PDT |                     |
|         | --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision |                      |         |         |                     |                     |
|         | --wait=all                                                               |                      |         |         |                     |                     |
|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|

==> Last Start <==
Log file created at: 2024/08/19 11:32:25
Running on machine: MacOS-M1-Agent-1
Binary: Built with gc go1.22.5 for darwin/arm64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I0819 11:32:25.423266   18003 out.go:345] Setting OutFile to fd 1 ...
I0819 11:32:25.423395   18003 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0819 11:32:25.423397   18003 out.go:358] Setting ErrFile to fd 2...
I0819 11:32:25.423399   18003 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0819 11:32:25.423524   18003 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19423-17178/.minikube/bin
I0819 11:32:25.424586   18003 out.go:352] Setting JSON to false
I0819 11:32:25.440755   18003 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":7312,"bootTime":1724085033,"procs":512,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
W0819 11:32:25.440821   18003 start.go:137] gopshost.Virtualization returned error: not implemented yet
I0819 11:32:25.446697   18003 out.go:177] * [functional-944000] minikube v1.33.1 on Darwin 14.5 (arm64)
I0819 11:32:25.455742   18003 out.go:177]   - MINIKUBE_LOCATION=19423
I0819 11:32:25.455786   18003 notify.go:220] Checking for updates...
I0819 11:32:25.464545   18003 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19423-17178/kubeconfig
I0819 11:32:25.467630   18003 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
I0819 11:32:25.470612   18003 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I0819 11:32:25.473647   18003 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19423-17178/.minikube
I0819 11:32:25.476579   18003 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
I0819 11:32:25.479906   18003 config.go:182] Loaded profile config "functional-944000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0819 11:32:25.479953   18003 driver.go:394] Setting default libvirt URI to qemu:///system
I0819 11:32:25.483472   18003 out.go:177] * Using the qemu2 driver based on existing profile
I0819 11:32:25.490600   18003 start.go:297] selected driver: qemu2
I0819 11:32:25.490606   18003 start.go:901] validating driver "qemu2" against &{Name:functional-944000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:functional-944000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0819 11:32:25.490674   18003 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I0819 11:32:25.492893   18003 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I0819 11:32:25.492930   18003 cni.go:84] Creating CNI manager for ""
I0819 11:32:25.492937   18003 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
I0819 11:32:25.492975   18003 start.go:340] cluster config:
{Name:functional-944000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:functional-944000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0819 11:32:25.496608   18003 iso.go:125] acquiring lock: {Name:mk19d2f9dcacbbe3e95275f970a96b2d84f09461 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0819 11:32:25.504527   18003 out.go:177] * Starting "functional-944000" primary control-plane node in "functional-944000" cluster
I0819 11:32:25.508575   18003 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
I0819 11:32:25.508591   18003 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19423-17178/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
I0819 11:32:25.508603   18003 cache.go:56] Caching tarball of preloaded images
I0819 11:32:25.508670   18003 preload.go:172] Found /Users/jenkins/minikube-integration/19423-17178/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
I0819 11:32:25.508674   18003 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
I0819 11:32:25.508739   18003 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19423-17178/.minikube/profiles/functional-944000/config.json ...
I0819 11:32:25.509225   18003 start.go:360] acquireMachinesLock for functional-944000: {Name:mk51682daf9d132f21e1aba1b32ed96e7f05425f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0819 11:32:25.509261   18003 start.go:364] duration metric: took 31.834µs to acquireMachinesLock for "functional-944000"
I0819 11:32:25.509270   18003 start.go:96] Skipping create...Using existing machine configuration
I0819 11:32:25.509273   18003 fix.go:54] fixHost starting: 
I0819 11:32:25.509405   18003 fix.go:112] recreateIfNeeded on functional-944000: state=Stopped err=<nil>
W0819 11:32:25.509412   18003 fix.go:138] unexpected machine state, will restart: <nil>
I0819 11:32:25.516591   18003 out.go:177] * Restarting existing qemu2 VM for "functional-944000" ...
I0819 11:32:25.520523   18003 qemu.go:418] Using hvf for hardware acceleration
I0819 11:32:25.520567   18003 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/functional-944000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19423-17178/.minikube/machines/functional-944000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/functional-944000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ba:58:f8:78:c1:c5 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/functional-944000/disk.qcow2
I0819 11:32:25.522606   18003 main.go:141] libmachine: STDOUT: 
I0819 11:32:25.522626   18003 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused

I0819 11:32:25.522657   18003 fix.go:56] duration metric: took 13.38325ms for fixHost
I0819 11:32:25.522660   18003 start.go:83] releasing machines lock for "functional-944000", held for 13.395292ms
W0819 11:32:25.522666   18003 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
W0819 11:32:25.522697   18003 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
I0819 11:32:25.522702   18003 start.go:729] Will try again in 5 seconds ...
I0819 11:32:30.524893   18003 start.go:360] acquireMachinesLock for functional-944000: {Name:mk51682daf9d132f21e1aba1b32ed96e7f05425f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0819 11:32:30.525279   18003 start.go:364] duration metric: took 296.584µs to acquireMachinesLock for "functional-944000"
I0819 11:32:30.525424   18003 start.go:96] Skipping create...Using existing machine configuration
I0819 11:32:30.525435   18003 fix.go:54] fixHost starting: 
I0819 11:32:30.526138   18003 fix.go:112] recreateIfNeeded on functional-944000: state=Stopped err=<nil>
W0819 11:32:30.526157   18003 fix.go:138] unexpected machine state, will restart: <nil>
I0819 11:32:30.531582   18003 out.go:177] * Restarting existing qemu2 VM for "functional-944000" ...
I0819 11:32:30.535418   18003 qemu.go:418] Using hvf for hardware acceleration
I0819 11:32:30.535642   18003 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/functional-944000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19423-17178/.minikube/machines/functional-944000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/functional-944000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ba:58:f8:78:c1:c5 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/functional-944000/disk.qcow2
I0819 11:32:30.544275   18003 main.go:141] libmachine: STDOUT: 
I0819 11:32:30.544397   18003 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused

I0819 11:32:30.544493   18003 fix.go:56] duration metric: took 19.055041ms for fixHost
I0819 11:32:30.544506   18003 start.go:83] releasing machines lock for "functional-944000", held for 19.210625ms
W0819 11:32:30.544703   18003 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p functional-944000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
I0819 11:32:30.553574   18003 out.go:201] 
W0819 11:32:30.557647   18003 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
W0819 11:32:30.557677   18003 out.go:270] * 
W0819 11:32:30.560567   18003 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                             │
│    * If the above advice does not help, please let us know:                                 │
│      https://github.com/kubernetes/minikube/issues/new/choose                               │
│                                                                                             │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
│                                                                                             │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
I0819 11:32:30.567564   18003 out.go:201] 

* The control-plane node functional-944000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-944000"
***
--- FAIL: TestFunctional/serial/LogsCmd (0.08s)
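Note: every failed start in this run shares one root cause: the qemu2 driver cannot reach the socket_vmnet control socket ('Failed to connect to "/var/run/socket_vmnet": Connection refused'). A minimal host-side triage sketch follows; the paths come from the qemu invocation logged above, while the service-management commands are assumptions that depend on how socket_vmnet was installed on this agent:

  $ ls -l /var/run/socket_vmnet               # does the control socket exist?
  $ pgrep -fl socket_vmnet                    # is the socket_vmnet daemon running?
  $ sudo launchctl list | grep socket_vmnet   # loaded as a launchd service? (assumption)

If the daemon is down, restarting it by whatever method it was installed with, then re-running out/minikube-darwin-arm64 start -p functional-944000, is the usual recovery path.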

TestFunctional/serial/LogsFileCmd (0.07s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1250: (dbg) Run:  out/minikube-darwin-arm64 -p functional-944000 logs --file /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalserialLogsFileCmd1852374041/001/logs.txt
functional_test.go:1228: expected minikube logs to include word: -"Linux"- but got 
***
==> Audit <==
|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
| Command |                                   Args                                   |       Profile        |  User   | Version |     Start Time      |      End Time       |
|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
| start   | -o=json --download-only                                                  | download-only-927000 | jenkins | v1.33.1 | 19 Aug 24 11:31 PDT |                     |
|         | -p download-only-927000                                                  |                      |         |         |                     |                     |
|         | --force --alsologtostderr                                                |                      |         |         |                     |                     |
|         | --kubernetes-version=v1.20.0                                             |                      |         |         |                     |                     |
|         | --container-runtime=docker                                               |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| delete  | --all                                                                    | minikube             | jenkins | v1.33.1 | 19 Aug 24 11:31 PDT | 19 Aug 24 11:31 PDT |
| delete  | -p download-only-927000                                                  | download-only-927000 | jenkins | v1.33.1 | 19 Aug 24 11:31 PDT | 19 Aug 24 11:31 PDT |
| start   | -o=json --download-only                                                  | download-only-333000 | jenkins | v1.33.1 | 19 Aug 24 11:31 PDT |                     |
|         | -p download-only-333000                                                  |                      |         |         |                     |                     |
|         | --force --alsologtostderr                                                |                      |         |         |                     |                     |
|         | --kubernetes-version=v1.31.0                                             |                      |         |         |                     |                     |
|         | --container-runtime=docker                                               |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| delete  | --all                                                                    | minikube             | jenkins | v1.33.1 | 19 Aug 24 11:31 PDT | 19 Aug 24 11:31 PDT |
| delete  | -p download-only-333000                                                  | download-only-333000 | jenkins | v1.33.1 | 19 Aug 24 11:31 PDT | 19 Aug 24 11:31 PDT |
| delete  | -p download-only-927000                                                  | download-only-927000 | jenkins | v1.33.1 | 19 Aug 24 11:31 PDT | 19 Aug 24 11:31 PDT |
| delete  | -p download-only-333000                                                  | download-only-333000 | jenkins | v1.33.1 | 19 Aug 24 11:31 PDT | 19 Aug 24 11:31 PDT |
| start   | --download-only -p                                                       | binary-mirror-275000 | jenkins | v1.33.1 | 19 Aug 24 11:31 PDT |                     |
|         | binary-mirror-275000                                                     |                      |         |         |                     |                     |
|         | --alsologtostderr                                                        |                      |         |         |                     |                     |
|         | --binary-mirror                                                          |                      |         |         |                     |                     |
|         | http://127.0.0.1:52924                                                   |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| delete  | -p binary-mirror-275000                                                  | binary-mirror-275000 | jenkins | v1.33.1 | 19 Aug 24 11:31 PDT | 19 Aug 24 11:31 PDT |
| addons  | enable dashboard -p                                                      | addons-698000        | jenkins | v1.33.1 | 19 Aug 24 11:31 PDT |                     |
|         | addons-698000                                                            |                      |         |         |                     |                     |
| addons  | disable dashboard -p                                                     | addons-698000        | jenkins | v1.33.1 | 19 Aug 24 11:31 PDT |                     |
|         | addons-698000                                                            |                      |         |         |                     |                     |
| start   | -p addons-698000 --wait=true                                             | addons-698000        | jenkins | v1.33.1 | 19 Aug 24 11:31 PDT |                     |
|         | --memory=4000 --alsologtostderr                                          |                      |         |         |                     |                     |
|         | --addons=registry                                                        |                      |         |         |                     |                     |
|         | --addons=metrics-server                                                  |                      |         |         |                     |                     |
|         | --addons=volumesnapshots                                                 |                      |         |         |                     |                     |
|         | --addons=csi-hostpath-driver                                             |                      |         |         |                     |                     |
|         | --addons=gcp-auth                                                        |                      |         |         |                     |                     |
|         | --addons=cloud-spanner                                                   |                      |         |         |                     |                     |
|         | --addons=inspektor-gadget                                                |                      |         |         |                     |                     |
|         | --addons=storage-provisioner-rancher                                     |                      |         |         |                     |                     |
|         | --addons=nvidia-device-plugin                                            |                      |         |         |                     |                     |
|         | --addons=yakd --addons=volcano                                           |                      |         |         |                     |                     |
|         | --driver=qemu2  --addons=ingress                                         |                      |         |         |                     |                     |
|         | --addons=ingress-dns                                                     |                      |         |         |                     |                     |
| delete  | -p addons-698000                                                         | addons-698000        | jenkins | v1.33.1 | 19 Aug 24 11:31 PDT | 19 Aug 24 11:31 PDT |
| start   | -p nospam-555000 -n=1 --memory=2250 --wait=false                         | nospam-555000        | jenkins | v1.33.1 | 19 Aug 24 11:31 PDT |                     |
|         | --log_dir=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-555000 |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| start   | nospam-555000 --log_dir                                                  | nospam-555000        | jenkins | v1.33.1 | 19 Aug 24 11:31 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-555000           |                      |         |         |                     |                     |
|         | start --dry-run                                                          |                      |         |         |                     |                     |
| start   | nospam-555000 --log_dir                                                  | nospam-555000        | jenkins | v1.33.1 | 19 Aug 24 11:31 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-555000           |                      |         |         |                     |                     |
|         | start --dry-run                                                          |                      |         |         |                     |                     |
| start   | nospam-555000 --log_dir                                                  | nospam-555000        | jenkins | v1.33.1 | 19 Aug 24 11:31 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-555000           |                      |         |         |                     |                     |
|         | start --dry-run                                                          |                      |         |         |                     |                     |
| pause   | nospam-555000 --log_dir                                                  | nospam-555000        | jenkins | v1.33.1 | 19 Aug 24 11:31 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-555000           |                      |         |         |                     |                     |
|         | pause                                                                    |                      |         |         |                     |                     |
| pause   | nospam-555000 --log_dir                                                  | nospam-555000        | jenkins | v1.33.1 | 19 Aug 24 11:31 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-555000           |                      |         |         |                     |                     |
|         | pause                                                                    |                      |         |         |                     |                     |
| pause   | nospam-555000 --log_dir                                                  | nospam-555000        | jenkins | v1.33.1 | 19 Aug 24 11:31 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-555000           |                      |         |         |                     |                     |
|         | pause                                                                    |                      |         |         |                     |                     |
| unpause | nospam-555000 --log_dir                                                  | nospam-555000        | jenkins | v1.33.1 | 19 Aug 24 11:31 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-555000           |                      |         |         |                     |                     |
|         | unpause                                                                  |                      |         |         |                     |                     |
| unpause | nospam-555000 --log_dir                                                  | nospam-555000        | jenkins | v1.33.1 | 19 Aug 24 11:31 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-555000           |                      |         |         |                     |                     |
|         | unpause                                                                  |                      |         |         |                     |                     |
| unpause | nospam-555000 --log_dir                                                  | nospam-555000        | jenkins | v1.33.1 | 19 Aug 24 11:31 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-555000           |                      |         |         |                     |                     |
|         | unpause                                                                  |                      |         |         |                     |                     |
| stop    | nospam-555000 --log_dir                                                  | nospam-555000        | jenkins | v1.33.1 | 19 Aug 24 11:31 PDT | 19 Aug 24 11:32 PDT |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-555000           |                      |         |         |                     |                     |
|         | stop                                                                     |                      |         |         |                     |                     |
| stop    | nospam-555000 --log_dir                                                  | nospam-555000        | jenkins | v1.33.1 | 19 Aug 24 11:32 PDT | 19 Aug 24 11:32 PDT |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-555000           |                      |         |         |                     |                     |
|         | stop                                                                     |                      |         |         |                     |                     |
| stop    | nospam-555000 --log_dir                                                  | nospam-555000        | jenkins | v1.33.1 | 19 Aug 24 11:32 PDT | 19 Aug 24 11:32 PDT |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-555000           |                      |         |         |                     |                     |
|         | stop                                                                     |                      |         |         |                     |                     |
| delete  | -p nospam-555000                                                         | nospam-555000        | jenkins | v1.33.1 | 19 Aug 24 11:32 PDT | 19 Aug 24 11:32 PDT |
| start   | -p functional-944000                                                     | functional-944000    | jenkins | v1.33.1 | 19 Aug 24 11:32 PDT |                     |
|         | --memory=4000                                                            |                      |         |         |                     |                     |
|         | --apiserver-port=8441                                                    |                      |         |         |                     |                     |
|         | --wait=all --driver=qemu2                                                |                      |         |         |                     |                     |
| start   | -p functional-944000                                                     | functional-944000    | jenkins | v1.33.1 | 19 Aug 24 11:32 PDT |                     |
|         | --alsologtostderr -v=8                                                   |                      |         |         |                     |                     |
| cache   | functional-944000 cache add                                              | functional-944000    | jenkins | v1.33.1 | 19 Aug 24 11:32 PDT | 19 Aug 24 11:32 PDT |
|         | registry.k8s.io/pause:3.1                                                |                      |         |         |                     |                     |
| cache   | functional-944000 cache add                                              | functional-944000    | jenkins | v1.33.1 | 19 Aug 24 11:32 PDT | 19 Aug 24 11:32 PDT |
|         | registry.k8s.io/pause:3.3                                                |                      |         |         |                     |                     |
| cache   | functional-944000 cache add                                              | functional-944000    | jenkins | v1.33.1 | 19 Aug 24 11:32 PDT | 19 Aug 24 11:32 PDT |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| cache   | functional-944000 cache add                                              | functional-944000    | jenkins | v1.33.1 | 19 Aug 24 11:32 PDT | 19 Aug 24 11:32 PDT |
|         | minikube-local-cache-test:functional-944000                              |                      |         |         |                     |                     |
| cache   | functional-944000 cache delete                                           | functional-944000    | jenkins | v1.33.1 | 19 Aug 24 11:32 PDT | 19 Aug 24 11:32 PDT |
|         | minikube-local-cache-test:functional-944000                              |                      |         |         |                     |                     |
| cache   | delete                                                                   | minikube             | jenkins | v1.33.1 | 19 Aug 24 11:32 PDT | 19 Aug 24 11:32 PDT |
|         | registry.k8s.io/pause:3.3                                                |                      |         |         |                     |                     |
| cache   | list                                                                     | minikube             | jenkins | v1.33.1 | 19 Aug 24 11:32 PDT | 19 Aug 24 11:32 PDT |
| ssh     | functional-944000 ssh sudo                                               | functional-944000    | jenkins | v1.33.1 | 19 Aug 24 11:32 PDT |                     |
|         | crictl images                                                            |                      |         |         |                     |                     |
| ssh     | functional-944000                                                        | functional-944000    | jenkins | v1.33.1 | 19 Aug 24 11:32 PDT |                     |
|         | ssh sudo docker rmi                                                      |                      |         |         |                     |                     |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| ssh     | functional-944000 ssh                                                    | functional-944000    | jenkins | v1.33.1 | 19 Aug 24 11:32 PDT |                     |
|         | sudo crictl inspecti                                                     |                      |         |         |                     |                     |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| cache   | functional-944000 cache reload                                           | functional-944000    | jenkins | v1.33.1 | 19 Aug 24 11:32 PDT | 19 Aug 24 11:32 PDT |
| ssh     | functional-944000 ssh                                                    | functional-944000    | jenkins | v1.33.1 | 19 Aug 24 11:32 PDT |                     |
|         | sudo crictl inspecti                                                     |                      |         |         |                     |                     |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| cache   | delete                                                                   | minikube             | jenkins | v1.33.1 | 19 Aug 24 11:32 PDT | 19 Aug 24 11:32 PDT |
|         | registry.k8s.io/pause:3.1                                                |                      |         |         |                     |                     |
| cache   | delete                                                                   | minikube             | jenkins | v1.33.1 | 19 Aug 24 11:32 PDT | 19 Aug 24 11:32 PDT |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| kubectl | functional-944000 kubectl --                                             | functional-944000    | jenkins | v1.33.1 | 19 Aug 24 11:32 PDT |                     |
|         | --context functional-944000                                              |                      |         |         |                     |                     |
|         | get pods                                                                 |                      |         |         |                     |                     |
| start   | -p functional-944000                                                     | functional-944000    | jenkins | v1.33.1 | 19 Aug 24 11:32 PDT |                     |
|         | --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision |                      |         |         |                     |                     |
|         | --wait=all                                                               |                      |         |         |                     |                     |
|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|

==> Last Start <==
Log file created at: 2024/08/19 11:32:25
Running on machine: MacOS-M1-Agent-1
Binary: Built with gc go1.22.5 for darwin/arm64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I0819 11:32:25.423266   18003 out.go:345] Setting OutFile to fd 1 ...
I0819 11:32:25.423395   18003 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0819 11:32:25.423397   18003 out.go:358] Setting ErrFile to fd 2...
I0819 11:32:25.423399   18003 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0819 11:32:25.423524   18003 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19423-17178/.minikube/bin
I0819 11:32:25.424586   18003 out.go:352] Setting JSON to false
I0819 11:32:25.440755   18003 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":7312,"bootTime":1724085033,"procs":512,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
W0819 11:32:25.440821   18003 start.go:137] gopshost.Virtualization returned error: not implemented yet
I0819 11:32:25.446697   18003 out.go:177] * [functional-944000] minikube v1.33.1 on Darwin 14.5 (arm64)
I0819 11:32:25.455742   18003 out.go:177]   - MINIKUBE_LOCATION=19423
I0819 11:32:25.455786   18003 notify.go:220] Checking for updates...
I0819 11:32:25.464545   18003 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19423-17178/kubeconfig
I0819 11:32:25.467630   18003 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
I0819 11:32:25.470612   18003 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I0819 11:32:25.473647   18003 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19423-17178/.minikube
I0819 11:32:25.476579   18003 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
I0819 11:32:25.479906   18003 config.go:182] Loaded profile config "functional-944000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0819 11:32:25.479953   18003 driver.go:394] Setting default libvirt URI to qemu:///system
I0819 11:32:25.483472   18003 out.go:177] * Using the qemu2 driver based on existing profile
I0819 11:32:25.490600   18003 start.go:297] selected driver: qemu2
I0819 11:32:25.490606   18003 start.go:901] validating driver "qemu2" against &{Name:functional-944000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:functional-944000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0819 11:32:25.490674   18003 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I0819 11:32:25.492893   18003 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I0819 11:32:25.492930   18003 cni.go:84] Creating CNI manager for ""
I0819 11:32:25.492937   18003 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
I0819 11:32:25.492975   18003 start.go:340] cluster config:
{Name:functional-944000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:functional-944000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0819 11:32:25.496608   18003 iso.go:125] acquiring lock: {Name:mk19d2f9dcacbbe3e95275f970a96b2d84f09461 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0819 11:32:25.504527   18003 out.go:177] * Starting "functional-944000" primary control-plane node in "functional-944000" cluster
I0819 11:32:25.508575   18003 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
I0819 11:32:25.508591   18003 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19423-17178/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
I0819 11:32:25.508603   18003 cache.go:56] Caching tarball of preloaded images
I0819 11:32:25.508670   18003 preload.go:172] Found /Users/jenkins/minikube-integration/19423-17178/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
I0819 11:32:25.508674   18003 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
I0819 11:32:25.508739   18003 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19423-17178/.minikube/profiles/functional-944000/config.json ...
I0819 11:32:25.509225   18003 start.go:360] acquireMachinesLock for functional-944000: {Name:mk51682daf9d132f21e1aba1b32ed96e7f05425f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0819 11:32:25.509261   18003 start.go:364] duration metric: took 31.834µs to acquireMachinesLock for "functional-944000"
I0819 11:32:25.509270   18003 start.go:96] Skipping create...Using existing machine configuration
I0819 11:32:25.509273   18003 fix.go:54] fixHost starting: 
I0819 11:32:25.509405   18003 fix.go:112] recreateIfNeeded on functional-944000: state=Stopped err=<nil>
W0819 11:32:25.509412   18003 fix.go:138] unexpected machine state, will restart: <nil>
I0819 11:32:25.516591   18003 out.go:177] * Restarting existing qemu2 VM for "functional-944000" ...
I0819 11:32:25.520523   18003 qemu.go:418] Using hvf for hardware acceleration
I0819 11:32:25.520567   18003 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/functional-944000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19423-17178/.minikube/machines/functional-944000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/functional-944000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ba:58:f8:78:c1:c5 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/functional-944000/disk.qcow2
I0819 11:32:25.522606   18003 main.go:141] libmachine: STDOUT: 
I0819 11:32:25.522626   18003 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused

I0819 11:32:25.522657   18003 fix.go:56] duration metric: took 13.38325ms for fixHost
I0819 11:32:25.522660   18003 start.go:83] releasing machines lock for "functional-944000", held for 13.395292ms
W0819 11:32:25.522666   18003 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
W0819 11:32:25.522697   18003 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
I0819 11:32:25.522702   18003 start.go:729] Will try again in 5 seconds ...
I0819 11:32:30.524893   18003 start.go:360] acquireMachinesLock for functional-944000: {Name:mk51682daf9d132f21e1aba1b32ed96e7f05425f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0819 11:32:30.525279   18003 start.go:364] duration metric: took 296.584µs to acquireMachinesLock for "functional-944000"
I0819 11:32:30.525424   18003 start.go:96] Skipping create...Using existing machine configuration
I0819 11:32:30.525435   18003 fix.go:54] fixHost starting: 
I0819 11:32:30.526138   18003 fix.go:112] recreateIfNeeded on functional-944000: state=Stopped err=<nil>
W0819 11:32:30.526157   18003 fix.go:138] unexpected machine state, will restart: <nil>
I0819 11:32:30.531582   18003 out.go:177] * Restarting existing qemu2 VM for "functional-944000" ...
I0819 11:32:30.535418   18003 qemu.go:418] Using hvf for hardware acceleration
I0819 11:32:30.535642   18003 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/functional-944000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19423-17178/.minikube/machines/functional-944000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/functional-944000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ba:58:f8:78:c1:c5 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/functional-944000/disk.qcow2
I0819 11:32:30.544275   18003 main.go:141] libmachine: STDOUT: 
I0819 11:32:30.544397   18003 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused

I0819 11:32:30.544493   18003 fix.go:56] duration metric: took 19.055041ms for fixHost
I0819 11:32:30.544506   18003 start.go:83] releasing machines lock for "functional-944000", held for 19.210625ms
W0819 11:32:30.544703   18003 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p functional-944000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
I0819 11:32:30.553574   18003 out.go:201] 
W0819 11:32:30.557647   18003 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
W0819 11:32:30.557677   18003 out.go:270] * 
W0819 11:32:30.560567   18003 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                             │
│    * If the above advice does not help, please let us know:                                 │
│      https://github.com/kubernetes/minikube/issues/new/choose                               │
│                                                                                             │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
│                                                                                             │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
I0819 11:32:30.567564   18003 out.go:201] 

--- FAIL: TestFunctional/serial/LogsFileCmd (0.07s)
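Every functional-test failure that follows shares this root cause: the qemu2 VM is never restarted because nothing is accepting connections on /var/run/socket_vmnet. A minimal host-side triage sketch, assuming socket_vmnet was installed via Homebrew (the service name is an assumption, not taken from this log):

    # Confirm the control socket exists and is a UNIX socket
    ls -l /var/run/socket_vmnet

    # Reproduce the driver's failure path outside minikube: socket_vmnet_client
    # connects to the socket first, then execs the given command (a no-op here)
    /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet true

    # Restart the daemon ("socket_vmnet" as a brew service is an assumption)
    sudo brew services restart socket_vmnet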

TestFunctional/serial/InvalidService (0.03s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2321: (dbg) Run:  kubectl --context functional-944000 apply -f testdata/invalidsvc.yaml
functional_test.go:2321: (dbg) Non-zero exit: kubectl --context functional-944000 apply -f testdata/invalidsvc.yaml: exit status 1 (27.519417ms)

** stderr ** 
	error: context "functional-944000" does not exist

** /stderr **
functional_test.go:2323: kubectl --context functional-944000 apply -f testdata/invalidsvc.yaml failed: exit status 1
--- FAIL: TestFunctional/serial/InvalidService (0.03s)
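Every kubectl-based subtest below fails the same way: because start never completed, the functional-944000 context was never written to the kubeconfig. A quick check with standard kubectl, independent of this run:

    # List all contexts; functional-944000 should appear once start succeeds
    kubectl config get-contexts

    # Query just that entry; a non-zero exit confirms it is missing
    kubectl config get-contexts functional-944000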

TestFunctional/parallel/DashboardCmd (0.2s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:905: (dbg) daemon: [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-944000 --alsologtostderr -v=1]
functional_test.go:918: output didn't produce a URL
functional_test.go:910: (dbg) stopping [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-944000 --alsologtostderr -v=1] ...
functional_test.go:910: (dbg) [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-944000 --alsologtostderr -v=1] stdout:
functional_test.go:910: (dbg) [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-944000 --alsologtostderr -v=1] stderr:
I0819 11:33:07.999293   18226 out.go:345] Setting OutFile to fd 1 ...
I0819 11:33:07.999681   18226 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0819 11:33:07.999685   18226 out.go:358] Setting ErrFile to fd 2...
I0819 11:33:07.999687   18226 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0819 11:33:07.999818   18226 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19423-17178/.minikube/bin
I0819 11:33:08.000100   18226 mustload.go:65] Loading cluster: functional-944000
I0819 11:33:08.000309   18226 config.go:182] Loaded profile config "functional-944000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0819 11:33:08.002156   18226 out.go:177] * The control-plane node functional-944000 host is not running: state=Stopped
I0819 11:33:08.004914   18226 out.go:177]   To start a cluster, run: "minikube start -p functional-944000"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-944000 -n functional-944000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-944000 -n functional-944000: exit status 7 (42.043667ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-944000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/DashboardCmd (0.20s)

TestFunctional/parallel/StatusCmd (0.17s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:854: (dbg) Run:  out/minikube-darwin-arm64 -p functional-944000 status
functional_test.go:854: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-944000 status: exit status 7 (73.012583ms)

-- stdout --
	functional-944000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
functional_test.go:856: failed to run minikube status. args "out/minikube-darwin-arm64 -p functional-944000 status" : exit status 7
functional_test.go:860: (dbg) Run:  out/minikube-darwin-arm64 -p functional-944000 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:860: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-944000 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}: exit status 7 (34.326625ms)

-- stdout --
	host:Stopped,kublet:Stopped,apiserver:Stopped,kubeconfig:Stopped

-- /stdout --
functional_test.go:862: failed to run minikube status with custom format: args "out/minikube-darwin-arm64 -p functional-944000 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}": exit status 7
functional_test.go:872: (dbg) Run:  out/minikube-darwin-arm64 -p functional-944000 status -o json
functional_test.go:872: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-944000 status -o json: exit status 7 (30.721834ms)

-- stdout --
	{"Name":"functional-944000","Host":"Stopped","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Stopped","Worker":false}

-- /stdout --
functional_test.go:874: failed to run minikube status with json output. args "out/minikube-darwin-arm64 -p functional-944000 status -o json" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-944000 -n functional-944000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-944000 -n functional-944000: exit status 7 (29.700708ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-944000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/StatusCmd (0.17s)
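The custom-format invocation above renders a Go template over minikube's status struct; the "kublet" spelling is in the test's own format string and is echoed back verbatim, so it is not an output bug. The same query with the key spelled conventionally, for reference (same binary and profile as the test):

    # Render selected status fields with a Go template
    out/minikube-darwin-arm64 -p functional-944000 status \
      -f 'host:{{.Host}},kubelet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}'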

TestFunctional/parallel/ServiceCmdConnect (0.14s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1627: (dbg) Run:  kubectl --context functional-944000 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1627: (dbg) Non-zero exit: kubectl --context functional-944000 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8: exit status 1 (26.6625ms)

** stderr ** 
	error: context "functional-944000" does not exist

** /stderr **
functional_test.go:1633: failed to create hello-node deployment with this command "kubectl --context functional-944000 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8": exit status 1.
functional_test.go:1598: service test failed - dumping debug information
functional_test.go:1599: -----------------------service failure post-mortem--------------------------------
functional_test.go:1602: (dbg) Run:  kubectl --context functional-944000 describe po hello-node-connect
functional_test.go:1602: (dbg) Non-zero exit: kubectl --context functional-944000 describe po hello-node-connect: exit status 1 (26.223375ms)

** stderr ** 
	Error in configuration: context was not found for specified context: functional-944000

** /stderr **
functional_test.go:1604: "kubectl --context functional-944000 describe po hello-node-connect" failed: exit status 1
functional_test.go:1606: hello-node pod describe:
functional_test.go:1608: (dbg) Run:  kubectl --context functional-944000 logs -l app=hello-node-connect
functional_test.go:1608: (dbg) Non-zero exit: kubectl --context functional-944000 logs -l app=hello-node-connect: exit status 1 (26.52775ms)

** stderr ** 
	Error in configuration: context was not found for specified context: functional-944000

** /stderr **
functional_test.go:1610: "kubectl --context functional-944000 logs -l app=hello-node-connect" failed: exit status 1
functional_test.go:1612: hello-node logs:
functional_test.go:1614: (dbg) Run:  kubectl --context functional-944000 describe svc hello-node-connect
functional_test.go:1614: (dbg) Non-zero exit: kubectl --context functional-944000 describe svc hello-node-connect: exit status 1 (26.318917ms)

** stderr ** 
	Error in configuration: context was not found for specified context: functional-944000

** /stderr **
functional_test.go:1616: "kubectl --context functional-944000 describe svc hello-node-connect" failed: exit status 1
functional_test.go:1618: hello-node svc describe:
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-944000 -n functional-944000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-944000 -n functional-944000: exit status 7 (30.203ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-944000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/ServiceCmdConnect (0.14s)

TestFunctional/parallel/PersistentVolumeClaim (0.04s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:45: failed waiting for storage-provisioner: client config: context "functional-944000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-944000 -n functional-944000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-944000 -n functional-944000: exit status 7 (36.765417ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-944000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/PersistentVolumeClaim (0.04s)
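This test aborts before creating a PVC because it first waits for minikube's storage-provisioner, which needs a reachable API server. On a healthy cluster that precondition can be checked by hand (pod name and namespace are minikube defaults):

    # storage-provisioner runs as a pod in kube-system on a working cluster
    kubectl --context functional-944000 -n kube-system get pod storage-provisioner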

TestFunctional/parallel/SSHCmd (0.11s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1725: (dbg) Run:  out/minikube-darwin-arm64 -p functional-944000 ssh "echo hello"
functional_test.go:1725: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-944000 ssh "echo hello": exit status 83 (41.122541ms)

-- stdout --
	* The control-plane node functional-944000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-944000"

-- /stdout --
functional_test.go:1730: failed to run an ssh command. args "out/minikube-darwin-arm64 -p functional-944000 ssh \"echo hello\"" : exit status 83
functional_test.go:1734: expected minikube ssh command output to be -"hello"- but got *"* The control-plane node functional-944000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-944000\"\n"*. args "out/minikube-darwin-arm64 -p functional-944000 ssh \"echo hello\""
functional_test.go:1742: (dbg) Run:  out/minikube-darwin-arm64 -p functional-944000 ssh "cat /etc/hostname"
functional_test.go:1742: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-944000 ssh "cat /etc/hostname": exit status 83 (39.680292ms)

-- stdout --
	* The control-plane node functional-944000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-944000"

-- /stdout --
functional_test.go:1748: failed to run an ssh command. args "out/minikube-darwin-arm64 -p functional-944000 ssh \"cat /etc/hostname\"" : exit status 83
functional_test.go:1752: expected minikube ssh command output to be -"functional-944000"- but got *"* The control-plane node functional-944000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-944000\"\n"*. args "out/minikube-darwin-arm64 -p functional-944000 ssh \"cat /etc/hostname\""
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-944000 -n functional-944000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-944000 -n functional-944000: exit status 7 (30.021291ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-944000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/SSHCmd (0.11s)

TestFunctional/parallel/CpCmd (0.27s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-944000 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:556: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-944000 cp testdata/cp-test.txt /home/docker/cp-test.txt: exit status 83 (52.2755ms)

-- stdout --
	* The control-plane node functional-944000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-944000"

-- /stdout --
helpers_test.go:561: failed to run an cp command. args "out/minikube-darwin-arm64 -p functional-944000 cp testdata/cp-test.txt /home/docker/cp-test.txt" : exit status 83
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-944000 ssh -n functional-944000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-944000 ssh -n functional-944000 "sudo cat /home/docker/cp-test.txt": exit status 83 (41.150042ms)

-- stdout --
	* The control-plane node functional-944000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-944000"

-- /stdout --
helpers_test.go:539: failed to run an cp command. args "out/minikube-darwin-arm64 -p functional-944000 ssh -n functional-944000 \"sudo cat /home/docker/cp-test.txt\"" : exit status 83
helpers_test.go:573: /testdata/cp-test.txt content mismatch (-want +got):
  strings.Join({
+ 	"* ",
  	"T",
- 	"est file for checking file cp process",
+ 	"he control-plane node functional-944000 host is not running: sta",
+ 	"te=Stopped\n  To start a cluster, run: \"minikube start -p functio",
+ 	"nal-944000\"\n",
  }, "")
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-944000 cp functional-944000:/home/docker/cp-test.txt /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelCpCmd1413637974/001/cp-test.txt
helpers_test.go:556: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-944000 cp functional-944000:/home/docker/cp-test.txt /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelCpCmd1413637974/001/cp-test.txt: exit status 83 (40.519625ms)

-- stdout --
	* The control-plane node functional-944000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-944000"

-- /stdout --
helpers_test.go:561: failed to run an cp command. args "out/minikube-darwin-arm64 -p functional-944000 cp functional-944000:/home/docker/cp-test.txt /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelCpCmd1413637974/001/cp-test.txt" : exit status 83
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-944000 ssh -n functional-944000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-944000 ssh -n functional-944000 "sudo cat /home/docker/cp-test.txt": exit status 83 (41.801709ms)

-- stdout --
	* The control-plane node functional-944000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-944000"

-- /stdout --
helpers_test.go:539: failed to run an cp command. args "out/minikube-darwin-arm64 -p functional-944000 ssh -n functional-944000 \"sudo cat /home/docker/cp-test.txt\"" : exit status 83
helpers_test.go:528: failed to read test file 'testdata/cp-test.txt' : open /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelCpCmd1413637974/001/cp-test.txt: no such file or directory
helpers_test.go:573: /testdata/cp-test.txt content mismatch (-want +got):
  string(
- 	"* The control-plane node functional-944000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-944000\"\n",
+ 	"",
  )
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-944000 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:556: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-944000 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt: exit status 83 (43.050125ms)

-- stdout --
	* The control-plane node functional-944000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-944000"

-- /stdout --
helpers_test.go:561: failed to run an cp command. args "out/minikube-darwin-arm64 -p functional-944000 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt" : exit status 83
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-944000 ssh -n functional-944000 "sudo cat /tmp/does/not/exist/cp-test.txt"
helpers_test.go:534: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-944000 ssh -n functional-944000 "sudo cat /tmp/does/not/exist/cp-test.txt": exit status 83 (45.508125ms)

-- stdout --
	* The control-plane node functional-944000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-944000"

-- /stdout --
helpers_test.go:539: failed to run an cp command. args "out/minikube-darwin-arm64 -p functional-944000 ssh -n functional-944000 \"sudo cat /tmp/does/not/exist/cp-test.txt\"" : exit status 83
helpers_test.go:573: /testdata/cp-test.txt content mismatch (-want +got):
  strings.Join({
+ 	"* ",
  	"T",
- 	"est file for checking file cp process",
+ 	"he control-plane node functional-944000 host is not running: sta",
+ 	"te=Stopped\n  To start a cluster, run: \"minikube start -p functio",
+ 	"nal-944000\"\n",
  }, "")
--- FAIL: TestFunctional/parallel/CpCmd (0.27s)

TestFunctional/parallel/FileSync (0.08s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1929: Checking for existence of /etc/test/nested/copy/17654/hosts within VM
functional_test.go:1931: (dbg) Run:  out/minikube-darwin-arm64 -p functional-944000 ssh "sudo cat /etc/test/nested/copy/17654/hosts"
functional_test.go:1931: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-944000 ssh "sudo cat /etc/test/nested/copy/17654/hosts": exit status 83 (52.293917ms)

-- stdout --
	* The control-plane node functional-944000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-944000"

-- /stdout --
functional_test.go:1933: out/minikube-darwin-arm64 -p functional-944000 ssh "sudo cat /etc/test/nested/copy/17654/hosts" failed: exit status 83
functional_test.go:1936: file sync test content: * The control-plane node functional-944000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-944000"
functional_test.go:1946: /etc/sync.test content mismatch (-want +got):
  strings.Join({
+ 	"* ",
  	"T",
- 	"est file for checking file sync process",
+ 	"he control-plane node functional-944000 host is not running: sta",
+ 	"te=Stopped\n  To start a cluster, run: \"minikube start -p functio",
+ 	"nal-944000\"\n",
  }, "")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-944000 -n functional-944000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-944000 -n functional-944000: exit status 7 (30.383542ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-944000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/FileSync (0.08s)

TestFunctional/parallel/CertSync (0.29s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1972: Checking for existence of /etc/ssl/certs/17654.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-darwin-arm64 -p functional-944000 ssh "sudo cat /etc/ssl/certs/17654.pem"
functional_test.go:1973: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-944000 ssh "sudo cat /etc/ssl/certs/17654.pem": exit status 83 (42.341292ms)

-- stdout --
	* The control-plane node functional-944000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-944000"

-- /stdout --
functional_test.go:1975: failed to check existence of "/etc/ssl/certs/17654.pem" inside minikube. args "out/minikube-darwin-arm64 -p functional-944000 ssh \"sudo cat /etc/ssl/certs/17654.pem\"": exit status 83
functional_test.go:1981: failed verify pem file. minikube_test.pem -> /etc/ssl/certs/17654.pem mismatch (-want +got):
  (
  	"""
- 	-----BEGIN CERTIFICATE-----
- 	MIIDsDCCApgCCQD5n0OIsOYIjDANBgkqhkiG9w0BAQsFADCBmTELMAkGA1UEBhMC
- 	VVMxEzARBgNVBAgMCkNhbGlmb3JuaWExFjAUBgNVBAcMDVNhbiBGcmFuY2lzY28x
- 	ETAPBgNVBAoMCG1pbmlrdWJlMRYwFAYDVQQLDA1QYXJ0eSBQYXJyb3RzMREwDwYD
- 	VQQDDAhtaW5pa3ViZTEfMB0GCSqGSIb3DQEJARYQbWluaWt1YmVAY25jZi5pbzAe
- 	Fw0yMDAzMDQyMTU2MjZaFw0yMTAzMDQyMTU2MjZaMIGZMQswCQYDVQQGEwJVUzET
- 	MBEGA1UECAwKQ2FsaWZvcm5pYTEWMBQGA1UEBwwNU2FuIEZyYW5jaXNjbzERMA8G
- 	A1UECgwIbWluaWt1YmUxFjAUBgNVBAsMDVBhcnR5IFBhcnJvdHMxETAPBgNVBAMM
- 	CG1pbmlrdWJlMR8wHQYJKoZIhvcNAQkBFhBtaW5pa3ViZUBjbmNmLmlvMIIBIjAN
- 	BgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA/qVMQ/234ul5yWI1yaHvV4pZ5Ffy
- 	M0bSMjzZUwlsvzerXzF3WrdpeZs5GzBNBWL/Db9KziGHCtfX9j5okJqPvB2lxdL5
- 	d5hFIYSORSemYLX2kdlnlykY5fzmFLKIUO9xXs0YNF4JUMEBgGK6n/BdLvXDUULZ
- 	26QOKs6+iH7TAL4RtozxQ8YXKQArdmpeAvxy2PSZGvVk1htKtyuKQsiFqH3oRleK
- 	3mljXfC5LsoIJHqd/8lAsckH87+IfwYnJ1CNJM2gueaCf+HmudVrvXfHaszh1Wh1
- 	9HKPE95Azi6CKoBGlRGFxt8UR72YIcTjC/lYxzbHeCpU7RCiXfsC0iMTlQIDAQAB
- 	MA0GCSqGSIb3DQEBCwUAA4IBAQBhsKnghyBki4NOnK5gHm7ow+7S+xvkjJhXBQ6i
- 	/xQD4/GCZ1tH5iFHXmo+bt4jB9hvKLyN5M5a8TlDwwqTLIoPDQJh37UpSCwbY/6z
- 	nE2aP3N2ue1/DeY60tgAh1c1uJDMeTiFbalJqSkneaHpNfvEQhUORFoN4yQSVEYg
- 	+T9mzTAWQ55TeBgbRevmA25nXHdPAae1MvJWDeG+aJfhq1I2hCwaitJ3iSwgn2ew
- 	637It/aBkMLhsCKTHxlXDGUX401ddbc0ZiC308cyMbis3iBeh4RBjkFxP8eIWFmK
- 	sos/dyqdua742L1cOKYFbLJfjA1VyxJQUxQvWKkbaq0xi7ao
- 	-----END CERTIFICATE-----
+ 	* The control-plane node functional-944000 host is not running: state=Stopped
+ 	  To start a cluster, run: "minikube start -p functional-944000"
  	"""
  )
functional_test.go:1972: Checking for existence of /usr/share/ca-certificates/17654.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-darwin-arm64 -p functional-944000 ssh "sudo cat /usr/share/ca-certificates/17654.pem"
functional_test.go:1973: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-944000 ssh "sudo cat /usr/share/ca-certificates/17654.pem": exit status 83 (44.623125ms)

-- stdout --
	* The control-plane node functional-944000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-944000"

-- /stdout --
functional_test.go:1975: failed to check existence of "/usr/share/ca-certificates/17654.pem" inside minikube. args "out/minikube-darwin-arm64 -p functional-944000 ssh \"sudo cat /usr/share/ca-certificates/17654.pem\"": exit status 83
functional_test.go:1981: failed verify pem file. minikube_test.pem -> /usr/share/ca-certificates/17654.pem mismatch (-want +got):
  (
  	"""
- 	-----BEGIN CERTIFICATE-----
- 	MIIDsDCCApgCCQD5n0OIsOYIjDANBgkqhkiG9w0BAQsFADCBmTELMAkGA1UEBhMC
- 	VVMxEzARBgNVBAgMCkNhbGlmb3JuaWExFjAUBgNVBAcMDVNhbiBGcmFuY2lzY28x
- 	ETAPBgNVBAoMCG1pbmlrdWJlMRYwFAYDVQQLDA1QYXJ0eSBQYXJyb3RzMREwDwYD
- 	VQQDDAhtaW5pa3ViZTEfMB0GCSqGSIb3DQEJARYQbWluaWt1YmVAY25jZi5pbzAe
- 	Fw0yMDAzMDQyMTU2MjZaFw0yMTAzMDQyMTU2MjZaMIGZMQswCQYDVQQGEwJVUzET
- 	MBEGA1UECAwKQ2FsaWZvcm5pYTEWMBQGA1UEBwwNU2FuIEZyYW5jaXNjbzERMA8G
- 	A1UECgwIbWluaWt1YmUxFjAUBgNVBAsMDVBhcnR5IFBhcnJvdHMxETAPBgNVBAMM
- 	CG1pbmlrdWJlMR8wHQYJKoZIhvcNAQkBFhBtaW5pa3ViZUBjbmNmLmlvMIIBIjAN
- 	BgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA/qVMQ/234ul5yWI1yaHvV4pZ5Ffy
- 	M0bSMjzZUwlsvzerXzF3WrdpeZs5GzBNBWL/Db9KziGHCtfX9j5okJqPvB2lxdL5
- 	d5hFIYSORSemYLX2kdlnlykY5fzmFLKIUO9xXs0YNF4JUMEBgGK6n/BdLvXDUULZ
- 	26QOKs6+iH7TAL4RtozxQ8YXKQArdmpeAvxy2PSZGvVk1htKtyuKQsiFqH3oRleK
- 	3mljXfC5LsoIJHqd/8lAsckH87+IfwYnJ1CNJM2gueaCf+HmudVrvXfHaszh1Wh1
- 	9HKPE95Azi6CKoBGlRGFxt8UR72YIcTjC/lYxzbHeCpU7RCiXfsC0iMTlQIDAQAB
- 	MA0GCSqGSIb3DQEBCwUAA4IBAQBhsKnghyBki4NOnK5gHm7ow+7S+xvkjJhXBQ6i
- 	/xQD4/GCZ1tH5iFHXmo+bt4jB9hvKLyN5M5a8TlDwwqTLIoPDQJh37UpSCwbY/6z
- 	nE2aP3N2ue1/DeY60tgAh1c1uJDMeTiFbalJqSkneaHpNfvEQhUORFoN4yQSVEYg
- 	+T9mzTAWQ55TeBgbRevmA25nXHdPAae1MvJWDeG+aJfhq1I2hCwaitJ3iSwgn2ew
- 	637It/aBkMLhsCKTHxlXDGUX401ddbc0ZiC308cyMbis3iBeh4RBjkFxP8eIWFmK
- 	sos/dyqdua742L1cOKYFbLJfjA1VyxJQUxQvWKkbaq0xi7ao
- 	-----END CERTIFICATE-----
+ 	* The control-plane node functional-944000 host is not running: state=Stopped
+ 	  To start a cluster, run: "minikube start -p functional-944000"
  	"""
  )
functional_test.go:1972: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1973: (dbg) Run:  out/minikube-darwin-arm64 -p functional-944000 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1973: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-944000 ssh "sudo cat /etc/ssl/certs/51391683.0": exit status 83 (41.666125ms)

-- stdout --
	* The control-plane node functional-944000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-944000"

-- /stdout --
functional_test.go:1975: failed to check existence of "/etc/ssl/certs/51391683.0" inside minikube. args "out/minikube-darwin-arm64 -p functional-944000 ssh \"sudo cat /etc/ssl/certs/51391683.0\"": exit status 83
functional_test.go:1981: failed verify pem file. minikube_test.pem -> /etc/ssl/certs/51391683.0 mismatch (-want +got):
  (
  	"""
- 	-----BEGIN CERTIFICATE-----
- 	MIIDsDCCApgCCQD5n0OIsOYIjDANBgkqhkiG9w0BAQsFADCBmTELMAkGA1UEBhMC
- 	VVMxEzARBgNVBAgMCkNhbGlmb3JuaWExFjAUBgNVBAcMDVNhbiBGcmFuY2lzY28x
- 	ETAPBgNVBAoMCG1pbmlrdWJlMRYwFAYDVQQLDA1QYXJ0eSBQYXJyb3RzMREwDwYD
- 	VQQDDAhtaW5pa3ViZTEfMB0GCSqGSIb3DQEJARYQbWluaWt1YmVAY25jZi5pbzAe
- 	Fw0yMDAzMDQyMTU2MjZaFw0yMTAzMDQyMTU2MjZaMIGZMQswCQYDVQQGEwJVUzET
- 	MBEGA1UECAwKQ2FsaWZvcm5pYTEWMBQGA1UEBwwNU2FuIEZyYW5jaXNjbzERMA8G
- 	A1UECgwIbWluaWt1YmUxFjAUBgNVBAsMDVBhcnR5IFBhcnJvdHMxETAPBgNVBAMM
- 	CG1pbmlrdWJlMR8wHQYJKoZIhvcNAQkBFhBtaW5pa3ViZUBjbmNmLmlvMIIBIjAN
- 	BgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA/qVMQ/234ul5yWI1yaHvV4pZ5Ffy
- 	M0bSMjzZUwlsvzerXzF3WrdpeZs5GzBNBWL/Db9KziGHCtfX9j5okJqPvB2lxdL5
- 	d5hFIYSORSemYLX2kdlnlykY5fzmFLKIUO9xXs0YNF4JUMEBgGK6n/BdLvXDUULZ
- 	26QOKs6+iH7TAL4RtozxQ8YXKQArdmpeAvxy2PSZGvVk1htKtyuKQsiFqH3oRleK
- 	3mljXfC5LsoIJHqd/8lAsckH87+IfwYnJ1CNJM2gueaCf+HmudVrvXfHaszh1Wh1
- 	9HKPE95Azi6CKoBGlRGFxt8UR72YIcTjC/lYxzbHeCpU7RCiXfsC0iMTlQIDAQAB
- 	MA0GCSqGSIb3DQEBCwUAA4IBAQBhsKnghyBki4NOnK5gHm7ow+7S+xvkjJhXBQ6i
- 	/xQD4/GCZ1tH5iFHXmo+bt4jB9hvKLyN5M5a8TlDwwqTLIoPDQJh37UpSCwbY/6z
- 	nE2aP3N2ue1/DeY60tgAh1c1uJDMeTiFbalJqSkneaHpNfvEQhUORFoN4yQSVEYg
- 	+T9mzTAWQ55TeBgbRevmA25nXHdPAae1MvJWDeG+aJfhq1I2hCwaitJ3iSwgn2ew
- 	637It/aBkMLhsCKTHxlXDGUX401ddbc0ZiC308cyMbis3iBeh4RBjkFxP8eIWFmK
- 	sos/dyqdua742L1cOKYFbLJfjA1VyxJQUxQvWKkbaq0xi7ao
- 	-----END CERTIFICATE-----
+ 	* The control-plane node functional-944000 host is not running: state=Stopped
+ 	  To start a cluster, run: "minikube start -p functional-944000"
  	"""
  )
functional_test.go:1999: Checking for existence of /etc/ssl/certs/176542.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-darwin-arm64 -p functional-944000 ssh "sudo cat /etc/ssl/certs/176542.pem"
functional_test.go:2000: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-944000 ssh "sudo cat /etc/ssl/certs/176542.pem": exit status 83 (46.764958ms)

-- stdout --
	* The control-plane node functional-944000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-944000"

-- /stdout --
functional_test.go:2002: failed to check existence of "/etc/ssl/certs/176542.pem" inside minikube. args "out/minikube-darwin-arm64 -p functional-944000 ssh \"sudo cat /etc/ssl/certs/176542.pem\"": exit status 83
functional_test.go:2008: failed verify pem file. minikube_test2.pem -> /etc/ssl/certs/176542.pem mismatch (-want +got):
  (
  	"""
- 	-----BEGIN CERTIFICATE-----
- 	MIIEwDCCAqgCCQCUeXrVemI4eTANBgkqhkiG9w0BAQsFADAiMQswCQYDVQQGEwJV
- 	UzETMBEGA1UECAwKQ2FsaWZvcm5pYTAeFw0yMTA3MjEyMDM4MDdaFw0yMTA4MjAy
- 	MDM4MDdaMCIxCzAJBgNVBAYTAlVTMRMwEQYDVQQIDApDYWxpZm9ybmlhMIICIjAN
- 	BgkqhkiG9w0BAQEFAAOCAg8AMIICCgKCAgEAu1+sCiosrYIy83a+KtPdoGsKun+O
- 	jKhETWQrum5FGxqKyQzhHN8q6iZBI94m1sZb2xIJDcKaAsHpKf1z/5pkrWQW4JeL
- 	riqJ5U0kENy6mWGoFzPLQHkKMXSYUEkRjrgJphh5zLkWDzx6uwxHIrs5yaIwn71Z
- 	enYezHL8NyMd10+up1lNvH+xGqnSJ3ge7Rj+r5XbA3avvaJ9AVdiJMU5ArMw4Snu
- 	dLDSUueWFedmXoQud083EvDbMY61blNKUR6BKBJLPopH+9NnUP7FRU25lvTsA8qT
- 	zz/KertMFhCfwvK4y7a7+GbIPD3fItpm23GORuViEGgeRMH7GGSNVfm4VtK5mI4c
- 	XK9BrCE+FXowHOWU3MTH1qsvTigd3JZc8BKTaePJGMIDT1HIAaoK7SOoZOLka7bi
- 	IiLaojrIWY6tE5IIl46cQWLg09G+kjKVfEEvEHNe0t22I9etGeUUEwlcMITHmEdE
- 	WrXytC8dSNFObSG5B2MX2Ovm+yNblcK7TI7lW/tkbxtKs56J1mNmi4LXXBM8FdF8
- 	w9MpJc+ySVj2+f+eBE08amKjC9VVbBzNSw7MRaI9fFY5AAifJ8g55F3/KCNq5aTd
- 	rBADtAa5kQkTVjfMBhG++0Ow4f55hm73oJAy/qxb09OY7Vk9ky/K3l8GfWv8ozIF
- 	w+Oq6vdsspvtVJ8CAwEAATANBgkqhkiG9w0BAQsFAAOCAgEAGKVxsf13kYGaQJ+J
- 	6eyRZXlV5Bp+9EGtMPGsuVv2HJa4oxMBn7Xc/bUhjY9Is/ZwfOpPUPO/nQtSSPmO
- 	aozQj/27p8HDTW201fwLNiZMcppBdJvIQdDzCh6e2ikg3lqsw2BoLX1vbgc9HPml
- 	P8QCHEz2lricGdTuMRtBgH5x/ZkZGLbADQBeyoPTsPaQceRt5hPYXWifqiHhcJoL
- 	2T+XgbaHJ4lEhCU0IXJG0vlLuAyxQzO3gMeHK8BlLt/h/JCiDndo63a4XCkenmY8
- 	8/6Y9Lgh+O3954YgwdXBIS33CzhY7c+tfpag1hwpDHro/zsyLwdN2JxZqWymfg8T
- 	RyIeJ5VpY+CGm1fVTx84twQbiM241eZDYaW6Ap5/ZuxPbzY++KDMZHMuJYURorAU
- 	JE/SE6WltXpKTvtzyPNKt8fgPQmFzlBosDZCf/5EiyVqbLzideF1l+Rd//5gRBg0
- 	B63fOlCdxGtDeX1gKzekV4sXQIsWKW1BSCEInJt5lOS8Ex4JTXy8crwKo7hv0zPc
- 	sOjAXbtDYlPf/jPFGKzkgFACB87Bx4ZUZMp/ShOzjMt20MLGLJvCGCQIHetUz+GG
- 	/LTwSNhgWCheYkbDwXhCmqbN249xE3fNHC6zQ/IMJ30v/UWN4RldGfFzmoVBRUUX
- 	eQ7g5kXER7H/Lh/2V9FyaRhPpZM=
- 	-----END CERTIFICATE-----
+ 	* The control-plane node functional-944000 host is not running: state=Stopped
+ 	  To start a cluster, run: "minikube start -p functional-944000"
  	"""
  )
functional_test.go:1999: Checking for existence of /usr/share/ca-certificates/176542.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-darwin-arm64 -p functional-944000 ssh "sudo cat /usr/share/ca-certificates/176542.pem"
functional_test.go:2000: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-944000 ssh "sudo cat /usr/share/ca-certificates/176542.pem": exit status 83 (38.822416ms)

-- stdout --
	* The control-plane node functional-944000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-944000"

-- /stdout --
functional_test.go:2002: failed to check existence of "/usr/share/ca-certificates/176542.pem" inside minikube. args "out/minikube-darwin-arm64 -p functional-944000 ssh \"sudo cat /usr/share/ca-certificates/176542.pem\"": exit status 83
functional_test.go:2008: failed verify pem file. minikube_test2.pem -> /usr/share/ca-certificates/176542.pem mismatch (-want +got):
  (
  	"""
- 	-----BEGIN CERTIFICATE-----
- 	MIIEwDCCAqgCCQCUeXrVemI4eTANBgkqhkiG9w0BAQsFADAiMQswCQYDVQQGEwJV
- 	UzETMBEGA1UECAwKQ2FsaWZvcm5pYTAeFw0yMTA3MjEyMDM4MDdaFw0yMTA4MjAy
- 	MDM4MDdaMCIxCzAJBgNVBAYTAlVTMRMwEQYDVQQIDApDYWxpZm9ybmlhMIICIjAN
- 	BgkqhkiG9w0BAQEFAAOCAg8AMIICCgKCAgEAu1+sCiosrYIy83a+KtPdoGsKun+O
- 	jKhETWQrum5FGxqKyQzhHN8q6iZBI94m1sZb2xIJDcKaAsHpKf1z/5pkrWQW4JeL
- 	riqJ5U0kENy6mWGoFzPLQHkKMXSYUEkRjrgJphh5zLkWDzx6uwxHIrs5yaIwn71Z
- 	enYezHL8NyMd10+up1lNvH+xGqnSJ3ge7Rj+r5XbA3avvaJ9AVdiJMU5ArMw4Snu
- 	dLDSUueWFedmXoQud083EvDbMY61blNKUR6BKBJLPopH+9NnUP7FRU25lvTsA8qT
- 	zz/KertMFhCfwvK4y7a7+GbIPD3fItpm23GORuViEGgeRMH7GGSNVfm4VtK5mI4c
- 	XK9BrCE+FXowHOWU3MTH1qsvTigd3JZc8BKTaePJGMIDT1HIAaoK7SOoZOLka7bi
- 	IiLaojrIWY6tE5IIl46cQWLg09G+kjKVfEEvEHNe0t22I9etGeUUEwlcMITHmEdE
- 	WrXytC8dSNFObSG5B2MX2Ovm+yNblcK7TI7lW/tkbxtKs56J1mNmi4LXXBM8FdF8
- 	w9MpJc+ySVj2+f+eBE08amKjC9VVbBzNSw7MRaI9fFY5AAifJ8g55F3/KCNq5aTd
- 	rBADtAa5kQkTVjfMBhG++0Ow4f55hm73oJAy/qxb09OY7Vk9ky/K3l8GfWv8ozIF
- 	w+Oq6vdsspvtVJ8CAwEAATANBgkqhkiG9w0BAQsFAAOCAgEAGKVxsf13kYGaQJ+J
- 	6eyRZXlV5Bp+9EGtMPGsuVv2HJa4oxMBn7Xc/bUhjY9Is/ZwfOpPUPO/nQtSSPmO
- 	aozQj/27p8HDTW201fwLNiZMcppBdJvIQdDzCh6e2ikg3lqsw2BoLX1vbgc9HPml
- 	P8QCHEz2lricGdTuMRtBgH5x/ZkZGLbADQBeyoPTsPaQceRt5hPYXWifqiHhcJoL
- 	2T+XgbaHJ4lEhCU0IXJG0vlLuAyxQzO3gMeHK8BlLt/h/JCiDndo63a4XCkenmY8
- 	8/6Y9Lgh+O3954YgwdXBIS33CzhY7c+tfpag1hwpDHro/zsyLwdN2JxZqWymfg8T
- 	RyIeJ5VpY+CGm1fVTx84twQbiM241eZDYaW6Ap5/ZuxPbzY++KDMZHMuJYURorAU
- 	JE/SE6WltXpKTvtzyPNKt8fgPQmFzlBosDZCf/5EiyVqbLzideF1l+Rd//5gRBg0
- 	B63fOlCdxGtDeX1gKzekV4sXQIsWKW1BSCEInJt5lOS8Ex4JTXy8crwKo7hv0zPc
- 	sOjAXbtDYlPf/jPFGKzkgFACB87Bx4ZUZMp/ShOzjMt20MLGLJvCGCQIHetUz+GG
- 	/LTwSNhgWCheYkbDwXhCmqbN249xE3fNHC6zQ/IMJ30v/UWN4RldGfFzmoVBRUUX
- 	eQ7g5kXER7H/Lh/2V9FyaRhPpZM=
- 	-----END CERTIFICATE-----
+ 	* The control-plane node functional-944000 host is not running: state=Stopped
+ 	  To start a cluster, run: "minikube start -p functional-944000"
  	"""
  )
functional_test.go:1999: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2000: (dbg) Run:  out/minikube-darwin-arm64 -p functional-944000 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
functional_test.go:2000: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-944000 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0": exit status 83 (46.505125ms)

-- stdout --
	* The control-plane node functional-944000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-944000"

-- /stdout --
functional_test.go:2002: failed to check existence of "/etc/ssl/certs/3ec20f2e.0" inside minikube. args "out/minikube-darwin-arm64 -p functional-944000 ssh \"sudo cat /etc/ssl/certs/3ec20f2e.0\"": exit status 83
functional_test.go:2008: failed verify pem file. minikube_test2.pem -> /etc/ssl/certs/3ec20f2e.0 mismatch (-want +got):
  (
  	"""
- 	-----BEGIN CERTIFICATE-----
- 	MIIEwDCCAqgCCQCUeXrVemI4eTANBgkqhkiG9w0BAQsFADAiMQswCQYDVQQGEwJV
- 	UzETMBEGA1UECAwKQ2FsaWZvcm5pYTAeFw0yMTA3MjEyMDM4MDdaFw0yMTA4MjAy
- 	MDM4MDdaMCIxCzAJBgNVBAYTAlVTMRMwEQYDVQQIDApDYWxpZm9ybmlhMIICIjAN
- 	BgkqhkiG9w0BAQEFAAOCAg8AMIICCgKCAgEAu1+sCiosrYIy83a+KtPdoGsKun+O
- 	jKhETWQrum5FGxqKyQzhHN8q6iZBI94m1sZb2xIJDcKaAsHpKf1z/5pkrWQW4JeL
- 	riqJ5U0kENy6mWGoFzPLQHkKMXSYUEkRjrgJphh5zLkWDzx6uwxHIrs5yaIwn71Z
- 	enYezHL8NyMd10+up1lNvH+xGqnSJ3ge7Rj+r5XbA3avvaJ9AVdiJMU5ArMw4Snu
- 	dLDSUueWFedmXoQud083EvDbMY61blNKUR6BKBJLPopH+9NnUP7FRU25lvTsA8qT
- 	zz/KertMFhCfwvK4y7a7+GbIPD3fItpm23GORuViEGgeRMH7GGSNVfm4VtK5mI4c
- 	XK9BrCE+FXowHOWU3MTH1qsvTigd3JZc8BKTaePJGMIDT1HIAaoK7SOoZOLka7bi
- 	IiLaojrIWY6tE5IIl46cQWLg09G+kjKVfEEvEHNe0t22I9etGeUUEwlcMITHmEdE
- 	WrXytC8dSNFObSG5B2MX2Ovm+yNblcK7TI7lW/tkbxtKs56J1mNmi4LXXBM8FdF8
- 	w9MpJc+ySVj2+f+eBE08amKjC9VVbBzNSw7MRaI9fFY5AAifJ8g55F3/KCNq5aTd
- 	rBADtAa5kQkTVjfMBhG++0Ow4f55hm73oJAy/qxb09OY7Vk9ky/K3l8GfWv8ozIF
- 	w+Oq6vdsspvtVJ8CAwEAATANBgkqhkiG9w0BAQsFAAOCAgEAGKVxsf13kYGaQJ+J
- 	6eyRZXlV5Bp+9EGtMPGsuVv2HJa4oxMBn7Xc/bUhjY9Is/ZwfOpPUPO/nQtSSPmO
- 	aozQj/27p8HDTW201fwLNiZMcppBdJvIQdDzCh6e2ikg3lqsw2BoLX1vbgc9HPml
- 	P8QCHEz2lricGdTuMRtBgH5x/ZkZGLbADQBeyoPTsPaQceRt5hPYXWifqiHhcJoL
- 	2T+XgbaHJ4lEhCU0IXJG0vlLuAyxQzO3gMeHK8BlLt/h/JCiDndo63a4XCkenmY8
- 	8/6Y9Lgh+O3954YgwdXBIS33CzhY7c+tfpag1hwpDHro/zsyLwdN2JxZqWymfg8T
- 	RyIeJ5VpY+CGm1fVTx84twQbiM241eZDYaW6Ap5/ZuxPbzY++KDMZHMuJYURorAU
- 	JE/SE6WltXpKTvtzyPNKt8fgPQmFzlBosDZCf/5EiyVqbLzideF1l+Rd//5gRBg0
- 	B63fOlCdxGtDeX1gKzekV4sXQIsWKW1BSCEInJt5lOS8Ex4JTXy8crwKo7hv0zPc
- 	sOjAXbtDYlPf/jPFGKzkgFACB87Bx4ZUZMp/ShOzjMt20MLGLJvCGCQIHetUz+GG
- 	/LTwSNhgWCheYkbDwXhCmqbN249xE3fNHC6zQ/IMJ30v/UWN4RldGfFzmoVBRUUX
- 	eQ7g5kXER7H/Lh/2V9FyaRhPpZM=
- 	-----END CERTIFICATE-----
+ 	* The control-plane node functional-944000 host is not running: state=Stopped
+ 	  To start a cluster, run: "minikube start -p functional-944000"
  	"""
  )
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-944000 -n functional-944000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-944000 -n functional-944000: exit status 7 (30.668916ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-944000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/CertSync (0.29s)
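CertSync verifies each test certificate twice: under its named path (17654.pem, 176542.pem) and under its OpenSSL subject-hash alias (51391683.0, 3ec20f2e.0 above). The hash filenames can be derived locally; the expected values below follow from the paths the test checks:

    # Subject-hash aliases for the two certs the suite installs
    openssl x509 -noout -subject_hash -in minikube_test.pem     # 51391683
    openssl x509 -noout -subject_hash -in minikube_test2.pem    # 3ec20f2e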

TestFunctional/parallel/NodeLabels (0.06s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:219: (dbg) Run:  kubectl --context functional-944000 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
functional_test.go:219: (dbg) Non-zero exit: kubectl --context functional-944000 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'": exit status 1 (27.210833ms)

** stderr ** 
	Error in configuration: context was not found for specified context: functional-944000

** /stderr **
functional_test.go:221: failed to 'kubectl get nodes' with args "kubectl --context functional-944000 get nodes --output=go-template \"--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'\"": exit status 1
functional_test.go:227: expected to have label "minikube.k8s.io/commit" in node labels but got : 
** stderr ** 
	Error in configuration: context was not found for specified context: functional-944000

** /stderr **
functional_test.go:227: expected to have label "minikube.k8s.io/version" in node labels but got : 
** stderr ** 
	Error in configuration: context was not found for specified context: functional-944000

** /stderr **
functional_test.go:227: expected to have label "minikube.k8s.io/updated_at" in node labels but got : 
** stderr ** 
	Error in configuration: context was not found for specified context: functional-944000

** /stderr **
functional_test.go:227: expected to have label "minikube.k8s.io/name" in node labels but got : 
** stderr ** 
	Error in configuration: context was not found for specified context: functional-944000

** /stderr **
functional_test.go:227: expected to have label "minikube.k8s.io/primary" in node labels but got : 
** stderr ** 
	Error in configuration: context was not found for specified context: functional-944000

** /stderr **
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-944000 -n functional-944000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-944000 -n functional-944000: exit status 7 (30.475333ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-944000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/NodeLabels (0.06s)
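
For context, the assertion above reduces to listing the first node's label keys via a kubectl go-template and checking for the minikube.k8s.io/* entries. A minimal standalone sketch of that check (a hypothetical reproduction, not the suite's helper; the context name is the stopped profile from the log):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Same query the test issues: print every label key on the first node.
	tmpl := `{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}`
	out, err := exec.Command("kubectl", "--context", "functional-944000",
		"get", "nodes", "--output=go-template", "--template="+tmpl).CombinedOutput()
	if err != nil {
		// With the profile stopped, kubectl exits 1 with "context was not
		// found for specified context", exactly as in the log above.
		fmt.Printf("kubectl failed: %v\n%s\n", err, out)
		return
	}
	for _, want := range []string{
		"minikube.k8s.io/commit", "minikube.k8s.io/version",
		"minikube.k8s.io/updated_at", "minikube.k8s.io/name", "minikube.k8s.io/primary",
	} {
		if !strings.Contains(string(out), want) {
			fmt.Printf("missing node label %q\n", want)
		}
	}
}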

TestFunctional/parallel/NonActiveRuntimeDisabled (0.04s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2027: (dbg) Run:  out/minikube-darwin-arm64 -p functional-944000 ssh "sudo systemctl is-active crio"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-944000 ssh "sudo systemctl is-active crio": exit status 83 (41.513166ms)

-- stdout --
	* The control-plane node functional-944000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-944000"

-- /stdout --
functional_test.go:2030: output of 
-- stdout --
	* The control-plane node functional-944000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-944000"

-- /stdout --: exit status 83
functional_test.go:2033: For runtime "docker": expected "crio" to be inactive but got "* The control-plane node functional-944000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-944000\"\n" 
--- FAIL: TestFunctional/parallel/NonActiveRuntimeDisabled (0.04s)

TestFunctional/parallel/DockerEnv/bash (0.05s)

=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:499: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-arm64 -p functional-944000 docker-env) && out/minikube-darwin-arm64 status -p functional-944000"
functional_test.go:499: (dbg) Non-zero exit: /bin/bash -c "eval $(out/minikube-darwin-arm64 -p functional-944000 docker-env) && out/minikube-darwin-arm64 status -p functional-944000": exit status 1 (51.642541ms)
functional_test.go:505: failed to do status after eval-ing docker-env. error: exit status 1
--- FAIL: TestFunctional/parallel/DockerEnv/bash (0.05s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.04s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2119: (dbg) Run:  out/minikube-darwin-arm64 -p functional-944000 update-context --alsologtostderr -v=2
functional_test.go:2119: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-944000 update-context --alsologtostderr -v=2: exit status 83 (44.692292ms)

-- stdout --
	* The control-plane node functional-944000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-944000"

-- /stdout --
** stderr ** 
	I0819 11:33:12.323304   18313 out.go:345] Setting OutFile to fd 1 ...
	I0819 11:33:12.324088   18313 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 11:33:12.324092   18313 out.go:358] Setting ErrFile to fd 2...
	I0819 11:33:12.324094   18313 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 11:33:12.324254   18313 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19423-17178/.minikube/bin
	I0819 11:33:12.324444   18313 mustload.go:65] Loading cluster: functional-944000
	I0819 11:33:12.324634   18313 config.go:182] Loaded profile config "functional-944000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0819 11:33:12.329745   18313 out.go:177] * The control-plane node functional-944000 host is not running: state=Stopped
	I0819 11:33:12.333717   18313 out.go:177]   To start a cluster, run: "minikube start -p functional-944000"

** /stderr **
functional_test.go:2121: failed to run minikube update-context: args "out/minikube-darwin-arm64 -p functional-944000 update-context --alsologtostderr -v=2": exit status 83
functional_test.go:2126: update-context: got="* The control-plane node functional-944000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-944000\"\n", want=*"No changes"*
--- FAIL: TestFunctional/parallel/UpdateContextCmd/no_changes (0.04s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.04s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2119: (dbg) Run:  out/minikube-darwin-arm64 -p functional-944000 update-context --alsologtostderr -v=2
functional_test.go:2119: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-944000 update-context --alsologtostderr -v=2: exit status 83 (42.584875ms)

-- stdout --
	* The control-plane node functional-944000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-944000"

-- /stdout --
** stderr ** 
	I0819 11:33:12.411207   18318 out.go:345] Setting OutFile to fd 1 ...
	I0819 11:33:12.411349   18318 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 11:33:12.411352   18318 out.go:358] Setting ErrFile to fd 2...
	I0819 11:33:12.411354   18318 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 11:33:12.411472   18318 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19423-17178/.minikube/bin
	I0819 11:33:12.411657   18318 mustload.go:65] Loading cluster: functional-944000
	I0819 11:33:12.411833   18318 config.go:182] Loaded profile config "functional-944000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0819 11:33:12.416731   18318 out.go:177] * The control-plane node functional-944000 host is not running: state=Stopped
	I0819 11:33:12.420697   18318 out.go:177]   To start a cluster, run: "minikube start -p functional-944000"

** /stderr **
functional_test.go:2121: failed to run minikube update-context: args "out/minikube-darwin-arm64 -p functional-944000 update-context --alsologtostderr -v=2": exit status 83
functional_test.go:2126: update-context: got="* The control-plane node functional-944000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-944000\"\n", want=*"context has been updated"*
--- FAIL: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.04s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.04s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2119: (dbg) Run:  out/minikube-darwin-arm64 -p functional-944000 update-context --alsologtostderr -v=2
functional_test.go:2119: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-944000 update-context --alsologtostderr -v=2: exit status 83 (42.462667ms)

-- stdout --
	* The control-plane node functional-944000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-944000"

-- /stdout --
** stderr ** 
	I0819 11:33:12.368072   18316 out.go:345] Setting OutFile to fd 1 ...
	I0819 11:33:12.368211   18316 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 11:33:12.368214   18316 out.go:358] Setting ErrFile to fd 2...
	I0819 11:33:12.368217   18316 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 11:33:12.368328   18316 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19423-17178/.minikube/bin
	I0819 11:33:12.368544   18316 mustload.go:65] Loading cluster: functional-944000
	I0819 11:33:12.368754   18316 config.go:182] Loaded profile config "functional-944000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0819 11:33:12.373716   18316 out.go:177] * The control-plane node functional-944000 host is not running: state=Stopped
	I0819 11:33:12.377734   18316 out.go:177]   To start a cluster, run: "minikube start -p functional-944000"

** /stderr **
functional_test.go:2121: failed to run minikube update-context: args "out/minikube-darwin-arm64 -p functional-944000 update-context --alsologtostderr -v=2": exit status 83
functional_test.go:2126: update-context: got="* The control-plane node functional-944000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-944000\"\n", want=*"context has been updated"*
--- FAIL: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.04s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.07s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-944000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-944000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:190: tunnel command failed with unexpected error: exit code 83. stderr: I0819 11:32:31.425942   18075 out.go:345] Setting OutFile to fd 1 ...
I0819 11:32:31.426355   18075 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0819 11:32:31.426399   18075 out.go:358] Setting ErrFile to fd 2...
I0819 11:32:31.426407   18075 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0819 11:32:31.426899   18075 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19423-17178/.minikube/bin
I0819 11:32:31.427417   18075 mustload.go:65] Loading cluster: functional-944000
I0819 11:32:31.427625   18075 config.go:182] Loaded profile config "functional-944000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0819 11:32:31.433280   18075 out.go:177] * The control-plane node functional-944000 host is not running: state=Stopped
I0819 11:32:31.434402   18075 out.go:177]   To start a cluster, run: "minikube start -p functional-944000"

stdout: * The control-plane node functional-944000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-944000"
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-arm64 -p functional-944000 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: read stdout failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-darwin-arm64 -p functional-944000 tunnel --alsologtostderr] stdout:
functional_test_tunnel_test.go:194: read stderr failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-darwin-arm64 -p functional-944000 tunnel --alsologtostderr] stderr:
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-arm64 -p functional-944000 tunnel --alsologtostderr] ...
helpers_test.go:502: unable to terminate pid 18074: os: process already finished
functional_test_tunnel_test.go:194: read stdout failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-darwin-arm64 -p functional-944000 tunnel --alsologtostderr] stdout:
functional_test_tunnel_test.go:194: read stderr failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-darwin-arm64 -p functional-944000 tunnel --alsologtostderr] stderr:
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.07s)
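
The "read |0: file already closed" noise in the teardown above is the standard symptom of draining an exec pipe after the process has already been reaped. A minimal sketch of that pattern (a placeholder command stands in for the tunnel binary):

package main

import (
	"fmt"
	"io"
	"os/exec"
)

func main() {
	cmd := exec.Command("sleep", "60") // stand-in for a long-running daemon
	stdout, err := cmd.StdoutPipe()
	if err != nil {
		panic(err)
	}
	if err := cmd.Start(); err != nil {
		panic(err)
	}
	_ = cmd.Process.Kill() // stop the daemon
	_ = cmd.Wait()         // Wait reaps the process and closes the pipes
	if _, err := io.ReadAll(stdout); err != nil {
		fmt.Println("read stdout failed:", err) // e.g. "read |0: file already closed"
	}
}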

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:208: failed to get Kubernetes client for "functional-944000": client config: context "functional-944000" does not exist
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (101.54s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:288: failed to hit nginx at "http://": Temporary Error: Get "http:": http: no Host in request URL
functional_test_tunnel_test.go:290: (dbg) Run:  kubectl --context functional-944000 get svc nginx-svc
functional_test_tunnel_test.go:290: (dbg) Non-zero exit: kubectl --context functional-944000 get svc nginx-svc: exit status 1 (70.608ms)

** stderr ** 
	Error in configuration: context was not found for specified context: functional-944000

** /stderr **
functional_test_tunnel_test.go:292: kubectl --context functional-944000 get svc nginx-svc failed: exit status 1
functional_test_tunnel_test.go:294: failed to kubectl get svc nginx-svc:
functional_test_tunnel_test.go:301: expected body to contain "Welcome to nginx!", but got *""*
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (101.54s)
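
The first error above is mechanical rather than mysterious: the test never obtained a tunnel IP, so the probe URL it assembled was just "http://", and net/http refuses to send a request without a host. A small reproduction of that exact failure mode:

package main

import (
	"fmt"
	"net/http"
)

func main() {
	// An empty host collapses to the URL "http:", which the transport rejects.
	if _, err := http.Get("http://"); err != nil {
		fmt.Println(err) // Get "http:": http: no Host in request URL
	}
}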

TestFunctional/parallel/ServiceCmd/DeployApp (0.03s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1437: (dbg) Run:  kubectl --context functional-944000 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1437: (dbg) Non-zero exit: kubectl --context functional-944000 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8: exit status 1 (26.305375ms)

** stderr ** 
	error: context "functional-944000" does not exist

** /stderr **
functional_test.go:1443: failed to create hello-node deployment with this command "kubectl --context functional-944000 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8": exit status 1.
--- FAIL: TestFunctional/parallel/ServiceCmd/DeployApp (0.03s)

TestFunctional/parallel/ServiceCmd/List (0.04s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1459: (dbg) Run:  out/minikube-darwin-arm64 -p functional-944000 service list
functional_test.go:1459: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-944000 service list: exit status 83 (42.617042ms)

-- stdout --
	* The control-plane node functional-944000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-944000"

-- /stdout --
functional_test.go:1461: failed to do service list. args "out/minikube-darwin-arm64 -p functional-944000 service list" : exit status 83
functional_test.go:1464: expected 'service list' to contain *hello-node* but got -"* The control-plane node functional-944000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-944000\"\n"-
--- FAIL: TestFunctional/parallel/ServiceCmd/List (0.04s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.04s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1489: (dbg) Run:  out/minikube-darwin-arm64 -p functional-944000 service list -o json
functional_test.go:1489: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-944000 service list -o json: exit status 83 (43.816417ms)

-- stdout --
	* The control-plane node functional-944000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-944000"

-- /stdout --
functional_test.go:1491: failed to list services with json format. args "out/minikube-darwin-arm64 -p functional-944000 service list -o json": exit status 83
--- FAIL: TestFunctional/parallel/ServiceCmd/JSONOutput (0.04s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.04s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1509: (dbg) Run:  out/minikube-darwin-arm64 -p functional-944000 service --namespace=default --https --url hello-node
functional_test.go:1509: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-944000 service --namespace=default --https --url hello-node: exit status 83 (40.734084ms)

-- stdout --
	* The control-plane node functional-944000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-944000"

-- /stdout --
functional_test.go:1511: failed to get service url. args "out/minikube-darwin-arm64 -p functional-944000 service --namespace=default --https --url hello-node" : exit status 83
--- FAIL: TestFunctional/parallel/ServiceCmd/HTTPS (0.04s)

TestFunctional/parallel/ServiceCmd/Format (0.04s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1540: (dbg) Run:  out/minikube-darwin-arm64 -p functional-944000 service hello-node --url --format={{.IP}}
functional_test.go:1540: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-944000 service hello-node --url --format={{.IP}}: exit status 83 (41.801833ms)

-- stdout --
	* The control-plane node functional-944000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-944000"

-- /stdout --
functional_test.go:1542: failed to get service url with custom format. args "out/minikube-darwin-arm64 -p functional-944000 service hello-node --url --format={{.IP}}": exit status 83
functional_test.go:1548: "* The control-plane node functional-944000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-944000\"" is not a valid IP
--- FAIL: TestFunctional/parallel/ServiceCmd/Format (0.04s)

TestFunctional/parallel/ServiceCmd/URL (0.04s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1559: (dbg) Run:  out/minikube-darwin-arm64 -p functional-944000 service hello-node --url
functional_test.go:1559: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-944000 service hello-node --url: exit status 83 (42.983375ms)

-- stdout --
	* The control-plane node functional-944000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-944000"

-- /stdout --
functional_test.go:1561: failed to get service url. args: "out/minikube-darwin-arm64 -p functional-944000 service hello-node --url": exit status 83
functional_test.go:1565: found endpoint for hello-node: * The control-plane node functional-944000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-944000"
functional_test.go:1569: failed to parse "* The control-plane node functional-944000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-944000\"": parse "* The control-plane node functional-944000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-944000\"": net/url: invalid control character in URL
--- FAIL: TestFunctional/parallel/ServiceCmd/URL (0.04s)
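
The parse failure at functional_test.go:1569 follows directly from the input: `minikube service --url` printed a two-line advisory instead of a URL, and net/url rejects the embedded newline as a control character. A small reproduction:

package main

import (
	"fmt"
	"net/url"
)

func main() {
	got := "* The control-plane node functional-944000 host is not running: state=Stopped\n" +
		"  To start a cluster, run: \"minikube start -p functional-944000\""
	if _, err := url.Parse(got); err != nil {
		fmt.Println(err) // parse ...: net/url: invalid control character in URL
	}
}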

TestFunctional/parallel/Version/components (0.04s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2270: (dbg) Run:  out/minikube-darwin-arm64 -p functional-944000 version -o=json --components
functional_test.go:2270: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-944000 version -o=json --components: exit status 83 (41.94475ms)

-- stdout --
	* The control-plane node functional-944000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-944000"

-- /stdout --
functional_test.go:2272: error version: exit status 83
functional_test.go:2277: expected to see "buildctl" in the minikube version --components but got:
* The control-plane node functional-944000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-944000"
functional_test.go:2277: expected to see "commit" in the minikube version --components but got:
* The control-plane node functional-944000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-944000"
functional_test.go:2277: expected to see "containerd" in the minikube version --components but got:
* The control-plane node functional-944000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-944000"
functional_test.go:2277: expected to see "crictl" in the minikube version --components but got:
* The control-plane node functional-944000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-944000"
functional_test.go:2277: expected to see "crio" in the minikube version --components but got:
* The control-plane node functional-944000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-944000"
functional_test.go:2277: expected to see "ctr" in the minikube version --components but got:
* The control-plane node functional-944000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-944000"
functional_test.go:2277: expected to see "docker" in the minikube version --components but got:
* The control-plane node functional-944000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-944000"
functional_test.go:2277: expected to see "minikubeVersion" in the minikube version --components but got:
* The control-plane node functional-944000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-944000"
functional_test.go:2277: expected to see "podman" in the minikube version --components but got:
* The control-plane node functional-944000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-944000"
functional_test.go:2277: expected to see "crun" in the minikube version --components but got:
* The control-plane node functional-944000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-944000"
--- FAIL: TestFunctional/parallel/Version/components (0.04s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.04s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:261: (dbg) Run:  out/minikube-darwin-arm64 -p functional-944000 image ls --format short --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-944000 image ls --format short --alsologtostderr:

functional_test.go:269: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-944000 image ls --format short --alsologtostderr:
I0819 11:33:12.532411   18325 out.go:345] Setting OutFile to fd 1 ...
I0819 11:33:12.532582   18325 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0819 11:33:12.532585   18325 out.go:358] Setting ErrFile to fd 2...
I0819 11:33:12.532588   18325 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0819 11:33:12.532706   18325 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19423-17178/.minikube/bin
I0819 11:33:12.533120   18325 config.go:182] Loaded profile config "functional-944000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0819 11:33:12.533189   18325 config.go:182] Loaded profile config "functional-944000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
functional_test.go:275: expected registry.k8s.io/pause to be listed with minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageListShort (0.04s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.04s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:261: (dbg) Run:  out/minikube-darwin-arm64 -p functional-944000 image ls --format table --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-944000 image ls --format table --alsologtostderr:
|-------|-----|----------|------|
| Image | Tag | Image ID | Size |
|-------|-----|----------|------|
|-------|-----|----------|------|
functional_test.go:269: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-944000 image ls --format table --alsologtostderr:
I0819 11:33:12.752640   18337 out.go:345] Setting OutFile to fd 1 ...
I0819 11:33:12.752783   18337 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0819 11:33:12.752785   18337 out.go:358] Setting ErrFile to fd 2...
I0819 11:33:12.752787   18337 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0819 11:33:12.752911   18337 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19423-17178/.minikube/bin
I0819 11:33:12.753301   18337 config.go:182] Loaded profile config "functional-944000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0819 11:33:12.753361   18337 config.go:182] Loaded profile config "functional-944000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
functional_test.go:275: expected | registry.k8s.io/pause to be listed with minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageListTable (0.04s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.04s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:261: (dbg) Run:  out/minikube-darwin-arm64 -p functional-944000 image ls --format json --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-944000 image ls --format json --alsologtostderr:
[]
functional_test.go:269: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-944000 image ls --format json --alsologtostderr:
I0819 11:33:12.716966   18335 out.go:345] Setting OutFile to fd 1 ...
I0819 11:33:12.717085   18335 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0819 11:33:12.717088   18335 out.go:358] Setting ErrFile to fd 2...
I0819 11:33:12.717090   18335 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0819 11:33:12.717214   18335 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19423-17178/.minikube/bin
I0819 11:33:12.717607   18335 config.go:182] Loaded profile config "functional-944000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0819 11:33:12.717664   18335 config.go:182] Loaded profile config "functional-944000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
functional_test.go:275: expected ["registry.k8s.io/pause to be listed with minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageListJson (0.04s)

TestFunctional/parallel/ImageCommands/ImageListYaml (0.04s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:261: (dbg) Run:  out/minikube-darwin-arm64 -p functional-944000 image ls --format yaml --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-944000 image ls --format yaml --alsologtostderr:
[]

functional_test.go:269: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-944000 image ls --format yaml --alsologtostderr:
I0819 11:33:12.568879   18327 out.go:345] Setting OutFile to fd 1 ...
I0819 11:33:12.569016   18327 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0819 11:33:12.569019   18327 out.go:358] Setting ErrFile to fd 2...
I0819 11:33:12.569022   18327 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0819 11:33:12.569153   18327 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19423-17178/.minikube/bin
I0819 11:33:12.569593   18327 config.go:182] Loaded profile config "functional-944000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0819 11:33:12.569654   18327 config.go:182] Loaded profile config "functional-944000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
functional_test.go:275: expected - registry.k8s.io/pause to be listed with minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageListYaml (0.04s)

TestFunctional/parallel/ImageCommands/ImageBuild (0.11s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:308: (dbg) Run:  out/minikube-darwin-arm64 -p functional-944000 ssh pgrep buildkitd
functional_test.go:308: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-944000 ssh pgrep buildkitd: exit status 83 (39.493625ms)

-- stdout --
	* The control-plane node functional-944000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-944000"

-- /stdout --
functional_test.go:315: (dbg) Run:  out/minikube-darwin-arm64 -p functional-944000 image build -t localhost/my-image:functional-944000 testdata/build --alsologtostderr
functional_test.go:323: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-944000 image build -t localhost/my-image:functional-944000 testdata/build --alsologtostderr:
I0819 11:33:12.644344   18331 out.go:345] Setting OutFile to fd 1 ...
I0819 11:33:12.645005   18331 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0819 11:33:12.645012   18331 out.go:358] Setting ErrFile to fd 2...
I0819 11:33:12.645015   18331 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0819 11:33:12.645128   18331 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19423-17178/.minikube/bin
I0819 11:33:12.645543   18331 config.go:182] Loaded profile config "functional-944000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0819 11:33:12.646128   18331 config.go:182] Loaded profile config "functional-944000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0819 11:33:12.646369   18331 build_images.go:133] succeeded building to: 
I0819 11:33:12.646373   18331 build_images.go:134] failed building to: 
functional_test.go:451: (dbg) Run:  out/minikube-darwin-arm64 -p functional-944000 image ls
functional_test.go:446: expected "localhost/my-image:functional-944000" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageBuild (0.11s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (0.31s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:355: (dbg) Run:  out/minikube-darwin-arm64 -p functional-944000 image load --daemon kicbase/echo-server:functional-944000 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-darwin-arm64 -p functional-944000 image ls
functional_test.go:446: expected "kicbase/echo-server:functional-944000" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (0.31s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.28s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:365: (dbg) Run:  out/minikube-darwin-arm64 -p functional-944000 image load --daemon kicbase/echo-server:functional-944000 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-darwin-arm64 -p functional-944000 image ls
functional_test.go:446: expected "kicbase/echo-server:functional-944000" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.28s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.17s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:235: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:240: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-944000
functional_test.go:245: (dbg) Run:  out/minikube-darwin-arm64 -p functional-944000 image load --daemon kicbase/echo-server:functional-944000 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-darwin-arm64 -p functional-944000 image ls
functional_test.go:446: expected "kicbase/echo-server:functional-944000" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.17s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.04s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:380: (dbg) Run:  out/minikube-darwin-arm64 -p functional-944000 image save kicbase/echo-server:functional-944000 /Users/jenkins/workspace/echo-server-save.tar --alsologtostderr
functional_test.go:386: expected "/Users/jenkins/workspace/echo-server-save.tar" to exist after `image save`, but doesn't exist
--- FAIL: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.04s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.07s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:409: (dbg) Run:  out/minikube-darwin-arm64 -p functional-944000 image load /Users/jenkins/workspace/echo-server-save.tar --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-darwin-arm64 -p functional-944000 image ls
functional_test.go:446: expected "kicbase/echo-server:functional-944000" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.07s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (15.06s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:319: (dbg) Run:  dig +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A
functional_test_tunnel_test.go:319: (dbg) Non-zero exit: dig +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A: exit status 9 (15.030571583s)

-- stdout --
	
	; <<>> DiG 9.10.6 <<>> +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A
	; (1 server found)
	;; global options: +cmd
	;; connection timed out; no servers could be reached

-- /stdout --
functional_test_tunnel_test.go:322: failed to resolve DNS name: exit status 9
functional_test_tunnel_test.go:329: expected body to contain "ANSWER: 1", but got *"\n; <<>> DiG 9.10.6 <<>> +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A\n; (1 server found)\n;; global options: +cmd\n;; connection timed out; no servers could be reached\n"*
functional_test_tunnel_test.go:332: (dbg) Run:  scutil --dns
functional_test_tunnel_test.go:336: debug for DNS configuration:
DNS configuration

resolver #1
nameserver[0] : 207.254.72.253
nameserver[1] : 207.254.72.254
nameserver[2] : 8.8.8.8
flags    : Request A records
reach    : 0x00000002 (Reachable)

resolver #2
domain   : local
options  : mdns
timeout  : 5
flags    : Request A records
reach    : 0x00000000 (Not Reachable)
order    : 300000

resolver #3
domain   : 254.169.in-addr.arpa
options  : mdns
timeout  : 5
flags    : Request A records
reach    : 0x00000000 (Not Reachable)
order    : 300200

resolver #4
domain   : 8.e.f.ip6.arpa
options  : mdns
timeout  : 5
flags    : Request A records
reach    : 0x00000000 (Not Reachable)
order    : 300400

resolver #5
domain   : 9.e.f.ip6.arpa
options  : mdns
timeout  : 5
flags    : Request A records
reach    : 0x00000000 (Not Reachable)
order    : 300600

resolver #6
domain   : a.e.f.ip6.arpa
options  : mdns
timeout  : 5
flags    : Request A records
reach    : 0x00000000 (Not Reachable)
order    : 300800

resolver #7
domain   : b.e.f.ip6.arpa
options  : mdns
timeout  : 5
flags    : Request A records
reach    : 0x00000000 (Not Reachable)
order    : 301000

resolver #8
domain   : cluster.local
nameserver[0] : 10.96.0.10
flags    : Request A records
reach    : 0x00000002 (Reachable)
order    : 1

DNS configuration (for scoped queries)

resolver #1
nameserver[0] : 207.254.72.253
nameserver[1] : 207.254.72.254
nameserver[2] : 8.8.8.8
if_index : 14 (en0)
flags    : Scoped, Request A records
reach    : 0x00000002 (Reachable)
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (15.06s)
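
The dig probe has a direct Go equivalent; a sketch that pins lookups to the cluster DNS at 10.96.0.10 (the resolver the scutil dump shows scoped to cluster.local). Under the same tunnel-down conditions it should time out much as dig does:

package main

import (
	"context"
	"fmt"
	"net"
	"time"
)

func main() {
	r := &net.Resolver{
		PreferGo: true,
		// Route every query to the cluster DNS instead of the system resolvers.
		Dial: func(ctx context.Context, network, _ string) (net.Conn, error) {
			d := net.Dialer{Timeout: 5 * time.Second}
			return d.DialContext(ctx, network, "10.96.0.10:53")
		},
	}
	ctx, cancel := context.WithTimeout(context.Background(), 15*time.Second)
	defer cancel()
	addrs, err := r.LookupHost(ctx, "nginx-svc.default.svc.cluster.local.")
	if err != nil {
		fmt.Println("lookup failed:", err) // times out while the tunnel is down
		return
	}
	fmt.Println("resolved:", addrs)
}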

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (35.85s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:419: failed to hit nginx with DNS forwarded "http://nginx-svc.default.svc.cluster.local.": Temporary Error: Get "http://nginx-svc.default.svc.cluster.local.": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
functional_test_tunnel_test.go:426: expected body to contain "Welcome to nginx!", but got *""*
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (35.85s)

TestMultiControlPlane/serial/StartCluster (9.95s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-darwin-arm64 start -p ha-006000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=qemu2 
ha_test.go:101: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p ha-006000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=qemu2 : exit status 80 (9.883883959s)

-- stdout --
	* [ha-006000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19423
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19423-17178/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19423-17178/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "ha-006000" primary control-plane node in "ha-006000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "ha-006000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0819 11:35:14.347089   18383 out.go:345] Setting OutFile to fd 1 ...
	I0819 11:35:14.347263   18383 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 11:35:14.347273   18383 out.go:358] Setting ErrFile to fd 2...
	I0819 11:35:14.347276   18383 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 11:35:14.347538   18383 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19423-17178/.minikube/bin
	I0819 11:35:14.348835   18383 out.go:352] Setting JSON to false
	I0819 11:35:14.365400   18383 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":7481,"bootTime":1724085033,"procs":504,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0819 11:35:14.365472   18383 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0819 11:35:14.371459   18383 out.go:177] * [ha-006000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0819 11:35:14.379379   18383 out.go:177]   - MINIKUBE_LOCATION=19423
	I0819 11:35:14.379429   18383 notify.go:220] Checking for updates...
	I0819 11:35:14.385481   18383 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19423-17178/kubeconfig
	I0819 11:35:14.386742   18383 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0819 11:35:14.389468   18383 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0819 11:35:14.392461   18383 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19423-17178/.minikube
	I0819 11:35:14.395540   18383 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0819 11:35:14.398589   18383 driver.go:394] Setting default libvirt URI to qemu:///system
	I0819 11:35:14.402498   18383 out.go:177] * Using the qemu2 driver based on user configuration
	I0819 11:35:14.409392   18383 start.go:297] selected driver: qemu2
	I0819 11:35:14.409399   18383 start.go:901] validating driver "qemu2" against <nil>
	I0819 11:35:14.409404   18383 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0819 11:35:14.411520   18383 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0819 11:35:14.414529   18383 out.go:177] * Automatically selected the socket_vmnet network
	I0819 11:35:14.417599   18383 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0819 11:35:14.417656   18383 cni.go:84] Creating CNI manager for ""
	I0819 11:35:14.417662   18383 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0819 11:35:14.417666   18383 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0819 11:35:14.417692   18383 start.go:340] cluster config:
	{Name:ha-006000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-006000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 11:35:14.421244   18383 iso.go:125] acquiring lock: {Name:mk19d2f9dcacbbe3e95275f970a96b2d84f09461 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 11:35:14.430517   18383 out.go:177] * Starting "ha-006000" primary control-plane node in "ha-006000" cluster
	I0819 11:35:14.434368   18383 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0819 11:35:14.434381   18383 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19423-17178/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0819 11:35:14.434389   18383 cache.go:56] Caching tarball of preloaded images
	I0819 11:35:14.434443   18383 preload.go:172] Found /Users/jenkins/minikube-integration/19423-17178/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0819 11:35:14.434448   18383 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0819 11:35:14.434650   18383 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19423-17178/.minikube/profiles/ha-006000/config.json ...
	I0819 11:35:14.434662   18383 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19423-17178/.minikube/profiles/ha-006000/config.json: {Name:mk35994dbf774cf563dd8b23fd4c36f6c31430c6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 11:35:14.434989   18383 start.go:360] acquireMachinesLock for ha-006000: {Name:mk51682daf9d132f21e1aba1b32ed96e7f05425f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 11:35:14.435023   18383 start.go:364] duration metric: took 28µs to acquireMachinesLock for "ha-006000"
	I0819 11:35:14.435035   18383 start.go:93] Provisioning new machine with config: &{Name:ha-006000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-006000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0819 11:35:14.435062   18383 start.go:125] createHost starting for "" (driver="qemu2")
	I0819 11:35:14.441472   18383 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0819 11:35:14.458429   18383 start.go:159] libmachine.API.Create for "ha-006000" (driver="qemu2")
	I0819 11:35:14.458454   18383 client.go:168] LocalClient.Create starting
	I0819 11:35:14.458520   18383 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19423-17178/.minikube/certs/ca.pem
	I0819 11:35:14.458550   18383 main.go:141] libmachine: Decoding PEM data...
	I0819 11:35:14.458560   18383 main.go:141] libmachine: Parsing certificate...
	I0819 11:35:14.458598   18383 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19423-17178/.minikube/certs/cert.pem
	I0819 11:35:14.458620   18383 main.go:141] libmachine: Decoding PEM data...
	I0819 11:35:14.458628   18383 main.go:141] libmachine: Parsing certificate...
	I0819 11:35:14.459039   18383 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19423-17178/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19423-17178/.minikube/cache/iso/arm64/minikube-v1.33.1-1723740674-19452-arm64.iso...
	I0819 11:35:14.639550   18383 main.go:141] libmachine: Creating SSH key...
	I0819 11:35:14.756124   18383 main.go:141] libmachine: Creating Disk image...
	I0819 11:35:14.756129   18383 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0819 11:35:14.756327   18383 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/ha-006000/disk.qcow2.raw /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/ha-006000/disk.qcow2
	I0819 11:35:14.765600   18383 main.go:141] libmachine: STDOUT: 
	I0819 11:35:14.765620   18383 main.go:141] libmachine: STDERR: 
	I0819 11:35:14.765681   18383 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/ha-006000/disk.qcow2 +20000M
	I0819 11:35:14.773570   18383 main.go:141] libmachine: STDOUT: Image resized.
	
	I0819 11:35:14.773587   18383 main.go:141] libmachine: STDERR: 
	I0819 11:35:14.773604   18383 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/ha-006000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/ha-006000/disk.qcow2
	I0819 11:35:14.773607   18383 main.go:141] libmachine: Starting QEMU VM...
	I0819 11:35:14.773614   18383 qemu.go:418] Using hvf for hardware acceleration
	I0819 11:35:14.773642   18383 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/ha-006000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19423-17178/.minikube/machines/ha-006000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/ha-006000/qemu.pid -device virtio-net-pci,netdev=net0,mac=42:2f:1b:17:db:b1 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/ha-006000/disk.qcow2
	I0819 11:35:14.775227   18383 main.go:141] libmachine: STDOUT: 
	I0819 11:35:14.775242   18383 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0819 11:35:14.775261   18383 client.go:171] duration metric: took 316.803708ms to LocalClient.Create
	I0819 11:35:16.777471   18383 start.go:128] duration metric: took 2.342398084s to createHost
	I0819 11:35:16.777545   18383 start.go:83] releasing machines lock for "ha-006000", held for 2.34252325s
	W0819 11:35:16.777613   18383 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0819 11:35:16.790913   18383 out.go:177] * Deleting "ha-006000" in qemu2 ...
	W0819 11:35:16.818509   18383 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0819 11:35:16.818537   18383 start.go:729] Will try again in 5 seconds ...
	I0819 11:35:21.820766   18383 start.go:360] acquireMachinesLock for ha-006000: {Name:mk51682daf9d132f21e1aba1b32ed96e7f05425f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 11:35:21.821228   18383 start.go:364] duration metric: took 356.458µs to acquireMachinesLock for "ha-006000"
	I0819 11:35:21.821369   18383 start.go:93] Provisioning new machine with config: &{Name:ha-006000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-006000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0819 11:35:21.821659   18383 start.go:125] createHost starting for "" (driver="qemu2")
	I0819 11:35:21.831343   18383 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0819 11:35:21.882687   18383 start.go:159] libmachine.API.Create for "ha-006000" (driver="qemu2")
	I0819 11:35:21.882737   18383 client.go:168] LocalClient.Create starting
	I0819 11:35:21.882855   18383 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19423-17178/.minikube/certs/ca.pem
	I0819 11:35:21.882915   18383 main.go:141] libmachine: Decoding PEM data...
	I0819 11:35:21.882932   18383 main.go:141] libmachine: Parsing certificate...
	I0819 11:35:21.883007   18383 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19423-17178/.minikube/certs/cert.pem
	I0819 11:35:21.883052   18383 main.go:141] libmachine: Decoding PEM data...
	I0819 11:35:21.883066   18383 main.go:141] libmachine: Parsing certificate...
	I0819 11:35:21.883769   18383 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19423-17178/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19423-17178/.minikube/cache/iso/arm64/minikube-v1.33.1-1723740674-19452-arm64.iso...
	I0819 11:35:22.046563   18383 main.go:141] libmachine: Creating SSH key...
	I0819 11:35:22.137123   18383 main.go:141] libmachine: Creating Disk image...
	I0819 11:35:22.137130   18383 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0819 11:35:22.137318   18383 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/ha-006000/disk.qcow2.raw /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/ha-006000/disk.qcow2
	I0819 11:35:22.146410   18383 main.go:141] libmachine: STDOUT: 
	I0819 11:35:22.146428   18383 main.go:141] libmachine: STDERR: 
	I0819 11:35:22.146486   18383 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/ha-006000/disk.qcow2 +20000M
	I0819 11:35:22.154540   18383 main.go:141] libmachine: STDOUT: Image resized.
	
	I0819 11:35:22.154554   18383 main.go:141] libmachine: STDERR: 
	I0819 11:35:22.154563   18383 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/ha-006000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/ha-006000/disk.qcow2
	I0819 11:35:22.154569   18383 main.go:141] libmachine: Starting QEMU VM...
	I0819 11:35:22.154579   18383 qemu.go:418] Using hvf for hardware acceleration
	I0819 11:35:22.154613   18383 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/ha-006000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19423-17178/.minikube/machines/ha-006000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/ha-006000/qemu.pid -device virtio-net-pci,netdev=net0,mac=3a:22:df:dd:14:1d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/ha-006000/disk.qcow2
	I0819 11:35:22.156236   18383 main.go:141] libmachine: STDOUT: 
	I0819 11:35:22.156256   18383 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0819 11:35:22.156267   18383 client.go:171] duration metric: took 273.525333ms to LocalClient.Create
	I0819 11:35:24.158435   18383 start.go:128] duration metric: took 2.336731s to createHost
	I0819 11:35:24.158496   18383 start.go:83] releasing machines lock for "ha-006000", held for 2.337253166s
	W0819 11:35:24.158832   18383 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p ha-006000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p ha-006000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0819 11:35:24.169400   18383 out.go:201] 
	W0819 11:35:24.174524   18383 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0819 11:35:24.174568   18383 out.go:270] * 
	* 
	W0819 11:35:24.177085   18383 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0819 11:35:24.187375   18383 out.go:201] 

                                                
                                                
** /stderr **
ha_test.go:103: failed to fresh-start ha (multi-control plane) cluster. args "out/minikube-darwin-arm64 start -p ha-006000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=qemu2 " : exit status 80
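
The failure above is environmental rather than a Kubernetes problem: nothing is listening on /var/run/socket_vmnet, so every qemu-system-aarch64 launch through socket_vmnet_client is refused before a VM ever boots. A minimal Go sketch, not part of the test suite, that reproduces the same connection attempt (the socket path is the SocketVMnetPath value from the config dumped above):

	// probe.go — dial the socket_vmnet unix socket the same way the
	// failing socket_vmnet_client invocation does.
	package main

	import (
		"fmt"
		"net"
	)

	func main() {
		conn, err := net.Dial("unix", "/var/run/socket_vmnet")
		if err != nil {
			// Expected on this host:
			// dial unix /var/run/socket_vmnet: connect: connection refused
			fmt.Println("socket_vmnet not reachable:", err)
			return
		}
		defer conn.Close()
		fmt.Println("socket_vmnet is listening")
	}

Every failure in the serial tests that follow is downstream of this refused connection.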
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-006000 -n ha-006000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-006000 -n ha-006000: exit status 7 (67.62525ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-006000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/StartCluster (9.95s)

                                                
                                    
TestMultiControlPlane/serial/DeployApp (66.36s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-006000 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:128: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-006000 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml: exit status 1 (61.202875ms)

                                                
                                                
** stderr ** 
	error: cluster "ha-006000" does not exist

                                                
                                                
** /stderr **
ha_test.go:130: failed to create busybox deployment to ha (multi-control plane) cluster
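
Each kubectl step in this test fails for the same reason: StartCluster never provisioned a host, so no kubeconfig entry for "ha-006000" was ever written. A crude, dependency-free sketch (a hypothetical check, not from the suite) that shows the entry is simply absent:

	package main

	import (
		"fmt"
		"os"
		"path/filepath"
		"strings"
	)

	func main() {
		home, _ := os.UserHomeDir()
		data, err := os.ReadFile(filepath.Join(home, ".kube", "config"))
		if err != nil {
			fmt.Println("no kubeconfig:", err)
			return
		}
		// A plain string scan, not a YAML parse — enough to show absence.
		fmt.Println("mentions ha-006000:", strings.Contains(string(data), "ha-006000"))
	}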
ha_test.go:133: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-006000 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-006000 -- rollout status deployment/busybox: exit status 1 (57.838708ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-006000"

                                                
                                                
** /stderr **
ha_test.go:135: failed to deploy busybox to ha (multi-control plane) cluster
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-006000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-006000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (58.153666ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-006000"

                                                
                                                
** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-006000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-006000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.994208ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-006000"

                                                
                                                
** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-006000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-006000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.629042ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-006000"

                                                
                                                
** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-006000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-006000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (106.450417ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-006000"

                                                
                                                
** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-006000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-006000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (104.289042ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-006000"

                                                
                                                
** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-006000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-006000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.509916ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-006000"

                                                
                                                
** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-006000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-006000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (106.633084ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-006000"

                                                
                                                
** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-006000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-006000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (104.945167ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-006000"

                                                
                                                
** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-006000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-006000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (106.516458ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-006000"

                                                
                                                
** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-006000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-006000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (106.288791ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-006000"

                                                
                                                
** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:159: failed to resolve pod IPs: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:163: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-006000 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:163: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-006000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (58.185ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-006000"

                                                
                                                
** /stderr **
ha_test.go:165: failed get Pod names
ha_test.go:171: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-006000 -- exec  -- nslookup kubernetes.io
ha_test.go:171: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-006000 -- exec  -- nslookup kubernetes.io: exit status 1 (57.82625ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-006000"

                                                
                                                
** /stderr **
ha_test.go:173: Pod  could not resolve 'kubernetes.io': exit status 1
ha_test.go:181: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-006000 -- exec  -- nslookup kubernetes.default
ha_test.go:181: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-006000 -- exec  -- nslookup kubernetes.default: exit status 1 (57.345459ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-006000"

                                                
                                                
** /stderr **
ha_test.go:183: Pod  could not resolve 'kubernetes.default': exit status 1
ha_test.go:189: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-006000 -- exec  -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-006000 -- exec  -- nslookup kubernetes.default.svc.cluster.local: exit status 1 (57.388084ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-006000"

                                                
                                                
** /stderr **
ha_test.go:191: Pod  could not resolve local service (kubernetes.default.svc.cluster.local): exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-006000 -n ha-006000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-006000 -n ha-006000: exit status 7 (31.693917ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-006000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DeployApp (66.36s)

                                                
                                    
TestMultiControlPlane/serial/PingHostFromPods (0.09s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-006000 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:199: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-006000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (56.836458ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-006000"

                                                
                                                
** /stderr **
ha_test.go:201: failed to get Pod names: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-006000 -n ha-006000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-006000 -n ha-006000: exit status 7 (30.846416ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-006000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/PingHostFromPods (0.09s)

                                                
                                    
TestMultiControlPlane/serial/AddWorkerNode (0.07s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 node add -p ha-006000 -v=7 --alsologtostderr
ha_test.go:228: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p ha-006000 -v=7 --alsologtostderr: exit status 83 (42.911416ms)

                                                
                                                
-- stdout --
	* The control-plane node ha-006000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p ha-006000"

                                                
                                                
-- /stdout --
** stderr ** 
	I0819 11:36:30.750589   18486 out.go:345] Setting OutFile to fd 1 ...
	I0819 11:36:30.751183   18486 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 11:36:30.751187   18486 out.go:358] Setting ErrFile to fd 2...
	I0819 11:36:30.751189   18486 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 11:36:30.751369   18486 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19423-17178/.minikube/bin
	I0819 11:36:30.751588   18486 mustload.go:65] Loading cluster: ha-006000
	I0819 11:36:30.751800   18486 config.go:182] Loaded profile config "ha-006000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0819 11:36:30.755946   18486 out.go:177] * The control-plane node ha-006000 host is not running: state=Stopped
	I0819 11:36:30.759863   18486 out.go:177]   To start a cluster, run: "minikube start -p ha-006000"

                                                
                                                
** /stderr **
ha_test.go:230: failed to add worker node to current ha (multi-control plane) cluster. args "out/minikube-darwin-arm64 node add -p ha-006000 -v=7 --alsologtostderr" : exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-006000 -n ha-006000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-006000 -n ha-006000: exit status 7 (29.808459ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-006000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/AddWorkerNode (0.07s)

                                                
                                    
TestMultiControlPlane/serial/NodeLabels (0.06s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-006000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
ha_test.go:255: (dbg) Non-zero exit: kubectl --context ha-006000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]": exit status 1 (26.296208ms)

                                                
                                                
** stderr ** 
	Error in configuration: context was not found for specified context: ha-006000

                                                
                                                
** /stderr **
ha_test.go:257: failed to 'kubectl get nodes' with args "kubectl --context ha-006000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": exit status 1
ha_test.go:264: failed to decode json from label list: args "kubectl --context ha-006000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": unexpected end of JSON input
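
The "unexpected end of JSON input" on the line above is a follow-on error, not a second bug: kubectl wrote nothing to stdout because the context lookup failed, and encoding/json returns exactly that message for empty input. A minimal sketch:

	package main

	import (
		"encoding/json"
		"fmt"
	)

	func main() {
		var labels []map[string]string
		err := json.Unmarshal([]byte(""), &labels) // kubectl produced no output
		fmt.Println(err)                           // unexpected end of JSON input
	}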
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-006000 -n ha-006000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-006000 -n ha-006000: exit status 7 (30.824167ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-006000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/NodeLabels (0.06s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterClusterStart (0.08s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:304: expected profile "ha-006000" in json of 'profile list' to include 4 nodes but have 1 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-006000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-006000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.0\",\"ClusterName\":\"ha-006000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.0\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
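
The check counts entries in the profile's Nodes array; since no VM was ever created, the profile still holds only the single placeholder node from the initial config. A condensed sketch of that count, with the sample JSON trimmed from the dump above to the relevant fields:

	package main

	import (
		"encoding/json"
		"fmt"
	)

	type profileList struct {
		Valid []struct {
			Name   string
			Config struct {
				Nodes []struct{ ControlPlane, Worker bool }
			}
		} `json:"valid"`
	}

	func main() {
		// Trimmed from the `profile list --output json` dump above.
		raw := []byte(`{"valid":[{"Name":"ha-006000","Config":{"Nodes":[{"ControlPlane":true,"Worker":true}]}}]}`)
		var pl profileList
		if err := json.Unmarshal(raw, &pl); err != nil {
			panic(err)
		}
		fmt.Println(len(pl.Valid[0].Config.Nodes)) // 1, not the expected 4
	}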
ha_test.go:307: expected profile "ha-006000" in json of 'profile list' to have "HAppy" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-006000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-006000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.0\",\"ClusterName\":\"ha-006000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.0\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-006000 -n ha-006000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-006000 -n ha-006000: exit status 7 (30.608ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-006000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.08s)

                                                
                                    
TestMultiControlPlane/serial/CopyFile (0.06s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-darwin-arm64 -p ha-006000 status --output json -v=7 --alsologtostderr
ha_test.go:326: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-006000 status --output json -v=7 --alsologtostderr: exit status 7 (30.329417ms)

                                                
                                                
-- stdout --
	{"Name":"ha-006000","Host":"Stopped","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Stopped","Worker":false}

                                                
                                                
-- /stdout --
** stderr ** 
	I0819 11:36:30.958496   18498 out.go:345] Setting OutFile to fd 1 ...
	I0819 11:36:30.958633   18498 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 11:36:30.958636   18498 out.go:358] Setting ErrFile to fd 2...
	I0819 11:36:30.958639   18498 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 11:36:30.958757   18498 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19423-17178/.minikube/bin
	I0819 11:36:30.958881   18498 out.go:352] Setting JSON to true
	I0819 11:36:30.958891   18498 mustload.go:65] Loading cluster: ha-006000
	I0819 11:36:30.958940   18498 notify.go:220] Checking for updates...
	I0819 11:36:30.959079   18498 config.go:182] Loaded profile config "ha-006000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0819 11:36:30.959086   18498 status.go:255] checking status of ha-006000 ...
	I0819 11:36:30.959290   18498 status.go:330] ha-006000 host status = "Stopped" (err=<nil>)
	I0819 11:36:30.959294   18498 status.go:343] host is not running, skipping remaining checks
	I0819 11:36:30.959296   18498 status.go:257] ha-006000 status: &{Name:ha-006000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:333: failed to decode json from status: args "out/minikube-darwin-arm64 -p ha-006000 status --output json -v=7 --alsologtostderr": json: cannot unmarshal object into Go value of type []cmd.Status
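
The decode failure is a shape mismatch: with a single node, "status --output json" prints one JSON object (see the stdout above), while the test unmarshals into a slice of cmd.Status. A minimal sketch reproducing that error class:

	package main

	import (
		"encoding/json"
		"fmt"
	)

	type status struct{ Name, Host string }

	func main() {
		one := []byte(`{"Name":"ha-006000","Host":"Stopped"}`) // an object, not an array
		var many []status
		fmt.Println(json.Unmarshal(one, &many))
		// json: cannot unmarshal object into Go value of type []main.status
	}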
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-006000 -n ha-006000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-006000 -n ha-006000: exit status 7 (30.738583ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-006000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/CopyFile (0.06s)

                                                
                                    
TestMultiControlPlane/serial/StopSecondaryNode (0.11s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-darwin-arm64 -p ha-006000 node stop m02 -v=7 --alsologtostderr
ha_test.go:363: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-006000 node stop m02 -v=7 --alsologtostderr: exit status 85 (47.3725ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0819 11:36:31.020619   18502 out.go:345] Setting OutFile to fd 1 ...
	I0819 11:36:31.021238   18502 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 11:36:31.021246   18502 out.go:358] Setting ErrFile to fd 2...
	I0819 11:36:31.021249   18502 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 11:36:31.021422   18502 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19423-17178/.minikube/bin
	I0819 11:36:31.021642   18502 mustload.go:65] Loading cluster: ha-006000
	I0819 11:36:31.021854   18502 config.go:182] Loaded profile config "ha-006000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0819 11:36:31.025006   18502 out.go:201] 
	W0819 11:36:31.029037   18502 out.go:270] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
	W0819 11:36:31.029041   18502 out.go:270] * 
	* 
	W0819 11:36:31.031465   18502 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_8ce24bb09be8aab84076d51946735f62cbf80299_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_8ce24bb09be8aab84076d51946735f62cbf80299_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0819 11:36:31.035053   18502 out.go:201] 

                                                
                                                
** /stderr **
ha_test.go:365: secondary control-plane node stop returned an error. args "out/minikube-darwin-arm64 -p ha-006000 node stop m02 -v=7 --alsologtostderr": exit status 85
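
Exit status 85 (GUEST_NODE_RETRIEVE, per the log above) follows from the same root cause: the saved profile contains one unnamed node, so there is no "m02" to stop. A sketch that reads the node list from the profile's config.json (the full path appears in the log earlier; the path below assumes a default MINIKUBE_HOME):

	package main

	import (
		"encoding/json"
		"fmt"
		"os"
	)

	func main() {
		data, err := os.ReadFile(os.ExpandEnv("$HOME/.minikube/profiles/ha-006000/config.json"))
		if err != nil {
			fmt.Println(err)
			return
		}
		var cfg struct {
			Nodes []struct{ Name string }
		}
		if err := json.Unmarshal(data, &cfg); err != nil {
			fmt.Println(err)
			return
		}
		fmt.Println(len(cfg.Nodes), "node(s) in profile") // 1, with an empty Name
	}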
ha_test.go:369: (dbg) Run:  out/minikube-darwin-arm64 -p ha-006000 status -v=7 --alsologtostderr
ha_test.go:369: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-006000 status -v=7 --alsologtostderr: exit status 7 (30.553333ms)

                                                
                                                
-- stdout --
	ha-006000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0819 11:36:31.067889   18504 out.go:345] Setting OutFile to fd 1 ...
	I0819 11:36:31.068042   18504 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 11:36:31.068045   18504 out.go:358] Setting ErrFile to fd 2...
	I0819 11:36:31.068048   18504 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 11:36:31.068175   18504 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19423-17178/.minikube/bin
	I0819 11:36:31.068296   18504 out.go:352] Setting JSON to false
	I0819 11:36:31.068307   18504 mustload.go:65] Loading cluster: ha-006000
	I0819 11:36:31.068367   18504 notify.go:220] Checking for updates...
	I0819 11:36:31.068509   18504 config.go:182] Loaded profile config "ha-006000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0819 11:36:31.068517   18504 status.go:255] checking status of ha-006000 ...
	I0819 11:36:31.068740   18504 status.go:330] ha-006000 host status = "Stopped" (err=<nil>)
	I0819 11:36:31.068743   18504 status.go:343] host is not running, skipping remaining checks
	I0819 11:36:31.068746   18504 status.go:257] ha-006000 status: &{Name:ha-006000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:375: status says not all three control-plane nodes are present: args "out/minikube-darwin-arm64 -p ha-006000 status -v=7 --alsologtostderr": ha-006000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
ha_test.go:378: status says not three hosts are running: args "out/minikube-darwin-arm64 -p ha-006000 status -v=7 --alsologtostderr": ha-006000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
ha_test.go:381: status says not three kubelets are running: args "out/minikube-darwin-arm64 -p ha-006000 status -v=7 --alsologtostderr": ha-006000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
ha_test.go:384: status says not two apiservers are running: args "out/minikube-darwin-arm64 -p ha-006000 status -v=7 --alsologtostderr": ha-006000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-006000 -n ha-006000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-006000 -n ha-006000: exit status 7 (30.836208ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-006000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/StopSecondaryNode (0.11s)

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.08s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:413: expected profile "ha-006000" in json of 'profile list' to have "Degraded" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-006000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-006000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\"
:1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.0\",\"ClusterName\":\"ha-006000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.0
\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSo
ck\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-006000 -n ha-006000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-006000 -n ha-006000: exit status 7 (30.444667ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-006000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.08s)

TestMultiControlPlane/serial/RestartSecondaryNode (52.59s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-darwin-arm64 -p ha-006000 node start m02 -v=7 --alsologtostderr
ha_test.go:420: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-006000 node start m02 -v=7 --alsologtostderr: exit status 85 (49.17675ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I0819 11:36:31.208579   18513 out.go:345] Setting OutFile to fd 1 ...
	I0819 11:36:31.208956   18513 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 11:36:31.208963   18513 out.go:358] Setting ErrFile to fd 2...
	I0819 11:36:31.208966   18513 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 11:36:31.209101   18513 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19423-17178/.minikube/bin
	I0819 11:36:31.209315   18513 mustload.go:65] Loading cluster: ha-006000
	I0819 11:36:31.209502   18513 config.go:182] Loaded profile config "ha-006000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0819 11:36:31.214091   18513 out.go:201] 
	W0819 11:36:31.217991   18513 out.go:270] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
	W0819 11:36:31.217996   18513 out.go:270] * 
	* 
	W0819 11:36:31.220232   18513 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0819 11:36:31.223969   18513 out.go:201] 

** /stderr **
ha_test.go:422: I0819 11:36:31.208579   18513 out.go:345] Setting OutFile to fd 1 ...
I0819 11:36:31.208956   18513 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0819 11:36:31.208963   18513 out.go:358] Setting ErrFile to fd 2...
I0819 11:36:31.208966   18513 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0819 11:36:31.209101   18513 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19423-17178/.minikube/bin
I0819 11:36:31.209315   18513 mustload.go:65] Loading cluster: ha-006000
I0819 11:36:31.209502   18513 config.go:182] Loaded profile config "ha-006000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0819 11:36:31.214091   18513 out.go:201] 
W0819 11:36:31.217991   18513 out.go:270] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
W0819 11:36:31.217996   18513 out.go:270] * 
* 
W0819 11:36:31.220232   18513 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
I0819 11:36:31.223969   18513 out.go:201] 

ha_test.go:423: secondary control-plane node start returned an error. args "out/minikube-darwin-arm64 -p ha-006000 node start m02 -v=7 --alsologtostderr": exit status 85
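The exit status 85 stems from GUEST_NODE_RETRIEVE: this profile never gained a second control-plane node, so there is no m02 to restart. The node inventory can be confirmed with commands already exercised in this run (jq in the second line is an assumption of this sketch, not something the report shows installed):

	$ out/minikube-darwin-arm64 node list -p ha-006000
	# Prints only the primary node for this profile; m02/m03 were never created.
	$ out/minikube-darwin-arm64 profile list --output json | jq '.valid[].Config.Nodes'
	# The Nodes array holds a single control-plane entry, matching the error.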
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-006000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-006000 status -v=7 --alsologtostderr: exit status 7 (31.118667ms)

-- stdout --
	ha-006000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0819 11:36:31.258441   18515 out.go:345] Setting OutFile to fd 1 ...
	I0819 11:36:31.258585   18515 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 11:36:31.258588   18515 out.go:358] Setting ErrFile to fd 2...
	I0819 11:36:31.258590   18515 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 11:36:31.258719   18515 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19423-17178/.minikube/bin
	I0819 11:36:31.258850   18515 out.go:352] Setting JSON to false
	I0819 11:36:31.258863   18515 mustload.go:65] Loading cluster: ha-006000
	I0819 11:36:31.258919   18515 notify.go:220] Checking for updates...
	I0819 11:36:31.259076   18515 config.go:182] Loaded profile config "ha-006000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0819 11:36:31.259083   18515 status.go:255] checking status of ha-006000 ...
	I0819 11:36:31.259310   18515 status.go:330] ha-006000 host status = "Stopped" (err=<nil>)
	I0819 11:36:31.259314   18515 status.go:343] host is not running, skipping remaining checks
	I0819 11:36:31.259316   18515 status.go:257] ha-006000 status: &{Name:ha-006000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-006000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-006000 status -v=7 --alsologtostderr: exit status 7 (76.055292ms)

-- stdout --
	ha-006000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0819 11:36:32.159970   18517 out.go:345] Setting OutFile to fd 1 ...
	I0819 11:36:32.160155   18517 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 11:36:32.160160   18517 out.go:358] Setting ErrFile to fd 2...
	I0819 11:36:32.160163   18517 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 11:36:32.160359   18517 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19423-17178/.minikube/bin
	I0819 11:36:32.160530   18517 out.go:352] Setting JSON to false
	I0819 11:36:32.160544   18517 mustload.go:65] Loading cluster: ha-006000
	I0819 11:36:32.160589   18517 notify.go:220] Checking for updates...
	I0819 11:36:32.160822   18517 config.go:182] Loaded profile config "ha-006000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0819 11:36:32.160836   18517 status.go:255] checking status of ha-006000 ...
	I0819 11:36:32.161152   18517 status.go:330] ha-006000 host status = "Stopped" (err=<nil>)
	I0819 11:36:32.161157   18517 status.go:343] host is not running, skipping remaining checks
	I0819 11:36:32.161160   18517 status.go:257] ha-006000 status: &{Name:ha-006000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-006000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-006000 status -v=7 --alsologtostderr: exit status 7 (73.965625ms)

-- stdout --
	ha-006000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0819 11:36:34.252520   18521 out.go:345] Setting OutFile to fd 1 ...
	I0819 11:36:34.252727   18521 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 11:36:34.252732   18521 out.go:358] Setting ErrFile to fd 2...
	I0819 11:36:34.252735   18521 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 11:36:34.252914   18521 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19423-17178/.minikube/bin
	I0819 11:36:34.253085   18521 out.go:352] Setting JSON to false
	I0819 11:36:34.253098   18521 mustload.go:65] Loading cluster: ha-006000
	I0819 11:36:34.253144   18521 notify.go:220] Checking for updates...
	I0819 11:36:34.253377   18521 config.go:182] Loaded profile config "ha-006000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0819 11:36:34.253386   18521 status.go:255] checking status of ha-006000 ...
	I0819 11:36:34.253660   18521 status.go:330] ha-006000 host status = "Stopped" (err=<nil>)
	I0819 11:36:34.253665   18521 status.go:343] host is not running, skipping remaining checks
	I0819 11:36:34.253667   18521 status.go:257] ha-006000 status: &{Name:ha-006000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-006000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-006000 status -v=7 --alsologtostderr: exit status 7 (72.993625ms)

-- stdout --
	ha-006000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0819 11:36:36.096580   18523 out.go:345] Setting OutFile to fd 1 ...
	I0819 11:36:36.096832   18523 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 11:36:36.096837   18523 out.go:358] Setting ErrFile to fd 2...
	I0819 11:36:36.096840   18523 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 11:36:36.097023   18523 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19423-17178/.minikube/bin
	I0819 11:36:36.097183   18523 out.go:352] Setting JSON to false
	I0819 11:36:36.097200   18523 mustload.go:65] Loading cluster: ha-006000
	I0819 11:36:36.097237   18523 notify.go:220] Checking for updates...
	I0819 11:36:36.097477   18523 config.go:182] Loaded profile config "ha-006000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0819 11:36:36.097486   18523 status.go:255] checking status of ha-006000 ...
	I0819 11:36:36.097770   18523 status.go:330] ha-006000 host status = "Stopped" (err=<nil>)
	I0819 11:36:36.097775   18523 status.go:343] host is not running, skipping remaining checks
	I0819 11:36:36.097777   18523 status.go:257] ha-006000 status: &{Name:ha-006000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-006000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-006000 status -v=7 --alsologtostderr: exit status 7 (76.008375ms)

-- stdout --
	ha-006000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0819 11:36:40.382562   18527 out.go:345] Setting OutFile to fd 1 ...
	I0819 11:36:40.382753   18527 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 11:36:40.382758   18527 out.go:358] Setting ErrFile to fd 2...
	I0819 11:36:40.382760   18527 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 11:36:40.382916   18527 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19423-17178/.minikube/bin
	I0819 11:36:40.383071   18527 out.go:352] Setting JSON to false
	I0819 11:36:40.383083   18527 mustload.go:65] Loading cluster: ha-006000
	I0819 11:36:40.383127   18527 notify.go:220] Checking for updates...
	I0819 11:36:40.383369   18527 config.go:182] Loaded profile config "ha-006000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0819 11:36:40.383378   18527 status.go:255] checking status of ha-006000 ...
	I0819 11:36:40.383658   18527 status.go:330] ha-006000 host status = "Stopped" (err=<nil>)
	I0819 11:36:40.383663   18527 status.go:343] host is not running, skipping remaining checks
	I0819 11:36:40.383666   18527 status.go:257] ha-006000 status: &{Name:ha-006000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-006000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-006000 status -v=7 --alsologtostderr: exit status 7 (73.638791ms)

-- stdout --
	ha-006000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0819 11:36:43.702491   18529 out.go:345] Setting OutFile to fd 1 ...
	I0819 11:36:43.702700   18529 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 11:36:43.702704   18529 out.go:358] Setting ErrFile to fd 2...
	I0819 11:36:43.702708   18529 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 11:36:43.702876   18529 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19423-17178/.minikube/bin
	I0819 11:36:43.703034   18529 out.go:352] Setting JSON to false
	I0819 11:36:43.703047   18529 mustload.go:65] Loading cluster: ha-006000
	I0819 11:36:43.703100   18529 notify.go:220] Checking for updates...
	I0819 11:36:43.703314   18529 config.go:182] Loaded profile config "ha-006000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0819 11:36:43.703328   18529 status.go:255] checking status of ha-006000 ...
	I0819 11:36:43.703627   18529 status.go:330] ha-006000 host status = "Stopped" (err=<nil>)
	I0819 11:36:43.703632   18529 status.go:343] host is not running, skipping remaining checks
	I0819 11:36:43.703635   18529 status.go:257] ha-006000 status: &{Name:ha-006000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-006000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-006000 status -v=7 --alsologtostderr: exit status 7 (72.788958ms)

-- stdout --
	ha-006000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0819 11:36:54.940539   18531 out.go:345] Setting OutFile to fd 1 ...
	I0819 11:36:54.940727   18531 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 11:36:54.940732   18531 out.go:358] Setting ErrFile to fd 2...
	I0819 11:36:54.940735   18531 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 11:36:54.940897   18531 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19423-17178/.minikube/bin
	I0819 11:36:54.941039   18531 out.go:352] Setting JSON to false
	I0819 11:36:54.941052   18531 mustload.go:65] Loading cluster: ha-006000
	I0819 11:36:54.941094   18531 notify.go:220] Checking for updates...
	I0819 11:36:54.941316   18531 config.go:182] Loaded profile config "ha-006000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0819 11:36:54.941325   18531 status.go:255] checking status of ha-006000 ...
	I0819 11:36:54.941608   18531 status.go:330] ha-006000 host status = "Stopped" (err=<nil>)
	I0819 11:36:54.941613   18531 status.go:343] host is not running, skipping remaining checks
	I0819 11:36:54.941616   18531 status.go:257] ha-006000 status: &{Name:ha-006000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-006000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-006000 status -v=7 --alsologtostderr: exit status 7 (72.30075ms)

-- stdout --
	ha-006000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0819 11:37:06.761366   18533 out.go:345] Setting OutFile to fd 1 ...
	I0819 11:37:06.761558   18533 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 11:37:06.761562   18533 out.go:358] Setting ErrFile to fd 2...
	I0819 11:37:06.761565   18533 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 11:37:06.761738   18533 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19423-17178/.minikube/bin
	I0819 11:37:06.761879   18533 out.go:352] Setting JSON to false
	I0819 11:37:06.761891   18533 mustload.go:65] Loading cluster: ha-006000
	I0819 11:37:06.761928   18533 notify.go:220] Checking for updates...
	I0819 11:37:06.762143   18533 config.go:182] Loaded profile config "ha-006000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0819 11:37:06.762157   18533 status.go:255] checking status of ha-006000 ...
	I0819 11:37:06.762442   18533 status.go:330] ha-006000 host status = "Stopped" (err=<nil>)
	I0819 11:37:06.762446   18533 status.go:343] host is not running, skipping remaining checks
	I0819 11:37:06.762449   18533 status.go:257] ha-006000 status: &{Name:ha-006000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-006000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-006000 status -v=7 --alsologtostderr: exit status 7 (70.608417ms)

-- stdout --
	ha-006000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0819 11:37:23.737584   18540 out.go:345] Setting OutFile to fd 1 ...
	I0819 11:37:23.737784   18540 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 11:37:23.737789   18540 out.go:358] Setting ErrFile to fd 2...
	I0819 11:37:23.737792   18540 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 11:37:23.737957   18540 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19423-17178/.minikube/bin
	I0819 11:37:23.738129   18540 out.go:352] Setting JSON to false
	I0819 11:37:23.738141   18540 mustload.go:65] Loading cluster: ha-006000
	I0819 11:37:23.738183   18540 notify.go:220] Checking for updates...
	I0819 11:37:23.738429   18540 config.go:182] Loaded profile config "ha-006000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0819 11:37:23.738438   18540 status.go:255] checking status of ha-006000 ...
	I0819 11:37:23.738739   18540 status.go:330] ha-006000 host status = "Stopped" (err=<nil>)
	I0819 11:37:23.738744   18540 status.go:343] host is not running, skipping remaining checks
	I0819 11:37:23.738747   18540 status.go:257] ha-006000 status: &{Name:ha-006000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:432: failed to run minikube status. args "out/minikube-darwin-arm64 -p ha-006000 status -v=7 --alsologtostderr" : exit status 7
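minikube composes the status exit code as a bitmask (a reading of minikube's status command, not something this log states: roughly, the host, kubelet, and apiserver each contribute one bit when not running), so the persistent exit status 7 is consistent with all three components being down. A minimal sketch of the poll the test effectively performs:

	$ for i in 1 2 3; do
	>   out/minikube-darwin-arm64 -p ha-006000 status -v=7 --alsologtostderr
	>   echo "exit=$?"   # stays 7 while host, kubelet, and apiserver are stopped
	>   sleep 5          # the test backs off between polls; 5s here is illustrative
	> done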
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-006000 -n ha-006000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-006000 -n ha-006000: exit status 7 (34.190917ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-006000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/RestartSecondaryNode (52.59s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.08s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:304: expected profile "ha-006000" in json of 'profile list' to include 4 nodes but have 1 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-006000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-006000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPo
rt\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.0\",\"ClusterName\":\"ha-006000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.0\",\"ContainerRu
ntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHA
gentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
ha_test.go:307: expected profile "ha-006000" in json of 'profile list' to have "HAppy" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-006000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-006000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,
\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.0\",\"ClusterName\":\"ha-006000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.0\",
\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\
":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-006000 -n ha-006000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-006000 -n ha-006000: exit status 7 (29.862167ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-006000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.08s)

TestMultiControlPlane/serial/RestartClusterKeepsNodes (8.93s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-darwin-arm64 node list -p ha-006000 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-darwin-arm64 stop -p ha-006000 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Done: out/minikube-darwin-arm64 stop -p ha-006000 -v=7 --alsologtostderr: (3.565086292s)
ha_test.go:467: (dbg) Run:  out/minikube-darwin-arm64 start -p ha-006000 --wait=true -v=7 --alsologtostderr
ha_test.go:467: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p ha-006000 --wait=true -v=7 --alsologtostderr: exit status 80 (5.229158916s)

-- stdout --
	* [ha-006000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19423
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19423-17178/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19423-17178/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "ha-006000" primary control-plane node in "ha-006000" cluster
	* Restarting existing qemu2 VM for "ha-006000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "ha-006000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0819 11:37:27.513482   18572 out.go:345] Setting OutFile to fd 1 ...
	I0819 11:37:27.513654   18572 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 11:37:27.513662   18572 out.go:358] Setting ErrFile to fd 2...
	I0819 11:37:27.513665   18572 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 11:37:27.513833   18572 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19423-17178/.minikube/bin
	I0819 11:37:27.515184   18572 out.go:352] Setting JSON to false
	I0819 11:37:27.535044   18572 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":7614,"bootTime":1724085033,"procs":506,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0819 11:37:27.535113   18572 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0819 11:37:27.540230   18572 out.go:177] * [ha-006000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0819 11:37:27.547140   18572 out.go:177]   - MINIKUBE_LOCATION=19423
	I0819 11:37:27.547177   18572 notify.go:220] Checking for updates...
	I0819 11:37:27.554160   18572 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19423-17178/kubeconfig
	I0819 11:37:27.557192   18572 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0819 11:37:27.560170   18572 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0819 11:37:27.563157   18572 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19423-17178/.minikube
	I0819 11:37:27.566064   18572 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0819 11:37:27.569456   18572 config.go:182] Loaded profile config "ha-006000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0819 11:37:27.569521   18572 driver.go:394] Setting default libvirt URI to qemu:///system
	I0819 11:37:27.574148   18572 out.go:177] * Using the qemu2 driver based on existing profile
	I0819 11:37:27.581131   18572 start.go:297] selected driver: qemu2
	I0819 11:37:27.581140   18572 start.go:901] validating driver "qemu2" against &{Name:ha-006000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesV
ersion:v1.31.0 ClusterName:ha-006000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Mou
ntType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 11:37:27.581201   18572 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0819 11:37:27.583703   18572 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0819 11:37:27.583734   18572 cni.go:84] Creating CNI manager for ""
	I0819 11:37:27.583746   18572 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0819 11:37:27.583809   18572 start.go:340] cluster config:
	{Name:ha-006000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-006000 Namespace:default APIServerHAVIP: APIServe
rName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetri
cs:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 11:37:27.587600   18572 iso.go:125] acquiring lock: {Name:mk19d2f9dcacbbe3e95275f970a96b2d84f09461 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 11:37:27.595076   18572 out.go:177] * Starting "ha-006000" primary control-plane node in "ha-006000" cluster
	I0819 11:37:27.599122   18572 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0819 11:37:27.599140   18572 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19423-17178/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0819 11:37:27.599156   18572 cache.go:56] Caching tarball of preloaded images
	I0819 11:37:27.599222   18572 preload.go:172] Found /Users/jenkins/minikube-integration/19423-17178/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0819 11:37:27.599228   18572 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0819 11:37:27.599313   18572 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19423-17178/.minikube/profiles/ha-006000/config.json ...
	I0819 11:37:27.599788   18572 start.go:360] acquireMachinesLock for ha-006000: {Name:mk51682daf9d132f21e1aba1b32ed96e7f05425f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 11:37:27.599828   18572 start.go:364] duration metric: took 32.5µs to acquireMachinesLock for "ha-006000"
	I0819 11:37:27.599838   18572 start.go:96] Skipping create...Using existing machine configuration
	I0819 11:37:27.599844   18572 fix.go:54] fixHost starting: 
	I0819 11:37:27.599976   18572 fix.go:112] recreateIfNeeded on ha-006000: state=Stopped err=<nil>
	W0819 11:37:27.599985   18572 fix.go:138] unexpected machine state, will restart: <nil>
	I0819 11:37:27.608127   18572 out.go:177] * Restarting existing qemu2 VM for "ha-006000" ...
	I0819 11:37:27.612142   18572 qemu.go:418] Using hvf for hardware acceleration
	I0819 11:37:27.612183   18572 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/ha-006000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19423-17178/.minikube/machines/ha-006000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/ha-006000/qemu.pid -device virtio-net-pci,netdev=net0,mac=3a:22:df:dd:14:1d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/ha-006000/disk.qcow2
	I0819 11:37:27.614514   18572 main.go:141] libmachine: STDOUT: 
	I0819 11:37:27.614537   18572 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0819 11:37:27.614566   18572 fix.go:56] duration metric: took 14.7225ms for fixHost
	I0819 11:37:27.614571   18572 start.go:83] releasing machines lock for "ha-006000", held for 14.738542ms
	W0819 11:37:27.614578   18572 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0819 11:37:27.614610   18572 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0819 11:37:27.614615   18572 start.go:729] Will try again in 5 seconds ...
	I0819 11:37:32.616299   18572 start.go:360] acquireMachinesLock for ha-006000: {Name:mk51682daf9d132f21e1aba1b32ed96e7f05425f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 11:37:32.616697   18572 start.go:364] duration metric: took 298.417µs to acquireMachinesLock for "ha-006000"
	I0819 11:37:32.616834   18572 start.go:96] Skipping create...Using existing machine configuration
	I0819 11:37:32.616855   18572 fix.go:54] fixHost starting: 
	I0819 11:37:32.617482   18572 fix.go:112] recreateIfNeeded on ha-006000: state=Stopped err=<nil>
	W0819 11:37:32.617507   18572 fix.go:138] unexpected machine state, will restart: <nil>
	I0819 11:37:32.621975   18572 out.go:177] * Restarting existing qemu2 VM for "ha-006000" ...
	I0819 11:37:32.629926   18572 qemu.go:418] Using hvf for hardware acceleration
	I0819 11:37:32.630188   18572 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/ha-006000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19423-17178/.minikube/machines/ha-006000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/ha-006000/qemu.pid -device virtio-net-pci,netdev=net0,mac=3a:22:df:dd:14:1d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/ha-006000/disk.qcow2
	I0819 11:37:32.639192   18572 main.go:141] libmachine: STDOUT: 
	I0819 11:37:32.639283   18572 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0819 11:37:32.639373   18572 fix.go:56] duration metric: took 22.517958ms for fixHost
	I0819 11:37:32.639393   18572 start.go:83] releasing machines lock for "ha-006000", held for 22.672875ms
	W0819 11:37:32.639601   18572 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p ha-006000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p ha-006000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0819 11:37:32.646967   18572 out.go:201] 
	W0819 11:37:32.650972   18572 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0819 11:37:32.651016   18572 out.go:270] * 
	* 
	W0819 11:37:32.653282   18572 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0819 11:37:32.661896   18572 out.go:201] 

** /stderr **
ha_test.go:469: failed to run minikube start. args "out/minikube-darwin-arm64 node list -p ha-006000 -v=7 --alsologtostderr" : exit status 80
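Both restart attempts die at the same point: the qemu2 driver shells out through /opt/socket_vmnet/bin/socket_vmnet_client, and nothing is listening on /var/run/socket_vmnet. That points at the socket_vmnet daemon on the Jenkins host rather than at minikube itself, and would also explain the earlier qemu2 start failures in this run. A hedged triage sketch (only the socket path is taken from this log; the Homebrew service name is an assumption):

	$ ls -l /var/run/socket_vmnet               # does the socket the driver dials exist?
	$ pgrep -fl socket_vmnet                    # is any socket_vmnet process alive?
	$ sudo brew services restart socket_vmnet   # if it was installed via Homebrew (assumption)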
ha_test.go:472: (dbg) Run:  out/minikube-darwin-arm64 node list -p ha-006000
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-006000 -n ha-006000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-006000 -n ha-006000: exit status 7 (32.716458ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-006000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/RestartClusterKeepsNodes (8.93s)
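Annotation: every failed start in this report dies at the same step. Per the command line logged above, socket_vmnet_client first connects to the unix socket /var/run/socket_vmnet and only then execs qemu-system-aarch64 with the connected descriptor as fd 3 (hence `-netdev socket,id=net0,fd=3`); "Connection refused" on a unix socket means nothing is listening, i.e. the socket_vmnet daemon is not running on the CI host, so qemu is never launched. A minimal, illustrative Go probe of that same connect step (the socket path is taken from the failing command line; this is a sketch, not minikube code):

	package main

	import (
		"fmt"
		"net"
		"os"
		"time"
	)

	func main() {
		const sock = "/var/run/socket_vmnet" // path from the failing command line above
		conn, err := net.DialTimeout("unix", sock, 2*time.Second)
		if err != nil {
			// With the daemon down this reports the same "connection refused"
			// seen throughout the logs.
			fmt.Fprintf(os.Stderr, "socket_vmnet not reachable at %s: %v\n", sock, err)
			os.Exit(1)
		}
		conn.Close()
		fmt.Println("socket_vmnet is listening at", sock)
	}

If the probe fails, the first diagnostic step is checking that the socket_vmnet daemon (normally a root-owned service on these hosts) is actually up, rather than re-running the tests.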

                                                
                                    
TestMultiControlPlane/serial/DeleteSecondaryNode (0.1s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-darwin-arm64 -p ha-006000 node delete m03 -v=7 --alsologtostderr
ha_test.go:487: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-006000 node delete m03 -v=7 --alsologtostderr: exit status 83 (42.526292ms)

                                                
                                                
-- stdout --
	* The control-plane node ha-006000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p ha-006000"

                                                
                                                
-- /stdout --
** stderr ** 
	I0819 11:37:32.808939   18584 out.go:345] Setting OutFile to fd 1 ...
	I0819 11:37:32.809370   18584 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 11:37:32.809374   18584 out.go:358] Setting ErrFile to fd 2...
	I0819 11:37:32.809377   18584 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 11:37:32.809529   18584 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19423-17178/.minikube/bin
	I0819 11:37:32.809750   18584 mustload.go:65] Loading cluster: ha-006000
	I0819 11:37:32.809935   18584 config.go:182] Loaded profile config "ha-006000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0819 11:37:32.813926   18584 out.go:177] * The control-plane node ha-006000 host is not running: state=Stopped
	I0819 11:37:32.817988   18584 out.go:177]   To start a cluster, run: "minikube start -p ha-006000"

                                                
                                                
** /stderr **
ha_test.go:489: node delete returned an error. args "out/minikube-darwin-arm64 -p ha-006000 node delete m03 -v=7 --alsologtostderr": exit status 83
ha_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 -p ha-006000 status -v=7 --alsologtostderr
ha_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-006000 status -v=7 --alsologtostderr: exit status 7 (30.277917ms)

                                                
                                                
-- stdout --
	ha-006000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0819 11:37:32.851341   18586 out.go:345] Setting OutFile to fd 1 ...
	I0819 11:37:32.851470   18586 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 11:37:32.851473   18586 out.go:358] Setting ErrFile to fd 2...
	I0819 11:37:32.851475   18586 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 11:37:32.851606   18586 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19423-17178/.minikube/bin
	I0819 11:37:32.851717   18586 out.go:352] Setting JSON to false
	I0819 11:37:32.851730   18586 mustload.go:65] Loading cluster: ha-006000
	I0819 11:37:32.851775   18586 notify.go:220] Checking for updates...
	I0819 11:37:32.851931   18586 config.go:182] Loaded profile config "ha-006000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0819 11:37:32.851938   18586 status.go:255] checking status of ha-006000 ...
	I0819 11:37:32.852175   18586 status.go:330] ha-006000 host status = "Stopped" (err=<nil>)
	I0819 11:37:32.852179   18586 status.go:343] host is not running, skipping remaining checks
	I0819 11:37:32.852181   18586 status.go:257] ha-006000 status: &{Name:ha-006000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:495: failed to run minikube status. args "out/minikube-darwin-arm64 -p ha-006000 status -v=7 --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-006000 -n ha-006000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-006000 -n ha-006000: exit status 7 (30.724125ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-006000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DeleteSecondaryNode (0.10s)
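Annotation: the two exit codes in this section follow minikube's conventions. `node delete` exits 83 after printing the "host is not running" advice, and `status` encodes component health bitwise — per the status command's help text, 1 (host NOK) + 2 (cluster NOK) + 4 (Kubernetes NOK) — so the observed exit status 7 means all three are not OK, which is why helpers_test.go flags it as "(may be ok)" for a deliberately stopped cluster. A small decoding sketch (the constant names are ours, not minikube's):

	package main

	import "fmt"

	func main() {
		// Bit layout as described in `minikube status` help:
		// 1 = minikube host NOK, 2 = cluster NOK, 4 = Kubernetes NOK.
		const (
			hostNOK    = 1 << 0
			clusterNOK = 1 << 1
			k8sNOK     = 1 << 2
		)
		code := 7 // exit status seen in the post-mortem status checks above
		fmt.Printf("host NOK=%v cluster NOK=%v kubernetes NOK=%v\n",
			code&hostNOK != 0, code&clusterNOK != 0, code&k8sNOK != 0)
	}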


                                                
                                    
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.08s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:413: expected profile "ha-006000" in json of 'profile list' to have "Degraded" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-006000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-006000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\"
:1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.0\",\"ClusterName\":\"ha-006000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.0
\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSo
ck\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-006000 -n ha-006000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-006000 -n ha-006000: exit status 7 (29.927584ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-006000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.08s)
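Annotation: ha_test.go:413 derives the expected "Degraded" verdict from the `profile list --output json` dump quoted above; because the profile here holds a single, stopped control-plane node instead of a partially healthy HA set, minikube reports "Stopped". A sketch of pulling out the fields the assertion actually compares — the struct mirrors only keys visible in the dump, and the binary path is the one used throughout this report:

	package main

	import (
		"encoding/json"
		"fmt"
		"log"
		"os/exec"
	)

	// profileList models just the fields inspected here.
	type profileList struct {
		Valid []struct {
			Name   string
			Status string
			Config struct {
				Nodes []struct {
					ControlPlane bool
				}
			}
		} `json:"valid"`
	}

	func main() {
		out, err := exec.Command("out/minikube-darwin-arm64",
			"profile", "list", "--output", "json").Output()
		if err != nil {
			log.Fatal(err)
		}
		var pl profileList
		if err := json.Unmarshal(out, &pl); err != nil {
			log.Fatal(err)
		}
		for _, p := range pl.Valid {
			// For this report: "ha-006000: status=Stopped nodes=1".
			fmt.Printf("%s: status=%s nodes=%d\n", p.Name, p.Status, len(p.Config.Nodes))
		}
	}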

                                                
                                    
TestMultiControlPlane/serial/StopCluster (3.52s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-darwin-arm64 -p ha-006000 stop -v=7 --alsologtostderr
ha_test.go:531: (dbg) Done: out/minikube-darwin-arm64 -p ha-006000 stop -v=7 --alsologtostderr: (3.420662083s)
ha_test.go:537: (dbg) Run:  out/minikube-darwin-arm64 -p ha-006000 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-006000 status -v=7 --alsologtostderr: exit status 7 (65.752375ms)

                                                
                                                
-- stdout --
	ha-006000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0819 11:37:36.445655   18615 out.go:345] Setting OutFile to fd 1 ...
	I0819 11:37:36.445865   18615 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 11:37:36.445870   18615 out.go:358] Setting ErrFile to fd 2...
	I0819 11:37:36.445873   18615 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 11:37:36.446043   18615 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19423-17178/.minikube/bin
	I0819 11:37:36.446205   18615 out.go:352] Setting JSON to false
	I0819 11:37:36.446221   18615 mustload.go:65] Loading cluster: ha-006000
	I0819 11:37:36.446255   18615 notify.go:220] Checking for updates...
	I0819 11:37:36.446468   18615 config.go:182] Loaded profile config "ha-006000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0819 11:37:36.446481   18615 status.go:255] checking status of ha-006000 ...
	I0819 11:37:36.446779   18615 status.go:330] ha-006000 host status = "Stopped" (err=<nil>)
	I0819 11:37:36.446784   18615 status.go:343] host is not running, skipping remaining checks
	I0819 11:37:36.446787   18615 status.go:257] ha-006000 status: &{Name:ha-006000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:543: status says not two control-plane nodes are present: args "out/minikube-darwin-arm64 -p ha-006000 status -v=7 --alsologtostderr": ha-006000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
ha_test.go:549: status says not three kubelets are stopped: args "out/minikube-darwin-arm64 -p ha-006000 status -v=7 --alsologtostderr": ha-006000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
ha_test.go:552: status says not two apiservers are stopped: args "out/minikube-darwin-arm64 -p ha-006000 status -v=7 --alsologtostderr": ha-006000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-006000 -n ha-006000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-006000 -n ha-006000: exit status 7 (34.024417ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-006000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/StopCluster (3.52s)
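Annotation: the three assertions at ha_test.go:543, :549, and :552 expect the stopped HA cluster's status output to list two control planes, three stopped kubelets, and two stopped apiservers; since the secondary nodes never came up, status shows only the primary. The checks amount to counting marker lines in the stdout block, roughly as below (illustrative, not the test's literal code):

	package main

	import (
		"fmt"
		"strings"
	)

	func main() {
		// Single-node status as captured above; a real HA run repeats this
		// block once per node, which is what the counts are probing.
		status := "ha-006000\n" +
			"type: Control Plane\n" +
			"host: Stopped\n" +
			"kubelet: Stopped\n" +
			"apiserver: Stopped\n" +
			"kubeconfig: Stopped\n"

		fmt.Println("control planes:", strings.Count(status, "type: Control Plane"))   // test wants 2
		fmt.Println("stopped kubelets:", strings.Count(status, "kubelet: Stopped"))    // test wants 3
		fmt.Println("stopped apiservers:", strings.Count(status, "apiserver: Stopped")) // test wants 2
	}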

                                                
                                    
TestMultiControlPlane/serial/RestartCluster (5.25s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-darwin-arm64 start -p ha-006000 --wait=true -v=7 --alsologtostderr --driver=qemu2 
ha_test.go:560: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p ha-006000 --wait=true -v=7 --alsologtostderr --driver=qemu2 : exit status 80 (5.183029958s)

                                                
                                                
-- stdout --
	* [ha-006000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19423
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19423-17178/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19423-17178/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "ha-006000" primary control-plane node in "ha-006000" cluster
	* Restarting existing qemu2 VM for "ha-006000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "ha-006000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0819 11:37:36.510969   18619 out.go:345] Setting OutFile to fd 1 ...
	I0819 11:37:36.511102   18619 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 11:37:36.511105   18619 out.go:358] Setting ErrFile to fd 2...
	I0819 11:37:36.511107   18619 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 11:37:36.511234   18619 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19423-17178/.minikube/bin
	I0819 11:37:36.512411   18619 out.go:352] Setting JSON to false
	I0819 11:37:36.528849   18619 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":7623,"bootTime":1724085033,"procs":506,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0819 11:37:36.528914   18619 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0819 11:37:36.532797   18619 out.go:177] * [ha-006000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0819 11:37:36.540806   18619 out.go:177]   - MINIKUBE_LOCATION=19423
	I0819 11:37:36.540855   18619 notify.go:220] Checking for updates...
	I0819 11:37:36.547717   18619 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19423-17178/kubeconfig
	I0819 11:37:36.550733   18619 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0819 11:37:36.553755   18619 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0819 11:37:36.556704   18619 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19423-17178/.minikube
	I0819 11:37:36.559747   18619 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0819 11:37:36.562957   18619 config.go:182] Loaded profile config "ha-006000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0819 11:37:36.563246   18619 driver.go:394] Setting default libvirt URI to qemu:///system
	I0819 11:37:36.567726   18619 out.go:177] * Using the qemu2 driver based on existing profile
	I0819 11:37:36.573667   18619 start.go:297] selected driver: qemu2
	I0819 11:37:36.573673   18619 start.go:901] validating driver "qemu2" against &{Name:ha-006000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesV
ersion:v1.31.0 ClusterName:ha-006000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Mou
ntType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 11:37:36.573721   18619 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0819 11:37:36.575922   18619 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0819 11:37:36.575961   18619 cni.go:84] Creating CNI manager for ""
	I0819 11:37:36.575970   18619 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0819 11:37:36.576019   18619 start.go:340] cluster config:
	{Name:ha-006000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-006000 Namespace:default APIServerHAVIP: APIServe
rName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetri
cs:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 11:37:36.579516   18619 iso.go:125] acquiring lock: {Name:mk19d2f9dcacbbe3e95275f970a96b2d84f09461 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 11:37:36.586730   18619 out.go:177] * Starting "ha-006000" primary control-plane node in "ha-006000" cluster
	I0819 11:37:36.590628   18619 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0819 11:37:36.590646   18619 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19423-17178/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0819 11:37:36.590652   18619 cache.go:56] Caching tarball of preloaded images
	I0819 11:37:36.590713   18619 preload.go:172] Found /Users/jenkins/minikube-integration/19423-17178/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0819 11:37:36.590719   18619 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0819 11:37:36.590778   18619 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19423-17178/.minikube/profiles/ha-006000/config.json ...
	I0819 11:37:36.591221   18619 start.go:360] acquireMachinesLock for ha-006000: {Name:mk51682daf9d132f21e1aba1b32ed96e7f05425f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 11:37:36.591252   18619 start.go:364] duration metric: took 24.959µs to acquireMachinesLock for "ha-006000"
	I0819 11:37:36.591262   18619 start.go:96] Skipping create...Using existing machine configuration
	I0819 11:37:36.591266   18619 fix.go:54] fixHost starting: 
	I0819 11:37:36.591386   18619 fix.go:112] recreateIfNeeded on ha-006000: state=Stopped err=<nil>
	W0819 11:37:36.591396   18619 fix.go:138] unexpected machine state, will restart: <nil>
	I0819 11:37:36.595693   18619 out.go:177] * Restarting existing qemu2 VM for "ha-006000" ...
	I0819 11:37:36.603690   18619 qemu.go:418] Using hvf for hardware acceleration
	I0819 11:37:36.603728   18619 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/ha-006000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19423-17178/.minikube/machines/ha-006000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/ha-006000/qemu.pid -device virtio-net-pci,netdev=net0,mac=3a:22:df:dd:14:1d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/ha-006000/disk.qcow2
	I0819 11:37:36.605674   18619 main.go:141] libmachine: STDOUT: 
	I0819 11:37:36.605695   18619 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0819 11:37:36.605717   18619 fix.go:56] duration metric: took 14.450708ms for fixHost
	I0819 11:37:36.605722   18619 start.go:83] releasing machines lock for "ha-006000", held for 14.465625ms
	W0819 11:37:36.605729   18619 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0819 11:37:36.605766   18619 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0819 11:37:36.605771   18619 start.go:729] Will try again in 5 seconds ...
	I0819 11:37:41.607924   18619 start.go:360] acquireMachinesLock for ha-006000: {Name:mk51682daf9d132f21e1aba1b32ed96e7f05425f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 11:37:41.608315   18619 start.go:364] duration metric: took 292.875µs to acquireMachinesLock for "ha-006000"
	I0819 11:37:41.608434   18619 start.go:96] Skipping create...Using existing machine configuration
	I0819 11:37:41.608450   18619 fix.go:54] fixHost starting: 
	I0819 11:37:41.609139   18619 fix.go:112] recreateIfNeeded on ha-006000: state=Stopped err=<nil>
	W0819 11:37:41.609163   18619 fix.go:138] unexpected machine state, will restart: <nil>
	I0819 11:37:41.613782   18619 out.go:177] * Restarting existing qemu2 VM for "ha-006000" ...
	I0819 11:37:41.622636   18619 qemu.go:418] Using hvf for hardware acceleration
	I0819 11:37:41.622845   18619 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/ha-006000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19423-17178/.minikube/machines/ha-006000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/ha-006000/qemu.pid -device virtio-net-pci,netdev=net0,mac=3a:22:df:dd:14:1d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/ha-006000/disk.qcow2
	I0819 11:37:41.630120   18619 main.go:141] libmachine: STDOUT: 
	I0819 11:37:41.630401   18619 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0819 11:37:41.630474   18619 fix.go:56] duration metric: took 22.021375ms for fixHost
	I0819 11:37:41.630492   18619 start.go:83] releasing machines lock for "ha-006000", held for 22.156583ms
	W0819 11:37:41.630689   18619 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p ha-006000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p ha-006000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0819 11:37:41.636560   18619 out.go:201] 
	W0819 11:37:41.640628   18619 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0819 11:37:41.640646   18619 out.go:270] * 
	* 
	W0819 11:37:41.642652   18619 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0819 11:37:41.651417   18619 out.go:201] 

                                                
                                                
** /stderr **
ha_test.go:562: failed to start cluster. args "out/minikube-darwin-arm64 start -p ha-006000 --wait=true -v=7 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-006000 -n ha-006000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-006000 -n ha-006000: exit status 7 (67.263875ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-006000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/RestartCluster (5.25s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.08s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:413: expected profile "ha-006000" in json of 'profile list' to have "Degraded" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-006000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-006000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\"
:1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.0\",\"ClusterName\":\"ha-006000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.0
\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSo
ck\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-006000 -n ha-006000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-006000 -n ha-006000: exit status 7 (29.763125ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-006000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.08s)

                                                
                                    
TestMultiControlPlane/serial/AddSecondaryNode (0.07s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-darwin-arm64 node add -p ha-006000 --control-plane -v=7 --alsologtostderr
ha_test.go:605: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p ha-006000 --control-plane -v=7 --alsologtostderr: exit status 83 (42.732417ms)

                                                
                                                
-- stdout --
	* The control-plane node ha-006000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p ha-006000"

                                                
                                                
-- /stdout --
** stderr ** 
	I0819 11:37:41.844401   18634 out.go:345] Setting OutFile to fd 1 ...
	I0819 11:37:41.844557   18634 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 11:37:41.844560   18634 out.go:358] Setting ErrFile to fd 2...
	I0819 11:37:41.844562   18634 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 11:37:41.844680   18634 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19423-17178/.minikube/bin
	I0819 11:37:41.844914   18634 mustload.go:65] Loading cluster: ha-006000
	I0819 11:37:41.845132   18634 config.go:182] Loaded profile config "ha-006000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0819 11:37:41.848692   18634 out.go:177] * The control-plane node ha-006000 host is not running: state=Stopped
	I0819 11:37:41.852808   18634 out.go:177]   To start a cluster, run: "minikube start -p ha-006000"

                                                
                                                
** /stderr **
ha_test.go:607: failed to add control-plane node to current ha (multi-control plane) cluster. args "out/minikube-darwin-arm64 node add -p ha-006000 --control-plane -v=7 --alsologtostderr" : exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-006000 -n ha-006000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-006000 -n ha-006000: exit status 7 (30.919916ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-006000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/AddSecondaryNode (0.07s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.08s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:304: expected profile "ha-006000" in json of 'profile list' to include 4 nodes but have 1 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-006000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-006000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPo
rt\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.0\",\"ClusterName\":\"ha-006000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.0\",\"ContainerRu
ntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHA
gentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
ha_test.go:307: expected profile "ha-006000" in json of 'profile list' to have "HAppy" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-006000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-006000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,
\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.0\",\"ClusterName\":\"ha-006000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.0\",
\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\
":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-006000 -n ha-006000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-006000 -n ha-006000: exit status 7 (30.733459ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-006000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.08s)

                                                
                                    
TestImageBuild/serial/Setup (9.86s)

                                                
                                                
=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-darwin-arm64 start -p image-848000 --driver=qemu2 
image_test.go:69: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p image-848000 --driver=qemu2 : exit status 80 (9.794000958s)

                                                
                                                
-- stdout --
	* [image-848000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19423
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19423-17178/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19423-17178/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "image-848000" primary control-plane node in "image-848000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "image-848000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p image-848000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
image_test.go:70: failed to start minikube with args: "out/minikube-darwin-arm64 start -p image-848000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p image-848000 -n image-848000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p image-848000 -n image-848000: exit status 7 (68.351083ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "image-848000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestImageBuild/serial/Setup (9.86s)

                                                
                                    
TestJSONOutput/start/Command (9.82s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 start -p json-output-716000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p json-output-716000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 : exit status 80 (9.822987s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"24e05617-b07c-4114-874e-3fad461c14bd","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-716000] minikube v1.33.1 on Darwin 14.5 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"95787a18-53d6-4645-8e1d-2c123c6aae33","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19423"}}
	{"specversion":"1.0","id":"e0ebe00e-8b82-4a08-b442-b48081486b33","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/19423-17178/kubeconfig"}}
	{"specversion":"1.0","id":"a66c469e-3b8e-4ed5-9bcd-3a43e90f34a5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"3587ce1e-8e42-4b2e-93c5-ad29f9f8f5f6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"1f0e92ef-84b1-42ac-99e2-cffe06ae2a95","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/19423-17178/.minikube"}}
	{"specversion":"1.0","id":"18112b0b-a1f8-47b1-93e0-638d1ca9a6ee","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"ea94e00a-340f-4b49-98fd-bcaa0fd49588","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the qemu2 driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"ebd69601-f621-47f3-af28-2c319b866d9a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Automatically selected the socket_vmnet network"}}
	{"specversion":"1.0","id":"6d84be4a-6ca5-4d92-95b9-7dd327dead8c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"json-output-716000\" primary control-plane node in \"json-output-716000\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"f3357530-c744-4efb-bd81-170c900ce88a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"9","message":"Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...","name":"Creating VM","totalsteps":"19"}}
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	{"specversion":"1.0","id":"cf09c606-6b7d-4312-9826-60ef9f6cffd0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"9","message":"Deleting \"json-output-716000\" in qemu2 ...","name":"Creating VM","totalsteps":"19"}}
	{"specversion":"1.0","id":"68fd1633-7d43-4f8b-92b1-403a37ebe0cf","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"StartHost failed, but will try again: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"}}
	{"specversion":"1.0","id":"f6490497-d279-472a-a878-d5d0cbd4bcc1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"9","message":"Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...","name":"Creating VM","totalsteps":"19"}}
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	{"specversion":"1.0","id":"d3d99a6b-d424-45d3-a685-15aa02293b06","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"Failed to start qemu2 VM. Running \"minikube delete -p json-output-716000\" may fix it: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"}}
	{"specversion":"1.0","id":"d27889ef-8b6f-41c0-8547-d863a8c2c675","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1","name":"GUEST_PROVISION","url":""}}
	{"specversion":"1.0","id":"fb08f72a-05b1-4d5b-9972-475bb3e4f2a4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│
│\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}
-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-darwin-arm64 start -p json-output-716000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 ": exit status 80
json_output_test.go:213: unable to marshal output: OUTPUT: 
json_output_test.go:70: converting to cloud events: invalid character 'O' looking for beginning of value
--- FAIL: TestJSONOutput/start/Command (9.82s)
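
Note on the failure mode: the "invalid character 'O'" at json_output_test.go:70 is consistent with the raw "OUTPUT: " / "ERROR: ..." lines that socket_vmnet_client prints directly into the captured stdout, interleaved with the CloudEvents stream. A minimal Go sketch (not the suite's actual helper; the two-line sample input is abbreviated from the log above) reproduces the decoder error:

	package main

	import (
		"bufio"
		"encoding/json"
		"fmt"
		"strings"
	)

	func main() {
		// One valid CloudEvent line followed by the raw line that
		// socket_vmnet_client printed into the same stream.
		captured := "{\"specversion\":\"1.0\",\"type\":\"io.k8s.sigs.minikube.info\",\"data\":{\"message\":\"ok\"}}\nOUTPUT: "
		sc := bufio.NewScanner(strings.NewReader(captured))
		for sc.Scan() {
			var ev map[string]interface{}
			if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
				// Prints: converting to cloud events: invalid character 'O' looking for beginning of value
				fmt.Println("converting to cloud events:", err)
				return
			}
		}
	}

The unpause failure further below trips the same decoder the same way, on the leading '*' of plain-text output.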
TestJSONOutput/pause/Command (0.08s)
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 pause -p json-output-716000 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p json-output-716000 --output=json --user=testUser: exit status 83 (80.169958ms)
-- stdout --
	{"specversion":"1.0","id":"0f9bc9e6-27e0-4b49-90b8-58dd0fcd1494","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"The control-plane node json-output-716000 host is not running: state=Stopped"}}
	{"specversion":"1.0","id":"d0defa85-39f6-4202-a1f2-67a63c147196","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"To start a cluster, run: \"minikube start -p json-output-716000\""}}
-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-darwin-arm64 pause -p json-output-716000 --output=json --user=testUser": exit status 83
--- FAIL: TestJSONOutput/pause/Command (0.08s)
TestJSONOutput/unpause/Command (0.05s)
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 unpause -p json-output-716000 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-darwin-arm64 unpause -p json-output-716000 --output=json --user=testUser: exit status 83 (47.233042ms)
-- stdout --
	* The control-plane node json-output-716000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p json-output-716000"
-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-darwin-arm64 unpause -p json-output-716000 --output=json --user=testUser": exit status 83
json_output_test.go:213: unable to marshal output: * The control-plane node json-output-716000 host is not running: state=Stopped
json_output_test.go:70: converting to cloud events: invalid character '*' looking for beginning of value
--- FAIL: TestJSONOutput/unpause/Command (0.05s)
TestMinikubeProfile (10.19s)
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-darwin-arm64 start -p first-086000 --driver=qemu2 
minikube_profile_test.go:44: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p first-086000 --driver=qemu2 : exit status 80 (9.892206125s)
-- stdout --
	* [first-086000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19423
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19423-17178/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19423-17178/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "first-086000" primary control-plane node in "first-086000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "first-086000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p first-086000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
minikube_profile_test.go:46: test pre-condition failed. args "out/minikube-darwin-arm64 start -p first-086000 --driver=qemu2 ": exit status 80
panic.go:626: *** TestMinikubeProfile FAILED at 2024-08-19 11:38:15.37787 -0700 PDT m=+419.561796460
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p second-087000 -n second-087000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p second-087000 -n second-087000: exit status 85 (79.667542ms)
-- stdout --
	* Profile "second-087000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p second-087000"
-- /stdout --
helpers_test.go:239: status error: exit status 85 (may be ok)
helpers_test.go:241: "second-087000" host is not running, skipping log retrieval (state="* Profile \"second-087000\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p second-087000\"")
helpers_test.go:175: Cleaning up "second-087000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p second-087000
panic.go:626: *** TestMinikubeProfile FAILED at 2024-08-19 11:38:15.564765 -0700 PDT m=+419.748692585
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p first-086000 -n first-086000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p first-086000 -n first-086000: exit status 7 (30.5595ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "first-086000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "first-086000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p first-086000
--- FAIL: TestMinikubeProfile (10.19s)
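
Every start in this run dies at the same point: the dial to /var/run/socket_vmnet is refused before QEMU ever boots, so the per-test symptoms (exit status 80, "Stopped" hosts, missing profiles) are all downstream of one host-side condition. A small Go probe, assuming only the socket path shown in the logs, distinguishes a missing socket from one that exists but has no daemon accepting on it:

	package main

	import (
		"fmt"
		"net"
		"os"
	)

	func main() {
		const sock = "/var/run/socket_vmnet" // path taken from the failures above
		if _, err := os.Stat(sock); err != nil {
			fmt.Println("socket file problem:", err) // e.g. socket was never created
			return
		}
		// "connection refused" here matches the log exactly: the file
		// exists but no socket_vmnet daemon is listening behind it.
		conn, err := net.Dial("unix", sock)
		if err != nil {
			fmt.Println("dial failed:", err)
			return
		}
		conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}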
TestMountStart/serial/StartWithMountFirst (10.14s)
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-darwin-arm64 start -p mount-start-1-717000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 
mount_start_test.go:98: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p mount-start-1-717000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 : exit status 80 (10.062944833s)
-- stdout --
	* [mount-start-1-717000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19423
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19423-17178/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19423-17178/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting minikube without Kubernetes in cluster mount-start-1-717000
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "mount-start-1-717000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p mount-start-1-717000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
mount_start_test.go:100: failed to start minikube with args: "out/minikube-darwin-arm64 start -p mount-start-1-717000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p mount-start-1-717000 -n mount-start-1-717000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p mount-start-1-717000 -n mount-start-1-717000: exit status 7 (71.33725ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "mount-start-1-717000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMountStart/serial/StartWithMountFirst (10.14s)
TestMultiNode/serial/FreshStart2Nodes (9.95s)
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-587000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 
multinode_test.go:96: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-587000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 : exit status 80 (9.882041166s)
-- stdout --
	* [multinode-587000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19423
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19423-17178/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19423-17178/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "multinode-587000" primary control-plane node in "multinode-587000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-587000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	
-- /stdout --
** stderr ** 
	I0819 11:38:26.022961   18776 out.go:345] Setting OutFile to fd 1 ...
	I0819 11:38:26.023116   18776 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 11:38:26.023119   18776 out.go:358] Setting ErrFile to fd 2...
	I0819 11:38:26.023124   18776 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 11:38:26.023262   18776 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19423-17178/.minikube/bin
	I0819 11:38:26.024410   18776 out.go:352] Setting JSON to false
	I0819 11:38:26.040894   18776 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":7673,"bootTime":1724085033,"procs":505,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0819 11:38:26.040978   18776 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0819 11:38:26.046974   18776 out.go:177] * [multinode-587000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0819 11:38:26.054863   18776 out.go:177]   - MINIKUBE_LOCATION=19423
	I0819 11:38:26.054889   18776 notify.go:220] Checking for updates...
	I0819 11:38:26.060877   18776 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19423-17178/kubeconfig
	I0819 11:38:26.063834   18776 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0819 11:38:26.066910   18776 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0819 11:38:26.069811   18776 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19423-17178/.minikube
	I0819 11:38:26.072841   18776 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0819 11:38:26.076085   18776 driver.go:394] Setting default libvirt URI to qemu:///system
	I0819 11:38:26.080778   18776 out.go:177] * Using the qemu2 driver based on user configuration
	I0819 11:38:26.087800   18776 start.go:297] selected driver: qemu2
	I0819 11:38:26.087807   18776 start.go:901] validating driver "qemu2" against <nil>
	I0819 11:38:26.087813   18776 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0819 11:38:26.090065   18776 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0819 11:38:26.092884   18776 out.go:177] * Automatically selected the socket_vmnet network
	I0819 11:38:26.095896   18776 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0819 11:38:26.095927   18776 cni.go:84] Creating CNI manager for ""
	I0819 11:38:26.095932   18776 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0819 11:38:26.095936   18776 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0819 11:38:26.095958   18776 start.go:340] cluster config:
	{Name:multinode-587000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:multinode-587000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 11:38:26.099678   18776 iso.go:125] acquiring lock: {Name:mk19d2f9dcacbbe3e95275f970a96b2d84f09461 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 11:38:26.106846   18776 out.go:177] * Starting "multinode-587000" primary control-plane node in "multinode-587000" cluster
	I0819 11:38:26.110866   18776 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0819 11:38:26.110884   18776 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19423-17178/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0819 11:38:26.110900   18776 cache.go:56] Caching tarball of preloaded images
	I0819 11:38:26.110970   18776 preload.go:172] Found /Users/jenkins/minikube-integration/19423-17178/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0819 11:38:26.110976   18776 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0819 11:38:26.111180   18776 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19423-17178/.minikube/profiles/multinode-587000/config.json ...
	I0819 11:38:26.111191   18776 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19423-17178/.minikube/profiles/multinode-587000/config.json: {Name:mk010dd2d067aaa9d0550ab80a522b0d686c5cf3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 11:38:26.111470   18776 start.go:360] acquireMachinesLock for multinode-587000: {Name:mk51682daf9d132f21e1aba1b32ed96e7f05425f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 11:38:26.111504   18776 start.go:364] duration metric: took 28.209µs to acquireMachinesLock for "multinode-587000"
	I0819 11:38:26.111515   18776 start.go:93] Provisioning new machine with config: &{Name:multinode-587000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:multinode-587000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0819 11:38:26.111547   18776 start.go:125] createHost starting for "" (driver="qemu2")
	I0819 11:38:26.118775   18776 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0819 11:38:26.136320   18776 start.go:159] libmachine.API.Create for "multinode-587000" (driver="qemu2")
	I0819 11:38:26.136346   18776 client.go:168] LocalClient.Create starting
	I0819 11:38:26.136399   18776 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19423-17178/.minikube/certs/ca.pem
	I0819 11:38:26.136429   18776 main.go:141] libmachine: Decoding PEM data...
	I0819 11:38:26.136439   18776 main.go:141] libmachine: Parsing certificate...
	I0819 11:38:26.136508   18776 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19423-17178/.minikube/certs/cert.pem
	I0819 11:38:26.136533   18776 main.go:141] libmachine: Decoding PEM data...
	I0819 11:38:26.136543   18776 main.go:141] libmachine: Parsing certificate...
	I0819 11:38:26.136897   18776 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19423-17178/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19423-17178/.minikube/cache/iso/arm64/minikube-v1.33.1-1723740674-19452-arm64.iso...
	I0819 11:38:26.289647   18776 main.go:141] libmachine: Creating SSH key...
	I0819 11:38:26.410358   18776 main.go:141] libmachine: Creating Disk image...
	I0819 11:38:26.410367   18776 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0819 11:38:26.410549   18776 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/multinode-587000/disk.qcow2.raw /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/multinode-587000/disk.qcow2
	I0819 11:38:26.419824   18776 main.go:141] libmachine: STDOUT: 
	I0819 11:38:26.419843   18776 main.go:141] libmachine: STDERR: 
	I0819 11:38:26.419895   18776 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/multinode-587000/disk.qcow2 +20000M
	I0819 11:38:26.427959   18776 main.go:141] libmachine: STDOUT: Image resized.
	
	I0819 11:38:26.427974   18776 main.go:141] libmachine: STDERR: 
	I0819 11:38:26.427988   18776 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/multinode-587000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/multinode-587000/disk.qcow2
	I0819 11:38:26.427993   18776 main.go:141] libmachine: Starting QEMU VM...
	I0819 11:38:26.428005   18776 qemu.go:418] Using hvf for hardware acceleration
	I0819 11:38:26.428046   18776 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/multinode-587000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19423-17178/.minikube/machines/multinode-587000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/multinode-587000/qemu.pid -device virtio-net-pci,netdev=net0,mac=06:83:3f:b9:4c:af -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/multinode-587000/disk.qcow2
	I0819 11:38:26.429627   18776 main.go:141] libmachine: STDOUT: 
	I0819 11:38:26.429644   18776 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0819 11:38:26.429663   18776 client.go:171] duration metric: took 293.314166ms to LocalClient.Create
	I0819 11:38:28.431871   18776 start.go:128] duration metric: took 2.3203045s to createHost
	I0819 11:38:28.431971   18776 start.go:83] releasing machines lock for "multinode-587000", held for 2.320468375s
	W0819 11:38:28.432077   18776 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0819 11:38:28.448358   18776 out.go:177] * Deleting "multinode-587000" in qemu2 ...
	W0819 11:38:28.475665   18776 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0819 11:38:28.475744   18776 start.go:729] Will try again in 5 seconds ...
	I0819 11:38:33.477906   18776 start.go:360] acquireMachinesLock for multinode-587000: {Name:mk51682daf9d132f21e1aba1b32ed96e7f05425f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 11:38:33.478484   18776 start.go:364] duration metric: took 482.75µs to acquireMachinesLock for "multinode-587000"
	I0819 11:38:33.478674   18776 start.go:93] Provisioning new machine with config: &{Name:multinode-587000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:multinode-587000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0819 11:38:33.478974   18776 start.go:125] createHost starting for "" (driver="qemu2")
	I0819 11:38:33.494411   18776 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0819 11:38:33.545073   18776 start.go:159] libmachine.API.Create for "multinode-587000" (driver="qemu2")
	I0819 11:38:33.545120   18776 client.go:168] LocalClient.Create starting
	I0819 11:38:33.545245   18776 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19423-17178/.minikube/certs/ca.pem
	I0819 11:38:33.545304   18776 main.go:141] libmachine: Decoding PEM data...
	I0819 11:38:33.545321   18776 main.go:141] libmachine: Parsing certificate...
	I0819 11:38:33.545383   18776 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19423-17178/.minikube/certs/cert.pem
	I0819 11:38:33.545426   18776 main.go:141] libmachine: Decoding PEM data...
	I0819 11:38:33.545439   18776 main.go:141] libmachine: Parsing certificate...
	I0819 11:38:33.545984   18776 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19423-17178/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19423-17178/.minikube/cache/iso/arm64/minikube-v1.33.1-1723740674-19452-arm64.iso...
	I0819 11:38:33.724337   18776 main.go:141] libmachine: Creating SSH key...
	I0819 11:38:33.812422   18776 main.go:141] libmachine: Creating Disk image...
	I0819 11:38:33.812431   18776 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0819 11:38:33.812602   18776 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/multinode-587000/disk.qcow2.raw /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/multinode-587000/disk.qcow2
	I0819 11:38:33.821784   18776 main.go:141] libmachine: STDOUT: 
	I0819 11:38:33.821818   18776 main.go:141] libmachine: STDERR: 
	I0819 11:38:33.821874   18776 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/multinode-587000/disk.qcow2 +20000M
	I0819 11:38:33.829911   18776 main.go:141] libmachine: STDOUT: Image resized.
	
	I0819 11:38:33.829937   18776 main.go:141] libmachine: STDERR: 
	I0819 11:38:33.829948   18776 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/multinode-587000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/multinode-587000/disk.qcow2
	I0819 11:38:33.829953   18776 main.go:141] libmachine: Starting QEMU VM...
	I0819 11:38:33.829962   18776 qemu.go:418] Using hvf for hardware acceleration
	I0819 11:38:33.829992   18776 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/multinode-587000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19423-17178/.minikube/machines/multinode-587000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/multinode-587000/qemu.pid -device virtio-net-pci,netdev=net0,mac=d6:aa:7d:fe:4e:9d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/multinode-587000/disk.qcow2
	I0819 11:38:33.831574   18776 main.go:141] libmachine: STDOUT: 
	I0819 11:38:33.831599   18776 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0819 11:38:33.831613   18776 client.go:171] duration metric: took 286.488ms to LocalClient.Create
	I0819 11:38:35.833769   18776 start.go:128] duration metric: took 2.354777334s to createHost
	I0819 11:38:35.833870   18776 start.go:83] releasing machines lock for "multinode-587000", held for 2.35536125s
	W0819 11:38:35.834204   18776 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p multinode-587000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-587000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0819 11:38:35.847922   18776 out.go:201] 
	W0819 11:38:35.852037   18776 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0819 11:38:35.852071   18776 out.go:270] * 
	* 
	W0819 11:38:35.854739   18776 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0819 11:38:35.861830   18776 out.go:201] 
** /stderr **
multinode_test.go:98: failed to start cluster. args "out/minikube-darwin-arm64 start -p multinode-587000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-587000 -n multinode-587000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-587000 -n multinode-587000: exit status 7 (68.132042ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-587000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/FreshStart2Nodes (9.95s)
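
The verbose trace above shows the full shape of the failure: libmachine builds the disk image successfully (both qemu-img calls return empty STDERR), only the socket_vmnet_client wrapper around qemu-system-aarch64 fails; minikube then deletes the half-created host, waits five seconds, and repeats the identical attempt once before exiting with GUEST_PROVISION. A compressed sketch of that control flow (the startHost stub is hypothetical and simply fails the way the log shows; this is not minikube's actual code):

	package main

	import (
		"errors"
		"fmt"
		"time"
	)

	// startHost stands in for libmachine's create path; in this run it
	// always fails at the socket_vmnet dial, never at qemu-img.
	func startHost() error {
		return errors.New(`creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1`)
	}

	func main() {
		if err := startHost(); err != nil {
			fmt.Println("! StartHost failed, but will try again:", err)
			time.Sleep(5 * time.Second) // "Will try again in 5 seconds ..."
			if err = startHost(); err != nil {
				// The GUEST_PROVISION path is what surfaces as exit status 80.
				fmt.Println("X Exiting due to GUEST_PROVISION:", err)
			}
		}
	}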
TestMultiNode/serial/DeployApp2Nodes (119.12s)
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-587000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-587000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml: exit status 1 (60.440833ms)
** stderr ** 
	error: cluster "multinode-587000" does not exist
** /stderr **
multinode_test.go:495: failed to create busybox deployment to multinode cluster
multinode_test.go:498: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-587000 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-587000 -- rollout status deployment/busybox: exit status 1 (57.923209ms)
** stderr ** 
	error: no server found for cluster "multinode-587000"
** /stderr **
multinode_test.go:500: failed to deploy busybox to multinode cluster
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-587000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-587000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (57.619208ms)
** stderr ** 
	error: no server found for cluster "multinode-587000"
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-587000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-587000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.593667ms)
** stderr ** 
	error: no server found for cluster "multinode-587000"
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-587000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-587000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (107.047959ms)
** stderr ** 
	error: no server found for cluster "multinode-587000"
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-587000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-587000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (105.421292ms)
** stderr ** 
	error: no server found for cluster "multinode-587000"
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-587000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-587000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.842417ms)
** stderr ** 
	error: no server found for cluster "multinode-587000"
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-587000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-587000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (105.901416ms)
** stderr ** 
	error: no server found for cluster "multinode-587000"
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-587000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-587000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (104.317458ms)
** stderr ** 
	error: no server found for cluster "multinode-587000"
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-587000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-587000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (106.924792ms)
** stderr ** 
	error: no server found for cluster "multinode-587000"
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-587000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-587000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.386833ms)
** stderr ** 
	error: no server found for cluster "multinode-587000"
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-587000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-587000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (105.797708ms)
** stderr ** 
	error: no server found for cluster "multinode-587000"
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-587000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-587000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (106.031917ms)
** stderr ** 
	error: no server found for cluster "multinode-587000"
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:524: failed to resolve pod IPs: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:528: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-587000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:528: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-587000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (57.94325ms)
** stderr ** 
	error: no server found for cluster "multinode-587000"
** /stderr **
multinode_test.go:530: failed get Pod names
multinode_test.go:536: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-587000 -- exec  -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-587000 -- exec  -- nslookup kubernetes.io: exit status 1 (57.817833ms)
** stderr ** 
	error: no server found for cluster "multinode-587000"
** /stderr **
multinode_test.go:538: Pod  could not resolve 'kubernetes.io': exit status 1
multinode_test.go:546: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-587000 -- exec  -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-587000 -- exec  -- nslookup kubernetes.default: exit status 1 (57.664375ms)
** stderr ** 
	error: no server found for cluster "multinode-587000"
** /stderr **
multinode_test.go:548: Pod  could not resolve 'kubernetes.default': exit status 1
multinode_test.go:554: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-587000 -- exec  -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-587000 -- exec  -- nslookup kubernetes.default.svc.cluster.local: exit status 1 (58.070666ms)
** stderr ** 
	error: no server found for cluster "multinode-587000"
** /stderr **
multinode_test.go:556: Pod  could not resolve local service (kubernetes.default.svc.cluster.local): exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-587000 -n multinode-587000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-587000 -n multinode-587000: exit status 7 (31.230667ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-587000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/DeployApp2Nodes (119.12s)
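
Most of the 119s here is the pod-IP poll: the test reruns the same kubectl query at multinode_test.go:505 eleven times against a cluster that was never created, so every attempt fails fast with "no server found" until the retries are exhausted. A sketch of that polling pattern (the interval and attempt count are illustrative, not the test's actual tuning, and it assumes the job's workspace layout for the binary path):

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	func main() {
		args := []string{"kubectl", "-p", "multinode-587000", "--",
			"get", "pods", "-o", "jsonpath={.items[*].status.podIP}"}
		for attempt := 1; attempt <= 11; attempt++ {
			out, err := exec.Command("out/minikube-darwin-arm64", args...).Output()
			if err == nil && len(out) > 0 {
				fmt.Printf("pod IPs: %s\n", out)
				return
			}
			time.Sleep(10 * time.Second) // keeps retrying; with no cluster it can never succeed
		}
		fmt.Println("failed to resolve pod IPs: retries exhausted")
	}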
TestMultiNode/serial/PingHostFrom2Pods (0.09s)
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-587000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:564: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-587000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (56.923541ms)
** stderr ** 
	error: no server found for cluster "multinode-587000"
** /stderr **
multinode_test.go:566: failed to get Pod names: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-587000 -n multinode-587000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-587000 -n multinode-587000: exit status 7 (31.08825ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-587000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/PingHostFrom2Pods (0.09s)
TestMultiNode/serial/AddNode (0.07s)
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-darwin-arm64 node add -p multinode-587000 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p multinode-587000 -v 3 --alsologtostderr: exit status 83 (42.029458ms)
-- stdout --
	* The control-plane node multinode-587000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p multinode-587000"
-- /stdout --
** stderr ** 
	I0819 11:40:35.188116   18863 out.go:345] Setting OutFile to fd 1 ...
	I0819 11:40:35.188353   18863 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 11:40:35.188357   18863 out.go:358] Setting ErrFile to fd 2...
	I0819 11:40:35.188359   18863 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 11:40:35.188481   18863 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19423-17178/.minikube/bin
	I0819 11:40:35.188726   18863 mustload.go:65] Loading cluster: multinode-587000
	I0819 11:40:35.188905   18863 config.go:182] Loaded profile config "multinode-587000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0819 11:40:35.193269   18863 out.go:177] * The control-plane node multinode-587000 host is not running: state=Stopped
	I0819 11:40:35.197125   18863 out.go:177]   To start a cluster, run: "minikube start -p multinode-587000"

                                                
                                                
** /stderr **
multinode_test.go:123: failed to add node to current cluster. args "out/minikube-darwin-arm64 node add -p multinode-587000 -v 3 --alsologtostderr" : exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-587000 -n multinode-587000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-587000 -n multinode-587000: exit status 7 (29.793667ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-587000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/AddNode (0.07s)

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-587000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
multinode_test.go:221: (dbg) Non-zero exit: kubectl --context multinode-587000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]": exit status 1 (26.575708ms)

                                                
                                                
** stderr ** 
	Error in configuration: context was not found for specified context: multinode-587000

                                                
                                                
** /stderr **
multinode_test.go:223: failed to 'kubectl get nodes' with args "kubectl --context multinode-587000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": exit status 1
multinode_test.go:230: failed to decode json from label list: args "kubectl --context multinode-587000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": unexpected end of JSON input
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-587000 -n multinode-587000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-587000 -n multinode-587000: exit status 7 (31.027083ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-587000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/MultiNodeLabels (0.06s)
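
The second error above (multinode_test.go:230) follows directly from the first: kubectl exited 1 without writing anything to stdout because the multinode-587000 context was never created, so the test's JSON decode ran over an empty byte slice. A minimal sketch of that failure mode, assuming the captured stdout is fed straight to encoding/json (the label-list shape is illustrative, not minikube's actual type):

    package main

    import (
    	"encoding/json"
    	"fmt"
    )

    func main() {
    	// kubectl produced no stdout, only an error on stderr.
    	empty := []byte("")

    	var labels []map[string]string // illustrative shape for the label list
    	err := json.Unmarshal(empty, &labels)
    	fmt.Println(err) // unexpected end of JSON input
    }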

                                                
                                    
TestMultiNode/serial/ProfileList (0.08s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
multinode_test.go:166: expected profile "multinode-587000" in json of 'profile list' include 3 nodes but have 1 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"multinode-587000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"multinode-587000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.0\",\"ClusterName\":\"multinode-587000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.0\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-587000 -n multinode-587000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-587000 -n multinode-587000: exit status 7 (30.5125ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-587000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/ProfileList (0.08s)
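
The assertion here is a node count: in the JSON captured above, valid[0].Config.Nodes holds a single entry (the stopped control plane) where the test expects three, since the two worker nodes were never added. A sketch of extracting that count from `profile list --output json`; the struct declares only keys visible in the blob, and plain `minikube` is an illustrative substitute for out/minikube-darwin-arm64:

    package main

    import (
    	"encoding/json"
    	"fmt"
    	"os/exec"
    )

    func main() {
    	out, err := exec.Command("minikube", "profile", "list", "--output", "json").Output()
    	if err != nil {
    		fmt.Println("profile list failed:", err)
    		return
    	}

    	// Only the keys visible in the captured output are declared here.
    	var profiles struct {
    		Valid []struct {
    			Config struct {
    				Nodes []json.RawMessage `json:"Nodes"`
    			} `json:"Config"`
    		} `json:"valid"`
    	}
    	if err := json.Unmarshal(out, &profiles); err != nil {
    		fmt.Println("decode failed:", err)
    		return
    	}
    	if len(profiles.Valid) == 0 {
    		fmt.Println("no valid profiles")
    		return
    	}
    	// 1 in the report above; the test requires 3.
    	fmt.Println("nodes:", len(profiles.Valid[0].Config.Nodes))
    }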

                                                
                                    
TestMultiNode/serial/CopyFile (0.06s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-587000 status --output json --alsologtostderr
multinode_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-587000 status --output json --alsologtostderr: exit status 7 (30.194375ms)

                                                
                                                
-- stdout --
	{"Name":"multinode-587000","Host":"Stopped","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Stopped","Worker":false}

                                                
                                                
-- /stdout --
** stderr ** 
	I0819 11:40:35.397056   18875 out.go:345] Setting OutFile to fd 1 ...
	I0819 11:40:35.397213   18875 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 11:40:35.397216   18875 out.go:358] Setting ErrFile to fd 2...
	I0819 11:40:35.397218   18875 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 11:40:35.397348   18875 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19423-17178/.minikube/bin
	I0819 11:40:35.397460   18875 out.go:352] Setting JSON to true
	I0819 11:40:35.397470   18875 mustload.go:65] Loading cluster: multinode-587000
	I0819 11:40:35.397530   18875 notify.go:220] Checking for updates...
	I0819 11:40:35.397672   18875 config.go:182] Loaded profile config "multinode-587000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0819 11:40:35.397679   18875 status.go:255] checking status of multinode-587000 ...
	I0819 11:40:35.397888   18875 status.go:330] multinode-587000 host status = "Stopped" (err=<nil>)
	I0819 11:40:35.397892   18875 status.go:343] host is not running, skipping remaining checks
	I0819 11:40:35.397894   18875 status.go:257] multinode-587000 status: &{Name:multinode-587000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:191: failed to decode json from status: args "out/minikube-darwin-arm64 -p multinode-587000 status --output json --alsologtostderr": json: cannot unmarshal object into Go value of type []cmd.Status
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-587000 -n multinode-587000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-587000 -n multinode-587000: exit status 7 (30.684125ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-587000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/CopyFile (0.06s)
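
The decode error at multinode_test.go:191 is a shape mismatch rather than invalid JSON: with a single stopped node, `status --output json` prints the bare object shown in the stdout block, while the test unmarshals into a slice ([]cmd.Status). A trimmed-down reproduction, with a stand-in Status limited to the fields visible above:

    package main

    import (
    	"encoding/json"
    	"fmt"
    )

    // Status is a trimmed stand-in for minikube's cmd.Status; only the fields
    // visible in the captured stdout are declared.
    type Status struct {
    	Name       string
    	Host       string
    	Kubelet    string
    	APIServer  string
    	Kubeconfig string
    }

    func main() {
    	out := []byte(`{"Name":"multinode-587000","Host":"Stopped","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Stopped","Worker":false}`)

    	var statuses []Status
    	if err := json.Unmarshal(out, &statuses); err != nil {
    		fmt.Println(err) // same failure mode as the log's []cmd.Status error
    	}

    	var single Status
    	if err := json.Unmarshal(out, &single); err == nil {
    		fmt.Printf("%+v\n", single) // decoding as a single object succeeds
    	}
    }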

                                                
                                    
TestMultiNode/serial/StopNode (0.14s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-587000 node stop m03
multinode_test.go:248: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-587000 node stop m03: exit status 85 (48.535584ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_295f67d8757edd996fe5c1e7ccde72c355ccf4dc_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:250: node stop returned an error. args "out/minikube-darwin-arm64 -p multinode-587000 node stop m03": exit status 85
multinode_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-587000 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-587000 status: exit status 7 (30.112791ms)

                                                
                                                
-- stdout --
	multinode-587000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-587000 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-587000 status --alsologtostderr: exit status 7 (30.544833ms)

                                                
                                                
-- stdout --
	multinode-587000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0819 11:40:35.537714   18883 out.go:345] Setting OutFile to fd 1 ...
	I0819 11:40:35.537866   18883 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 11:40:35.537873   18883 out.go:358] Setting ErrFile to fd 2...
	I0819 11:40:35.537876   18883 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 11:40:35.538017   18883 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19423-17178/.minikube/bin
	I0819 11:40:35.538148   18883 out.go:352] Setting JSON to false
	I0819 11:40:35.538158   18883 mustload.go:65] Loading cluster: multinode-587000
	I0819 11:40:35.538214   18883 notify.go:220] Checking for updates...
	I0819 11:40:35.538351   18883 config.go:182] Loaded profile config "multinode-587000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0819 11:40:35.538363   18883 status.go:255] checking status of multinode-587000 ...
	I0819 11:40:35.538565   18883 status.go:330] multinode-587000 host status = "Stopped" (err=<nil>)
	I0819 11:40:35.538568   18883 status.go:343] host is not running, skipping remaining checks
	I0819 11:40:35.538571   18883 status.go:257] multinode-587000 status: &{Name:multinode-587000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:267: incorrect number of running kubelets: args "out/minikube-darwin-arm64 -p multinode-587000 status --alsologtostderr": multinode-587000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-587000 -n multinode-587000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-587000 -n multinode-587000: exit status 7 (29.86075ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-587000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StopNode (0.14s)
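
The follow-up check at multinode_test.go:267 scans the plain-text status output for running kubelets; every field above reads Stopped, so no count can be met (and m03 never existed to stop, since AddNode failed earlier). A sketch of that style of check, using the stdout quoted above; counting substring occurrences is an assumption about the test's mechanics, not a quote of it:

    package main

    import (
    	"fmt"
    	"strings"
    )

    func main() {
    	// The status text captured in the failure above.
    	stdout := `multinode-587000
    type: Control Plane
    host: Stopped
    kubelet: Stopped
    apiserver: Stopped
    kubeconfig: Stopped
    `
    	running := strings.Count(stdout, "kubelet: Running")
    	fmt.Println("running kubelets:", running) // 0 here
    }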

                                                
                                    
TestMultiNode/serial/StartAfterStop (56.95s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-587000 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-587000 node start m03 -v=7 --alsologtostderr: exit status 85 (47.029334ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0819 11:40:35.597843   18887 out.go:345] Setting OutFile to fd 1 ...
	I0819 11:40:35.598574   18887 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 11:40:35.598578   18887 out.go:358] Setting ErrFile to fd 2...
	I0819 11:40:35.598581   18887 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 11:40:35.598734   18887 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19423-17178/.minikube/bin
	I0819 11:40:35.598960   18887 mustload.go:65] Loading cluster: multinode-587000
	I0819 11:40:35.599170   18887 config.go:182] Loaded profile config "multinode-587000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0819 11:40:35.603024   18887 out.go:201] 
	W0819 11:40:35.606816   18887 out.go:270] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	W0819 11:40:35.606821   18887 out.go:270] * 
	* 
	W0819 11:40:35.609100   18887 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0819 11:40:35.612840   18887 out.go:201] 

                                                
                                                
** /stderr **
multinode_test.go:284: I0819 11:40:35.597843   18887 out.go:345] Setting OutFile to fd 1 ...
I0819 11:40:35.598574   18887 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0819 11:40:35.598578   18887 out.go:358] Setting ErrFile to fd 2...
I0819 11:40:35.598581   18887 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0819 11:40:35.598734   18887 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19423-17178/.minikube/bin
I0819 11:40:35.598960   18887 mustload.go:65] Loading cluster: multinode-587000
I0819 11:40:35.599170   18887 config.go:182] Loaded profile config "multinode-587000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0819 11:40:35.603024   18887 out.go:201] 
W0819 11:40:35.606816   18887 out.go:270] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
W0819 11:40:35.606821   18887 out.go:270] * 
* 
W0819 11:40:35.609100   18887 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
I0819 11:40:35.612840   18887 out.go:201] 

                                                
                                                
multinode_test.go:285: node start returned an error. args "out/minikube-darwin-arm64 -p multinode-587000 node start m03 -v=7 --alsologtostderr": exit status 85
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-587000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-587000 status -v=7 --alsologtostderr: exit status 7 (31.429833ms)

                                                
                                                
-- stdout --
	multinode-587000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0819 11:40:35.646505   18889 out.go:345] Setting OutFile to fd 1 ...
	I0819 11:40:35.646639   18889 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 11:40:35.646642   18889 out.go:358] Setting ErrFile to fd 2...
	I0819 11:40:35.646645   18889 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 11:40:35.646771   18889 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19423-17178/.minikube/bin
	I0819 11:40:35.646879   18889 out.go:352] Setting JSON to false
	I0819 11:40:35.646893   18889 mustload.go:65] Loading cluster: multinode-587000
	I0819 11:40:35.646944   18889 notify.go:220] Checking for updates...
	I0819 11:40:35.647081   18889 config.go:182] Loaded profile config "multinode-587000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0819 11:40:35.647088   18889 status.go:255] checking status of multinode-587000 ...
	I0819 11:40:35.647292   18889 status.go:330] multinode-587000 host status = "Stopped" (err=<nil>)
	I0819 11:40:35.647296   18889 status.go:343] host is not running, skipping remaining checks
	I0819 11:40:35.647299   18889 status.go:257] multinode-587000 status: &{Name:multinode-587000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-587000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-587000 status -v=7 --alsologtostderr: exit status 7 (75.468ms)

                                                
                                                
-- stdout --
	multinode-587000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0819 11:40:36.299328   18891 out.go:345] Setting OutFile to fd 1 ...
	I0819 11:40:36.299531   18891 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 11:40:36.299536   18891 out.go:358] Setting ErrFile to fd 2...
	I0819 11:40:36.299539   18891 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 11:40:36.299717   18891 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19423-17178/.minikube/bin
	I0819 11:40:36.299914   18891 out.go:352] Setting JSON to false
	I0819 11:40:36.299929   18891 mustload.go:65] Loading cluster: multinode-587000
	I0819 11:40:36.299960   18891 notify.go:220] Checking for updates...
	I0819 11:40:36.300182   18891 config.go:182] Loaded profile config "multinode-587000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0819 11:40:36.300192   18891 status.go:255] checking status of multinode-587000 ...
	I0819 11:40:36.300475   18891 status.go:330] multinode-587000 host status = "Stopped" (err=<nil>)
	I0819 11:40:36.300481   18891 status.go:343] host is not running, skipping remaining checks
	I0819 11:40:36.300483   18891 status.go:257] multinode-587000 status: &{Name:multinode-587000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-587000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-587000 status -v=7 --alsologtostderr: exit status 7 (75.40775ms)

                                                
                                                
-- stdout --
	multinode-587000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0819 11:40:38.007423   18893 out.go:345] Setting OutFile to fd 1 ...
	I0819 11:40:38.007638   18893 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 11:40:38.007643   18893 out.go:358] Setting ErrFile to fd 2...
	I0819 11:40:38.007646   18893 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 11:40:38.007803   18893 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19423-17178/.minikube/bin
	I0819 11:40:38.007960   18893 out.go:352] Setting JSON to false
	I0819 11:40:38.007975   18893 mustload.go:65] Loading cluster: multinode-587000
	I0819 11:40:38.008015   18893 notify.go:220] Checking for updates...
	I0819 11:40:38.008272   18893 config.go:182] Loaded profile config "multinode-587000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0819 11:40:38.008284   18893 status.go:255] checking status of multinode-587000 ...
	I0819 11:40:38.008580   18893 status.go:330] multinode-587000 host status = "Stopped" (err=<nil>)
	I0819 11:40:38.008585   18893 status.go:343] host is not running, skipping remaining checks
	I0819 11:40:38.008588   18893 status.go:257] multinode-587000 status: &{Name:multinode-587000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-587000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-587000 status -v=7 --alsologtostderr: exit status 7 (73.309584ms)

                                                
                                                
-- stdout --
	multinode-587000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0819 11:40:40.468746   18895 out.go:345] Setting OutFile to fd 1 ...
	I0819 11:40:40.468963   18895 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 11:40:40.468967   18895 out.go:358] Setting ErrFile to fd 2...
	I0819 11:40:40.468970   18895 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 11:40:40.469126   18895 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19423-17178/.minikube/bin
	I0819 11:40:40.469290   18895 out.go:352] Setting JSON to false
	I0819 11:40:40.469304   18895 mustload.go:65] Loading cluster: multinode-587000
	I0819 11:40:40.469337   18895 notify.go:220] Checking for updates...
	I0819 11:40:40.469537   18895 config.go:182] Loaded profile config "multinode-587000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0819 11:40:40.469546   18895 status.go:255] checking status of multinode-587000 ...
	I0819 11:40:40.469818   18895 status.go:330] multinode-587000 host status = "Stopped" (err=<nil>)
	I0819 11:40:40.469824   18895 status.go:343] host is not running, skipping remaining checks
	I0819 11:40:40.469827   18895 status.go:257] multinode-587000 status: &{Name:multinode-587000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-587000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-587000 status -v=7 --alsologtostderr: exit status 7 (74.619292ms)

                                                
                                                
-- stdout --
	multinode-587000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0819 11:40:45.267717   18897 out.go:345] Setting OutFile to fd 1 ...
	I0819 11:40:45.267907   18897 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 11:40:45.267912   18897 out.go:358] Setting ErrFile to fd 2...
	I0819 11:40:45.267915   18897 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 11:40:45.268075   18897 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19423-17178/.minikube/bin
	I0819 11:40:45.268246   18897 out.go:352] Setting JSON to false
	I0819 11:40:45.268259   18897 mustload.go:65] Loading cluster: multinode-587000
	I0819 11:40:45.268300   18897 notify.go:220] Checking for updates...
	I0819 11:40:45.268516   18897 config.go:182] Loaded profile config "multinode-587000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0819 11:40:45.268525   18897 status.go:255] checking status of multinode-587000 ...
	I0819 11:40:45.268806   18897 status.go:330] multinode-587000 host status = "Stopped" (err=<nil>)
	I0819 11:40:45.268811   18897 status.go:343] host is not running, skipping remaining checks
	I0819 11:40:45.268813   18897 status.go:257] multinode-587000 status: &{Name:multinode-587000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-587000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-587000 status -v=7 --alsologtostderr: exit status 7 (74.522166ms)

                                                
                                                
-- stdout --
	multinode-587000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0819 11:40:49.966469   18903 out.go:345] Setting OutFile to fd 1 ...
	I0819 11:40:49.966654   18903 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 11:40:49.966658   18903 out.go:358] Setting ErrFile to fd 2...
	I0819 11:40:49.966662   18903 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 11:40:49.966851   18903 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19423-17178/.minikube/bin
	I0819 11:40:49.967018   18903 out.go:352] Setting JSON to false
	I0819 11:40:49.967035   18903 mustload.go:65] Loading cluster: multinode-587000
	I0819 11:40:49.967070   18903 notify.go:220] Checking for updates...
	I0819 11:40:49.967305   18903 config.go:182] Loaded profile config "multinode-587000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0819 11:40:49.967314   18903 status.go:255] checking status of multinode-587000 ...
	I0819 11:40:49.967588   18903 status.go:330] multinode-587000 host status = "Stopped" (err=<nil>)
	I0819 11:40:49.967593   18903 status.go:343] host is not running, skipping remaining checks
	I0819 11:40:49.967596   18903 status.go:257] multinode-587000 status: &{Name:multinode-587000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-587000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-587000 status -v=7 --alsologtostderr: exit status 7 (75.051583ms)

                                                
                                                
-- stdout --
	multinode-587000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0819 11:40:53.915043   18905 out.go:345] Setting OutFile to fd 1 ...
	I0819 11:40:53.915266   18905 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 11:40:53.915270   18905 out.go:358] Setting ErrFile to fd 2...
	I0819 11:40:53.915273   18905 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 11:40:53.915442   18905 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19423-17178/.minikube/bin
	I0819 11:40:53.915593   18905 out.go:352] Setting JSON to false
	I0819 11:40:53.915606   18905 mustload.go:65] Loading cluster: multinode-587000
	I0819 11:40:53.915648   18905 notify.go:220] Checking for updates...
	I0819 11:40:53.915884   18905 config.go:182] Loaded profile config "multinode-587000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0819 11:40:53.915894   18905 status.go:255] checking status of multinode-587000 ...
	I0819 11:40:53.916187   18905 status.go:330] multinode-587000 host status = "Stopped" (err=<nil>)
	I0819 11:40:53.916192   18905 status.go:343] host is not running, skipping remaining checks
	I0819 11:40:53.916196   18905 status.go:257] multinode-587000 status: &{Name:multinode-587000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-587000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-587000 status -v=7 --alsologtostderr: exit status 7 (73.174ms)

                                                
                                                
-- stdout --
	multinode-587000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0819 11:41:08.141150   18907 out.go:345] Setting OutFile to fd 1 ...
	I0819 11:41:08.141348   18907 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 11:41:08.141353   18907 out.go:358] Setting ErrFile to fd 2...
	I0819 11:41:08.141355   18907 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 11:41:08.141529   18907 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19423-17178/.minikube/bin
	I0819 11:41:08.141705   18907 out.go:352] Setting JSON to false
	I0819 11:41:08.141720   18907 mustload.go:65] Loading cluster: multinode-587000
	I0819 11:41:08.141754   18907 notify.go:220] Checking for updates...
	I0819 11:41:08.142025   18907 config.go:182] Loaded profile config "multinode-587000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0819 11:41:08.142034   18907 status.go:255] checking status of multinode-587000 ...
	I0819 11:41:08.142338   18907 status.go:330] multinode-587000 host status = "Stopped" (err=<nil>)
	I0819 11:41:08.142343   18907 status.go:343] host is not running, skipping remaining checks
	I0819 11:41:08.142346   18907 status.go:257] multinode-587000 status: &{Name:multinode-587000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-587000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-587000 status -v=7 --alsologtostderr: exit status 7 (73.378333ms)

                                                
                                                
-- stdout --
	multinode-587000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0819 11:41:19.241544   18909 out.go:345] Setting OutFile to fd 1 ...
	I0819 11:41:19.241772   18909 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 11:41:19.241777   18909 out.go:358] Setting ErrFile to fd 2...
	I0819 11:41:19.241780   18909 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 11:41:19.241971   18909 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19423-17178/.minikube/bin
	I0819 11:41:19.242129   18909 out.go:352] Setting JSON to false
	I0819 11:41:19.242149   18909 mustload.go:65] Loading cluster: multinode-587000
	I0819 11:41:19.242181   18909 notify.go:220] Checking for updates...
	I0819 11:41:19.242382   18909 config.go:182] Loaded profile config "multinode-587000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0819 11:41:19.242393   18909 status.go:255] checking status of multinode-587000 ...
	I0819 11:41:19.242681   18909 status.go:330] multinode-587000 host status = "Stopped" (err=<nil>)
	I0819 11:41:19.242686   18909 status.go:343] host is not running, skipping remaining checks
	I0819 11:41:19.242689   18909 status.go:257] multinode-587000 status: &{Name:multinode-587000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-587000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-587000 status -v=7 --alsologtostderr: exit status 7 (73.56175ms)

                                                
                                                
-- stdout --
	multinode-587000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0819 11:41:32.483952   18914 out.go:345] Setting OutFile to fd 1 ...
	I0819 11:41:32.484151   18914 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 11:41:32.484156   18914 out.go:358] Setting ErrFile to fd 2...
	I0819 11:41:32.484159   18914 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 11:41:32.484311   18914 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19423-17178/.minikube/bin
	I0819 11:41:32.484461   18914 out.go:352] Setting JSON to false
	I0819 11:41:32.484473   18914 mustload.go:65] Loading cluster: multinode-587000
	I0819 11:41:32.484507   18914 notify.go:220] Checking for updates...
	I0819 11:41:32.484729   18914 config.go:182] Loaded profile config "multinode-587000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0819 11:41:32.484738   18914 status.go:255] checking status of multinode-587000 ...
	I0819 11:41:32.485014   18914 status.go:330] multinode-587000 host status = "Stopped" (err=<nil>)
	I0819 11:41:32.485019   18914 status.go:343] host is not running, skipping remaining checks
	I0819 11:41:32.485022   18914 status.go:257] multinode-587000 status: &{Name:multinode-587000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:294: failed to run minikube status. args "out/minikube-darwin-arm64 -p multinode-587000 status -v=7 --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-587000 -n multinode-587000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-587000 -n multinode-587000: exit status 7 (34.121917ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-587000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StartAfterStop (56.95s)
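
Most of StartAfterStop's 56.95s is spent right here: after `node start m03` fails, the harness re-runs `status -v=7 --alsologtostderr` on a widening schedule (the stderr timestamps step from 11:40:35 out to 11:41:32) waiting for the node to report Running, then gives up at multinode_test.go:294. A sketch of that polling shape; the doubling backoff is an assumption, since only the repeated invocations and their growing gaps are visible in the log:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    	"time"
    )

    func main() {
    	deadline := time.Now().Add(1 * time.Minute)
    	wait := 500 * time.Millisecond
    	for time.Now().Before(deadline) {
    		out, _ := exec.Command("out/minikube-darwin-arm64", "-p", "multinode-587000",
    			"status", "-v=7", "--alsologtostderr").CombinedOutput()
    		if strings.Contains(string(out), "host: Running") {
    			fmt.Println("node is back")
    			return
    		}
    		time.Sleep(wait)
    		wait *= 2 // hypothetical backoff
    	}
    	fmt.Println("timed out waiting for node to run")
    }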

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (7.37s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-587000
multinode_test.go:321: (dbg) Run:  out/minikube-darwin-arm64 stop -p multinode-587000
multinode_test.go:321: (dbg) Done: out/minikube-darwin-arm64 stop -p multinode-587000: (2.018561333s)
multinode_test.go:326: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-587000 --wait=true -v=8 --alsologtostderr
multinode_test.go:326: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-587000 --wait=true -v=8 --alsologtostderr: exit status 80 (5.218887666s)

                                                
                                                
-- stdout --
	* [multinode-587000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19423
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19423-17178/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19423-17178/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "multinode-587000" primary control-plane node in "multinode-587000" cluster
	* Restarting existing qemu2 VM for "multinode-587000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "multinode-587000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0819 11:41:34.631345   18932 out.go:345] Setting OutFile to fd 1 ...
	I0819 11:41:34.631485   18932 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 11:41:34.631490   18932 out.go:358] Setting ErrFile to fd 2...
	I0819 11:41:34.631493   18932 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 11:41:34.631650   18932 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19423-17178/.minikube/bin
	I0819 11:41:34.632806   18932 out.go:352] Setting JSON to false
	I0819 11:41:34.651923   18932 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":7861,"bootTime":1724085033,"procs":504,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0819 11:41:34.651994   18932 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0819 11:41:34.656872   18932 out.go:177] * [multinode-587000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0819 11:41:34.663761   18932 out.go:177]   - MINIKUBE_LOCATION=19423
	I0819 11:41:34.663797   18932 notify.go:220] Checking for updates...
	I0819 11:41:34.670755   18932 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19423-17178/kubeconfig
	I0819 11:41:34.673759   18932 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0819 11:41:34.676784   18932 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0819 11:41:34.679795   18932 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19423-17178/.minikube
	I0819 11:41:34.682772   18932 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0819 11:41:34.686144   18932 config.go:182] Loaded profile config "multinode-587000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0819 11:41:34.686202   18932 driver.go:394] Setting default libvirt URI to qemu:///system
	I0819 11:41:34.690759   18932 out.go:177] * Using the qemu2 driver based on existing profile
	I0819 11:41:34.697752   18932 start.go:297] selected driver: qemu2
	I0819 11:41:34.697760   18932 start.go:901] validating driver "qemu2" against &{Name:multinode-587000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:multinode-587000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 11:41:34.697805   18932 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0819 11:41:34.700010   18932 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0819 11:41:34.700040   18932 cni.go:84] Creating CNI manager for ""
	I0819 11:41:34.700045   18932 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0819 11:41:34.700099   18932 start.go:340] cluster config:
	{Name:multinode-587000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:multinode-587000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 11:41:34.703860   18932 iso.go:125] acquiring lock: {Name:mk19d2f9dcacbbe3e95275f970a96b2d84f09461 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 11:41:34.709730   18932 out.go:177] * Starting "multinode-587000" primary control-plane node in "multinode-587000" cluster
	I0819 11:41:34.713653   18932 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0819 11:41:34.713668   18932 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19423-17178/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0819 11:41:34.713674   18932 cache.go:56] Caching tarball of preloaded images
	I0819 11:41:34.713743   18932 preload.go:172] Found /Users/jenkins/minikube-integration/19423-17178/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0819 11:41:34.713749   18932 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0819 11:41:34.713809   18932 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19423-17178/.minikube/profiles/multinode-587000/config.json ...
	I0819 11:41:34.714227   18932 start.go:360] acquireMachinesLock for multinode-587000: {Name:mk51682daf9d132f21e1aba1b32ed96e7f05425f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 11:41:34.714265   18932 start.go:364] duration metric: took 29.416µs to acquireMachinesLock for "multinode-587000"
	I0819 11:41:34.714275   18932 start.go:96] Skipping create...Using existing machine configuration
	I0819 11:41:34.714279   18932 fix.go:54] fixHost starting: 
	I0819 11:41:34.714410   18932 fix.go:112] recreateIfNeeded on multinode-587000: state=Stopped err=<nil>
	W0819 11:41:34.714418   18932 fix.go:138] unexpected machine state, will restart: <nil>
	I0819 11:41:34.718716   18932 out.go:177] * Restarting existing qemu2 VM for "multinode-587000" ...
	I0819 11:41:34.726534   18932 qemu.go:418] Using hvf for hardware acceleration
	I0819 11:41:34.726568   18932 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/multinode-587000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19423-17178/.minikube/machines/multinode-587000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/multinode-587000/qemu.pid -device virtio-net-pci,netdev=net0,mac=d6:aa:7d:fe:4e:9d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/multinode-587000/disk.qcow2
	I0819 11:41:34.728645   18932 main.go:141] libmachine: STDOUT: 
	I0819 11:41:34.728667   18932 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0819 11:41:34.728693   18932 fix.go:56] duration metric: took 14.413625ms for fixHost
	I0819 11:41:34.728698   18932 start.go:83] releasing machines lock for "multinode-587000", held for 14.428875ms
	W0819 11:41:34.728706   18932 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0819 11:41:34.728739   18932 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0819 11:41:34.728745   18932 start.go:729] Will try again in 5 seconds ...
	I0819 11:41:39.730927   18932 start.go:360] acquireMachinesLock for multinode-587000: {Name:mk51682daf9d132f21e1aba1b32ed96e7f05425f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 11:41:39.731322   18932 start.go:364] duration metric: took 322.958µs to acquireMachinesLock for "multinode-587000"
	I0819 11:41:39.731453   18932 start.go:96] Skipping create...Using existing machine configuration
	I0819 11:41:39.731473   18932 fix.go:54] fixHost starting: 
	I0819 11:41:39.732131   18932 fix.go:112] recreateIfNeeded on multinode-587000: state=Stopped err=<nil>
	W0819 11:41:39.732158   18932 fix.go:138] unexpected machine state, will restart: <nil>
	I0819 11:41:39.740588   18932 out.go:177] * Restarting existing qemu2 VM for "multinode-587000" ...
	I0819 11:41:39.744517   18932 qemu.go:418] Using hvf for hardware acceleration
	I0819 11:41:39.744682   18932 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/multinode-587000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19423-17178/.minikube/machines/multinode-587000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/multinode-587000/qemu.pid -device virtio-net-pci,netdev=net0,mac=d6:aa:7d:fe:4e:9d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/multinode-587000/disk.qcow2
	I0819 11:41:39.753658   18932 main.go:141] libmachine: STDOUT: 
	I0819 11:41:39.753730   18932 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0819 11:41:39.753797   18932 fix.go:56] duration metric: took 22.321458ms for fixHost
	I0819 11:41:39.753815   18932 start.go:83] releasing machines lock for "multinode-587000", held for 22.469375ms
	W0819 11:41:39.753981   18932 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p multinode-587000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-587000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0819 11:41:39.762480   18932 out.go:201] 
	W0819 11:41:39.766564   18932 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0819 11:41:39.766592   18932 out.go:270] * 
	* 
	W0819 11:41:39.769354   18932 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0819 11:41:39.776495   18932 out.go:201] 

** /stderr **
multinode_test.go:328: failed to run minikube start. args "out/minikube-darwin-arm64 node list -p multinode-587000" : exit status 80
multinode_test.go:331: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-587000
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-587000 -n multinode-587000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-587000 -n multinode-587000: exit status 7 (33.096291ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-587000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (7.37s)
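
Every restart attempt in the trace above dies at the same step: socket_vmnet_client gets "Connection refused" on the daemon's unix socket, so qemu is never handed its network fd. A minimal diagnostic sketch for the host running this suite, assuming only the paths shown in the log (client at /opt/socket_vmnet/bin/socket_vmnet_client, socket at /var/run/socket_vmnet); the launchd/Homebrew service details are assumptions, not taken from this report:

	# Does the unix socket exist, and is the daemon actually running?
	ls -l /var/run/socket_vmnet
	pgrep -fl socket_vmnet
	# If socket_vmnet was installed as a launchd service (e.g. via Homebrew),
	# look for it there; the exact label depends on the install method.
	sudo launchctl list | grep -i socket_vmnet

If the socket file exists but connections are refused, the daemon has most likely died and left a stale socket behind; restarting the service before re-running the suite should clear the remaining TestMultiNode failures, which all share this root cause.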

TestMultiNode/serial/DeleteNode (0.1s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-587000 node delete m03
multinode_test.go:416: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-587000 node delete m03: exit status 83 (41.915666ms)

-- stdout --
	* The control-plane node multinode-587000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p multinode-587000"

-- /stdout --
multinode_test.go:418: node delete returned an error. args "out/minikube-darwin-arm64 -p multinode-587000 node delete m03": exit status 83
multinode_test.go:422: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-587000 status --alsologtostderr
multinode_test.go:422: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-587000 status --alsologtostderr: exit status 7 (29.634667ms)

-- stdout --
	multinode-587000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0819 11:41:39.963107   18946 out.go:345] Setting OutFile to fd 1 ...
	I0819 11:41:39.963251   18946 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 11:41:39.963254   18946 out.go:358] Setting ErrFile to fd 2...
	I0819 11:41:39.963257   18946 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 11:41:39.963375   18946 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19423-17178/.minikube/bin
	I0819 11:41:39.963488   18946 out.go:352] Setting JSON to false
	I0819 11:41:39.963497   18946 mustload.go:65] Loading cluster: multinode-587000
	I0819 11:41:39.963554   18946 notify.go:220] Checking for updates...
	I0819 11:41:39.963722   18946 config.go:182] Loaded profile config "multinode-587000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0819 11:41:39.963736   18946 status.go:255] checking status of multinode-587000 ...
	I0819 11:41:39.963943   18946 status.go:330] multinode-587000 host status = "Stopped" (err=<nil>)
	I0819 11:41:39.963947   18946 status.go:343] host is not running, skipping remaining checks
	I0819 11:41:39.963949   18946 status.go:257] multinode-587000 status: &{Name:multinode-587000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:424: failed to run minikube status. args "out/minikube-darwin-arm64 -p multinode-587000 status --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-587000 -n multinode-587000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-587000 -n multinode-587000: exit status 7 (30.327333ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-587000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/DeleteNode (0.10s)

TestMultiNode/serial/StopMultiNode (3.03s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-587000 stop
multinode_test.go:345: (dbg) Done: out/minikube-darwin-arm64 -p multinode-587000 stop: (2.90700025s)
multinode_test.go:351: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-587000 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-587000 status: exit status 7 (62.1585ms)

-- stdout --
	multinode-587000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-587000 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-587000 status --alsologtostderr: exit status 7 (32.623417ms)

-- stdout --
	multinode-587000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0819 11:41:42.995796   18970 out.go:345] Setting OutFile to fd 1 ...
	I0819 11:41:42.995932   18970 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 11:41:42.995935   18970 out.go:358] Setting ErrFile to fd 2...
	I0819 11:41:42.995938   18970 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 11:41:42.996070   18970 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19423-17178/.minikube/bin
	I0819 11:41:42.996178   18970 out.go:352] Setting JSON to false
	I0819 11:41:42.996188   18970 mustload.go:65] Loading cluster: multinode-587000
	I0819 11:41:42.996241   18970 notify.go:220] Checking for updates...
	I0819 11:41:42.996378   18970 config.go:182] Loaded profile config "multinode-587000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0819 11:41:42.996386   18970 status.go:255] checking status of multinode-587000 ...
	I0819 11:41:42.996592   18970 status.go:330] multinode-587000 host status = "Stopped" (err=<nil>)
	I0819 11:41:42.996596   18970 status.go:343] host is not running, skipping remaining checks
	I0819 11:41:42.996599   18970 status.go:257] multinode-587000 status: &{Name:multinode-587000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:364: incorrect number of stopped hosts: args "out/minikube-darwin-arm64 -p multinode-587000 status --alsologtostderr": multinode-587000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

multinode_test.go:368: incorrect number of stopped kubelets: args "out/minikube-darwin-arm64 -p multinode-587000 status --alsologtostderr": multinode-587000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-587000 -n multinode-587000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-587000 -n multinode-587000: exit status 7 (30.446708ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-587000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StopMultiNode (3.03s)

TestMultiNode/serial/RestartMultiNode (5.25s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-587000 --wait=true -v=8 --alsologtostderr --driver=qemu2 
multinode_test.go:376: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-587000 --wait=true -v=8 --alsologtostderr --driver=qemu2 : exit status 80 (5.183187083s)

-- stdout --
	* [multinode-587000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19423
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19423-17178/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19423-17178/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "multinode-587000" primary control-plane node in "multinode-587000" cluster
	* Restarting existing qemu2 VM for "multinode-587000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "multinode-587000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0819 11:41:43.056722   18974 out.go:345] Setting OutFile to fd 1 ...
	I0819 11:41:43.056852   18974 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 11:41:43.056856   18974 out.go:358] Setting ErrFile to fd 2...
	I0819 11:41:43.056859   18974 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 11:41:43.056991   18974 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19423-17178/.minikube/bin
	I0819 11:41:43.058038   18974 out.go:352] Setting JSON to false
	I0819 11:41:43.074490   18974 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":7870,"bootTime":1724085033,"procs":504,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0819 11:41:43.074561   18974 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0819 11:41:43.079756   18974 out.go:177] * [multinode-587000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0819 11:41:43.086746   18974 out.go:177]   - MINIKUBE_LOCATION=19423
	I0819 11:41:43.086820   18974 notify.go:220] Checking for updates...
	I0819 11:41:43.093648   18974 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19423-17178/kubeconfig
	I0819 11:41:43.096628   18974 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0819 11:41:43.099666   18974 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0819 11:41:43.102693   18974 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19423-17178/.minikube
	I0819 11:41:43.105652   18974 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0819 11:41:43.109004   18974 config.go:182] Loaded profile config "multinode-587000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0819 11:41:43.109244   18974 driver.go:394] Setting default libvirt URI to qemu:///system
	I0819 11:41:43.113693   18974 out.go:177] * Using the qemu2 driver based on existing profile
	I0819 11:41:43.120662   18974 start.go:297] selected driver: qemu2
	I0819 11:41:43.120668   18974 start.go:901] validating driver "qemu2" against &{Name:multinode-587000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:multinode-587000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 11:41:43.120719   18974 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0819 11:41:43.122948   18974 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0819 11:41:43.122993   18974 cni.go:84] Creating CNI manager for ""
	I0819 11:41:43.122999   18974 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0819 11:41:43.123045   18974 start.go:340] cluster config:
	{Name:multinode-587000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:multinode-587000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 11:41:43.126530   18974 iso.go:125] acquiring lock: {Name:mk19d2f9dcacbbe3e95275f970a96b2d84f09461 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 11:41:43.132674   18974 out.go:177] * Starting "multinode-587000" primary control-plane node in "multinode-587000" cluster
	I0819 11:41:43.136566   18974 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0819 11:41:43.136578   18974 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19423-17178/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0819 11:41:43.136584   18974 cache.go:56] Caching tarball of preloaded images
	I0819 11:41:43.136632   18974 preload.go:172] Found /Users/jenkins/minikube-integration/19423-17178/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0819 11:41:43.136637   18974 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0819 11:41:43.136684   18974 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19423-17178/.minikube/profiles/multinode-587000/config.json ...
	I0819 11:41:43.137002   18974 start.go:360] acquireMachinesLock for multinode-587000: {Name:mk51682daf9d132f21e1aba1b32ed96e7f05425f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 11:41:43.137030   18974 start.go:364] duration metric: took 21.417µs to acquireMachinesLock for "multinode-587000"
	I0819 11:41:43.137039   18974 start.go:96] Skipping create...Using existing machine configuration
	I0819 11:41:43.137043   18974 fix.go:54] fixHost starting: 
	I0819 11:41:43.137159   18974 fix.go:112] recreateIfNeeded on multinode-587000: state=Stopped err=<nil>
	W0819 11:41:43.137166   18974 fix.go:138] unexpected machine state, will restart: <nil>
	I0819 11:41:43.144599   18974 out.go:177] * Restarting existing qemu2 VM for "multinode-587000" ...
	I0819 11:41:43.148550   18974 qemu.go:418] Using hvf for hardware acceleration
	I0819 11:41:43.148586   18974 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/multinode-587000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19423-17178/.minikube/machines/multinode-587000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/multinode-587000/qemu.pid -device virtio-net-pci,netdev=net0,mac=d6:aa:7d:fe:4e:9d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/multinode-587000/disk.qcow2
	I0819 11:41:43.150794   18974 main.go:141] libmachine: STDOUT: 
	I0819 11:41:43.150814   18974 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0819 11:41:43.150839   18974 fix.go:56] duration metric: took 13.794542ms for fixHost
	I0819 11:41:43.150855   18974 start.go:83] releasing machines lock for "multinode-587000", held for 13.820792ms
	W0819 11:41:43.150861   18974 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0819 11:41:43.150890   18974 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0819 11:41:43.150894   18974 start.go:729] Will try again in 5 seconds ...
	I0819 11:41:48.153084   18974 start.go:360] acquireMachinesLock for multinode-587000: {Name:mk51682daf9d132f21e1aba1b32ed96e7f05425f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 11:41:48.153524   18974 start.go:364] duration metric: took 329.5µs to acquireMachinesLock for "multinode-587000"
	I0819 11:41:48.153647   18974 start.go:96] Skipping create...Using existing machine configuration
	I0819 11:41:48.153664   18974 fix.go:54] fixHost starting: 
	I0819 11:41:48.154332   18974 fix.go:112] recreateIfNeeded on multinode-587000: state=Stopped err=<nil>
	W0819 11:41:48.154363   18974 fix.go:138] unexpected machine state, will restart: <nil>
	I0819 11:41:48.161857   18974 out.go:177] * Restarting existing qemu2 VM for "multinode-587000" ...
	I0819 11:41:48.165869   18974 qemu.go:418] Using hvf for hardware acceleration
	I0819 11:41:48.166104   18974 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/multinode-587000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19423-17178/.minikube/machines/multinode-587000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/multinode-587000/qemu.pid -device virtio-net-pci,netdev=net0,mac=d6:aa:7d:fe:4e:9d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/multinode-587000/disk.qcow2
	I0819 11:41:48.175413   18974 main.go:141] libmachine: STDOUT: 
	I0819 11:41:48.175489   18974 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0819 11:41:48.175581   18974 fix.go:56] duration metric: took 21.912333ms for fixHost
	I0819 11:41:48.175604   18974 start.go:83] releasing machines lock for "multinode-587000", held for 22.061292ms
	W0819 11:41:48.175820   18974 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p multinode-587000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-587000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0819 11:41:48.183923   18974 out.go:201] 
	W0819 11:41:48.187942   18974 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0819 11:41:48.187979   18974 out.go:270] * 
	* 
	W0819 11:41:48.190923   18974 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0819 11:41:48.197820   18974 out.go:201] 

** /stderr **
multinode_test.go:378: failed to start cluster. args "out/minikube-darwin-arm64 start -p multinode-587000 --wait=true -v=8 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-587000 -n multinode-587000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-587000 -n multinode-587000: exit status 7 (68.290375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-587000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/RestartMultiNode (5.25s)

TestMultiNode/serial/ValidateNameConflict (21.83s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-587000
multinode_test.go:464: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-587000-m01 --driver=qemu2 
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-587000-m01 --driver=qemu2 : exit status 80 (10.47709025s)

-- stdout --
	* [multinode-587000-m01] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19423
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19423-17178/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19423-17178/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "multinode-587000-m01" primary control-plane node in "multinode-587000-m01" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-587000-m01" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-587000-m01" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-587000-m02 --driver=qemu2 
multinode_test.go:472: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-587000-m02 --driver=qemu2 : exit status 80 (11.124803042s)

-- stdout --
	* [multinode-587000-m02] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19423
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19423-17178/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19423-17178/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "multinode-587000-m02" primary control-plane node in "multinode-587000-m02" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-587000-m02" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-587000-m02" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:474: failed to start profile. args "out/minikube-darwin-arm64 start -p multinode-587000-m02 --driver=qemu2 " : exit status 80
multinode_test.go:479: (dbg) Run:  out/minikube-darwin-arm64 node add -p multinode-587000
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p multinode-587000: exit status 83 (79.4975ms)

-- stdout --
	* The control-plane node multinode-587000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p multinode-587000"

-- /stdout --
multinode_test.go:484: (dbg) Run:  out/minikube-darwin-arm64 delete -p multinode-587000-m02
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-587000 -n multinode-587000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-587000 -n multinode-587000: exit status 7 (30.768708ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-587000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/ValidateNameConflict (21.83s)
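
The from-scratch creates in this test fail with the same refusal as the restarts above, which points at the daemon rather than any per-profile VM state. A quick way to confirm, assuming socket_vmnet_client simply connects to the socket and then execs the wrapped command (which is how the qemu invocations in these logs use it), is to wrap a trivial command instead of qemu:

	# Hypothetical smoke test: the exit status and stderr should mirror the
	# "Failed to connect ... Connection refused" lines in these logs.
	/opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet /usr/bin/true
	echo "socket_vmnet_client exit status: $?"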

TestPreload (10.41s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-darwin-arm64 start -p test-preload-880000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4
preload_test.go:44: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p test-preload-880000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4: exit status 80 (10.25984375s)

-- stdout --
	* [test-preload-880000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19423
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19423-17178/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19423-17178/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "test-preload-880000" primary control-plane node in "test-preload-880000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "test-preload-880000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0819 11:42:10.254187   19030 out.go:345] Setting OutFile to fd 1 ...
	I0819 11:42:10.254312   19030 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 11:42:10.254315   19030 out.go:358] Setting ErrFile to fd 2...
	I0819 11:42:10.254317   19030 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 11:42:10.254430   19030 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19423-17178/.minikube/bin
	I0819 11:42:10.255453   19030 out.go:352] Setting JSON to false
	I0819 11:42:10.271684   19030 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":7897,"bootTime":1724085033,"procs":506,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0819 11:42:10.271756   19030 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0819 11:42:10.278786   19030 out.go:177] * [test-preload-880000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0819 11:42:10.285800   19030 out.go:177]   - MINIKUBE_LOCATION=19423
	I0819 11:42:10.285832   19030 notify.go:220] Checking for updates...
	I0819 11:42:10.292790   19030 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19423-17178/kubeconfig
	I0819 11:42:10.295649   19030 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0819 11:42:10.298772   19030 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0819 11:42:10.301791   19030 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19423-17178/.minikube
	I0819 11:42:10.304671   19030 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0819 11:42:10.308081   19030 config.go:182] Loaded profile config "multinode-587000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0819 11:42:10.308131   19030 driver.go:394] Setting default libvirt URI to qemu:///system
	I0819 11:42:10.312731   19030 out.go:177] * Using the qemu2 driver based on user configuration
	I0819 11:42:10.319796   19030 start.go:297] selected driver: qemu2
	I0819 11:42:10.319806   19030 start.go:901] validating driver "qemu2" against <nil>
	I0819 11:42:10.319812   19030 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0819 11:42:10.322225   19030 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0819 11:42:10.326761   19030 out.go:177] * Automatically selected the socket_vmnet network
	I0819 11:42:10.329801   19030 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0819 11:42:10.329845   19030 cni.go:84] Creating CNI manager for ""
	I0819 11:42:10.329854   19030 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0819 11:42:10.329859   19030 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0819 11:42:10.329890   19030 start.go:340] cluster config:
	{Name:test-preload-880000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-880000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 11:42:10.333756   19030 iso.go:125] acquiring lock: {Name:mk19d2f9dcacbbe3e95275f970a96b2d84f09461 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 11:42:10.340709   19030 out.go:177] * Starting "test-preload-880000" primary control-plane node in "test-preload-880000" cluster
	I0819 11:42:10.344709   19030 preload.go:131] Checking if preload exists for k8s version v1.24.4 and runtime docker
	I0819 11:42:10.344804   19030 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19423-17178/.minikube/profiles/test-preload-880000/config.json ...
	I0819 11:42:10.344802   19030 cache.go:107] acquiring lock: {Name:mk431ccdb49bd0ebf21fd0eeca08dfa0c11b0f0f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 11:42:10.344804   19030 cache.go:107] acquiring lock: {Name:mk0cf812d02e6060697a7d6c730952151e99c192 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 11:42:10.344819   19030 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19423-17178/.minikube/profiles/test-preload-880000/config.json: {Name:mk3c00e48fe33f93dceefa43536e608ab0806649 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 11:42:10.344811   19030 cache.go:107] acquiring lock: {Name:mkc85eab632129f3565823971eed5ac5296152e0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 11:42:10.345005   19030 cache.go:107] acquiring lock: {Name:mk6191b9503ea9b123429e75c5952929b87be3e3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 11:42:10.345014   19030 cache.go:107] acquiring lock: {Name:mk740ddb2dbe56e73177dbf2acb794b9c9f19ad8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 11:42:10.345052   19030 cache.go:107] acquiring lock: {Name:mk86d0e6f21f510940a3d2de9601a774cca47c67 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 11:42:10.345082   19030 cache.go:107] acquiring lock: {Name:mkb37346f2a7c7ad08f831c90175124e6e740e02 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 11:42:10.345181   19030 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.24.4
	I0819 11:42:10.345182   19030 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.24.4
	I0819 11:42:10.345253   19030 image.go:135] retrieving image: registry.k8s.io/pause:3.7
	I0819 11:42:10.345331   19030 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.24.4
	I0819 11:42:10.345351   19030 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0819 11:42:10.345355   19030 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0819 11:42:10.345051   19030 cache.go:107] acquiring lock: {Name:mk8b7f844b25a55ee7c79736add5ba072d86dd39 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 11:42:10.345457   19030 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0819 11:42:10.345369   19030 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0819 11:42:10.345363   19030 start.go:360] acquireMachinesLock for test-preload-880000: {Name:mk51682daf9d132f21e1aba1b32ed96e7f05425f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 11:42:10.345532   19030 start.go:364] duration metric: took 33.5µs to acquireMachinesLock for "test-preload-880000"
	I0819 11:42:10.345548   19030 start.go:93] Provisioning new machine with config: &{Name:test-preload-880000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-880000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0819 11:42:10.345603   19030 start.go:125] createHost starting for "" (driver="qemu2")
	I0819 11:42:10.353752   19030 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0819 11:42:10.358390   19030 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.4
	I0819 11:42:10.358406   19030 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0819 11:42:10.358519   19030 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.4
	I0819 11:42:10.359164   19030 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0819 11:42:10.359289   19030 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0819 11:42:10.359696   19030 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.4
	I0819 11:42:10.360240   19030 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0819 11:42:10.360568   19030 image.go:178] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0819 11:42:10.372357   19030 start.go:159] libmachine.API.Create for "test-preload-880000" (driver="qemu2")
	I0819 11:42:10.372381   19030 client.go:168] LocalClient.Create starting
	I0819 11:42:10.372532   19030 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19423-17178/.minikube/certs/ca.pem
	I0819 11:42:10.372569   19030 main.go:141] libmachine: Decoding PEM data...
	I0819 11:42:10.372577   19030 main.go:141] libmachine: Parsing certificate...
	I0819 11:42:10.372618   19030 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19423-17178/.minikube/certs/cert.pem
	I0819 11:42:10.372650   19030 main.go:141] libmachine: Decoding PEM data...
	I0819 11:42:10.372659   19030 main.go:141] libmachine: Parsing certificate...
	I0819 11:42:10.373127   19030 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19423-17178/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19423-17178/.minikube/cache/iso/arm64/minikube-v1.33.1-1723740674-19452-arm64.iso...
	I0819 11:42:10.711804   19030 main.go:141] libmachine: Creating SSH key...
	I0819 11:42:10.735112   19030 cache.go:162] opening:  /Users/jenkins/minikube-integration/19423-17178/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4
	I0819 11:42:10.743691   19030 cache.go:162] opening:  /Users/jenkins/minikube-integration/19423-17178/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I0819 11:42:10.754954   19030 cache.go:162] opening:  /Users/jenkins/minikube-integration/19423-17178/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4
	W0819 11:42:10.775204   19030 image.go:283] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0819 11:42:10.775225   19030 cache.go:162] opening:  /Users/jenkins/minikube-integration/19423-17178/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0819 11:42:10.788242   19030 cache.go:162] opening:  /Users/jenkins/minikube-integration/19423-17178/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4
	I0819 11:42:10.884304   19030 cache.go:162] opening:  /Users/jenkins/minikube-integration/19423-17178/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4
	I0819 11:42:10.893674   19030 main.go:141] libmachine: Creating Disk image...
	I0819 11:42:10.893680   19030 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0819 11:42:10.893872   19030 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/test-preload-880000/disk.qcow2.raw /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/test-preload-880000/disk.qcow2
	I0819 11:42:10.906368   19030 cache.go:162] opening:  /Users/jenkins/minikube-integration/19423-17178/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0819 11:42:10.907738   19030 main.go:141] libmachine: STDOUT: 
	I0819 11:42:10.907745   19030 main.go:141] libmachine: STDERR: 
	I0819 11:42:10.907783   19030 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/test-preload-880000/disk.qcow2 +20000M
	I0819 11:42:10.916139   19030 main.go:141] libmachine: STDOUT: Image resized.
	
	I0819 11:42:10.916153   19030 main.go:141] libmachine: STDERR: 
	I0819 11:42:10.916164   19030 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/test-preload-880000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/test-preload-880000/disk.qcow2
	I0819 11:42:10.916167   19030 main.go:141] libmachine: Starting QEMU VM...
	I0819 11:42:10.916185   19030 qemu.go:418] Using hvf for hardware acceleration
	I0819 11:42:10.916211   19030 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/test-preload-880000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19423-17178/.minikube/machines/test-preload-880000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/test-preload-880000/qemu.pid -device virtio-net-pci,netdev=net0,mac=06:43:25:62:60:b1 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/test-preload-880000/disk.qcow2
	I0819 11:42:10.917942   19030 main.go:141] libmachine: STDOUT: 
	I0819 11:42:10.917960   19030 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0819 11:42:10.917978   19030 client.go:171] duration metric: took 545.596292ms to LocalClient.Create
	I0819 11:42:11.057668   19030 cache.go:157] /Users/jenkins/minikube-integration/19423-17178/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 exists
	I0819 11:42:11.057687   19030 cache.go:96] cache image "registry.k8s.io/pause:3.7" -> "/Users/jenkins/minikube-integration/19423-17178/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7" took 712.72675ms
	I0819 11:42:11.057706   19030 cache.go:80] save to tar file registry.k8s.io/pause:3.7 -> /Users/jenkins/minikube-integration/19423-17178/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 succeeded
	W0819 11:42:11.192467   19030 image.go:283] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0819 11:42:11.192586   19030 cache.go:162] opening:  /Users/jenkins/minikube-integration/19423-17178/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0819 11:42:11.458224   19030 cache.go:157] /Users/jenkins/minikube-integration/19423-17178/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0819 11:42:11.458273   19030 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/19423-17178/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 1.11347575s
	I0819 11:42:11.458300   19030 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/19423-17178/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0819 11:42:12.918210   19030 start.go:128] duration metric: took 2.572586042s to createHost
	I0819 11:42:12.918262   19030 start.go:83] releasing machines lock for "test-preload-880000", held for 2.572732292s
	W0819 11:42:12.918359   19030 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0819 11:42:12.936509   19030 out.go:177] * Deleting "test-preload-880000" in qemu2 ...
	W0819 11:42:12.969631   19030 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0819 11:42:12.969658   19030 start.go:729] Will try again in 5 seconds ...
	I0819 11:42:13.236858   19030 cache.go:157] /Users/jenkins/minikube-integration/19423-17178/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 exists
	I0819 11:42:13.236929   19030 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.8.6" -> "/Users/jenkins/minikube-integration/19423-17178/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6" took 2.891985834s
	I0819 11:42:13.236976   19030 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.8.6 -> /Users/jenkins/minikube-integration/19423-17178/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 succeeded
	I0819 11:42:13.455864   19030 cache.go:157] /Users/jenkins/minikube-integration/19423-17178/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4 exists
	I0819 11:42:13.455920   19030 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.24.4" -> "/Users/jenkins/minikube-integration/19423-17178/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4" took 3.110849292s
	I0819 11:42:13.455949   19030 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.24.4 -> /Users/jenkins/minikube-integration/19423-17178/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4 succeeded
	I0819 11:42:15.807815   19030 cache.go:157] /Users/jenkins/minikube-integration/19423-17178/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4 exists
	I0819 11:42:15.807888   19030 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.24.4" -> "/Users/jenkins/minikube-integration/19423-17178/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4" took 5.463111042s
	I0819 11:42:15.807917   19030 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.24.4 -> /Users/jenkins/minikube-integration/19423-17178/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4 succeeded
	I0819 11:42:16.117335   19030 cache.go:157] /Users/jenkins/minikube-integration/19423-17178/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4 exists
	I0819 11:42:16.117382   19030 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.24.4" -> "/Users/jenkins/minikube-integration/19423-17178/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4" took 5.772414s
	I0819 11:42:16.117414   19030 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.24.4 -> /Users/jenkins/minikube-integration/19423-17178/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4 succeeded
	I0819 11:42:16.362391   19030 cache.go:157] /Users/jenkins/minikube-integration/19423-17178/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4 exists
	I0819 11:42:16.362442   19030 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.24.4" -> "/Users/jenkins/minikube-integration/19423-17178/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4" took 6.017654042s
	I0819 11:42:16.362469   19030 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.24.4 -> /Users/jenkins/minikube-integration/19423-17178/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4 succeeded
	I0819 11:42:17.969862   19030 start.go:360] acquireMachinesLock for test-preload-880000: {Name:mk51682daf9d132f21e1aba1b32ed96e7f05425f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 11:42:17.970329   19030 start.go:364] duration metric: took 364.417µs to acquireMachinesLock for "test-preload-880000"
	I0819 11:42:17.970455   19030 start.go:93] Provisioning new machine with config: &{Name:test-preload-880000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-880000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0819 11:42:17.970700   19030 start.go:125] createHost starting for "" (driver="qemu2")
	I0819 11:42:17.981359   19030 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0819 11:42:18.033208   19030 start.go:159] libmachine.API.Create for "test-preload-880000" (driver="qemu2")
	I0819 11:42:18.033263   19030 client.go:168] LocalClient.Create starting
	I0819 11:42:18.033397   19030 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19423-17178/.minikube/certs/ca.pem
	I0819 11:42:18.033463   19030 main.go:141] libmachine: Decoding PEM data...
	I0819 11:42:18.033487   19030 main.go:141] libmachine: Parsing certificate...
	I0819 11:42:18.033552   19030 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19423-17178/.minikube/certs/cert.pem
	I0819 11:42:18.033598   19030 main.go:141] libmachine: Decoding PEM data...
	I0819 11:42:18.033615   19030 main.go:141] libmachine: Parsing certificate...
	I0819 11:42:18.034145   19030 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19423-17178/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19423-17178/.minikube/cache/iso/arm64/minikube-v1.33.1-1723740674-19452-arm64.iso...
	I0819 11:42:18.365727   19030 main.go:141] libmachine: Creating SSH key...
	I0819 11:42:18.421033   19030 main.go:141] libmachine: Creating Disk image...
	I0819 11:42:18.421038   19030 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0819 11:42:18.421227   19030 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/test-preload-880000/disk.qcow2.raw /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/test-preload-880000/disk.qcow2
	I0819 11:42:18.430717   19030 main.go:141] libmachine: STDOUT: 
	I0819 11:42:18.430742   19030 main.go:141] libmachine: STDERR: 
	I0819 11:42:18.430799   19030 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/test-preload-880000/disk.qcow2 +20000M
	I0819 11:42:18.438689   19030 main.go:141] libmachine: STDOUT: Image resized.
	
	I0819 11:42:18.438705   19030 main.go:141] libmachine: STDERR: 
	I0819 11:42:18.438725   19030 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/test-preload-880000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/test-preload-880000/disk.qcow2
	I0819 11:42:18.438730   19030 main.go:141] libmachine: Starting QEMU VM...
	I0819 11:42:18.438743   19030 qemu.go:418] Using hvf for hardware acceleration
	I0819 11:42:18.438775   19030 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/test-preload-880000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19423-17178/.minikube/machines/test-preload-880000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/test-preload-880000/qemu.pid -device virtio-net-pci,netdev=net0,mac=5e:7b:f8:2f:07:99 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/test-preload-880000/disk.qcow2
	I0819 11:42:18.440400   19030 main.go:141] libmachine: STDOUT: 
	I0819 11:42:18.440417   19030 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0819 11:42:18.440434   19030 client.go:171] duration metric: took 407.168041ms to LocalClient.Create
	I0819 11:42:19.403056   19030 cache.go:157] /Users/jenkins/minikube-integration/19423-17178/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 exists
	I0819 11:42:19.403121   19030 cache.go:96] cache image "registry.k8s.io/etcd:3.5.3-0" -> "/Users/jenkins/minikube-integration/19423-17178/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0" took 9.058197667s
	I0819 11:42:19.403147   19030 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.3-0 -> /Users/jenkins/minikube-integration/19423-17178/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 succeeded
	I0819 11:42:19.403205   19030 cache.go:87] Successfully saved all images to host disk.
	I0819 11:42:20.442611   19030 start.go:128] duration metric: took 2.471862208s to createHost
	I0819 11:42:20.442660   19030 start.go:83] releasing machines lock for "test-preload-880000", held for 2.472313792s
	W0819 11:42:20.443038   19030 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p test-preload-880000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p test-preload-880000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0819 11:42:20.454521   19030 out.go:201] 
	W0819 11:42:20.458588   19030 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0819 11:42:20.458639   19030 out.go:270] * 
	* 
	W0819 11:42:20.461112   19030 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0819 11:42:20.469588   19030 out.go:201] 

** /stderr **
preload_test.go:46: out/minikube-darwin-arm64 start -p test-preload-880000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4 failed: exit status 80
panic.go:626: *** TestPreload FAILED at 2024-08-19 11:42:20.488015 -0700 PDT m=+664.673107001
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p test-preload-880000 -n test-preload-880000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p test-preload-880000 -n test-preload-880000: exit status 7 (66.506708ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "test-preload-880000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "test-preload-880000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p test-preload-880000
--- FAIL: TestPreload (10.41s)
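
Note: every failed start above bottoms out in the same stderr line, 'Failed to connect to "/var/run/socket_vmnet": Connection refused', meaning the socket_vmnet daemon was not listening when qemu-system-aarch64 was handed off to socket_vmnet_client. A minimal diagnostic sketch follows; it assumes socket_vmnet was installed via Homebrew (the Homebrew install and service name are assumptions, not facts taken from this log):

    # Does the daemon's unix socket exist? (Path taken from the log above.)
    ls -l /var/run/socket_vmnet

    # Assumption: socket_vmnet was installed via Homebrew. It must run as
    # root to use the macOS vmnet framework, hence sudo.
    sudo brew services list | grep socket_vmnet
    sudo brew services start socket_vmnet

    # Re-check the socket, then re-run the failing test.
    ls -l /var/run/socket_vmnet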

TestScheduledStopUnix (10.22s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-darwin-arm64 start -p scheduled-stop-712000 --memory=2048 --driver=qemu2 
scheduled_stop_test.go:128: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p scheduled-stop-712000 --memory=2048 --driver=qemu2 : exit status 80 (10.069462709s)

-- stdout --
	* [scheduled-stop-712000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19423
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19423-17178/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19423-17178/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "scheduled-stop-712000" primary control-plane node in "scheduled-stop-712000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "scheduled-stop-712000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p scheduled-stop-712000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
scheduled_stop_test.go:130: starting minikube: exit status 80

-- stdout --
	* [scheduled-stop-712000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19423
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19423-17178/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19423-17178/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "scheduled-stop-712000" primary control-plane node in "scheduled-stop-712000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "scheduled-stop-712000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p scheduled-stop-712000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
panic.go:626: *** TestScheduledStopUnix FAILED at 2024-08-19 11:42:30.703855 -0700 PDT m=+674.888994835
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p scheduled-stop-712000 -n scheduled-stop-712000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p scheduled-stop-712000 -n scheduled-stop-712000: exit status 7 (70.114125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "scheduled-stop-712000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "scheduled-stop-712000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p scheduled-stop-712000
--- FAIL: TestScheduledStopUnix (10.22s)
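
If socket_vmnet cannot be brought up on the agent, one possible workaround for reproducing a profile locally is the qemu2 driver's builtin (user-mode) network, which avoids /var/run/socket_vmnet entirely. This is a sketch only: the --network=builtin flag is an assumption about the driver's options, and user-mode networking trades away host-reachable node IPs, so tunnel/service tests would still fail:

    # Sketch: re-create the same profile without socket_vmnet.
    out/minikube-darwin-arm64 delete -p scheduled-stop-712000
    out/minikube-darwin-arm64 start -p scheduled-stop-712000 --memory=2048 \
        --driver=qemu2 --network=builtin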

TestSkaffold (13.4s)

=== RUN   TestSkaffold
skaffold_test.go:59: (dbg) Run:  /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/skaffold.exe2864837818 version
skaffold_test.go:59: (dbg) Done: /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/skaffold.exe2864837818 version: (1.059998625s)
skaffold_test.go:63: skaffold version: v2.13.1
skaffold_test.go:66: (dbg) Run:  out/minikube-darwin-arm64 start -p skaffold-170000 --memory=2600 --driver=qemu2 
skaffold_test.go:66: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p skaffold-170000 --memory=2600 --driver=qemu2 : exit status 80 (10.003330458s)

-- stdout --
	* [skaffold-170000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19423
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19423-17178/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19423-17178/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "skaffold-170000" primary control-plane node in "skaffold-170000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "skaffold-170000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p skaffold-170000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
skaffold_test.go:68: starting minikube: exit status 80

-- stdout --
	* [skaffold-170000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19423
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19423-17178/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19423-17178/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "skaffold-170000" primary control-plane node in "skaffold-170000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "skaffold-170000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p skaffold-170000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
panic.go:626: *** TestSkaffold FAILED at 2024-08-19 11:42:44.110654 -0700 PDT m=+688.295857710
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p skaffold-170000 -n skaffold-170000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p skaffold-170000 -n skaffold-170000: exit status 7 (64.76925ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "skaffold-170000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "skaffold-170000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p skaffold-170000
--- FAIL: TestSkaffold (13.40s)
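
Worth noting: in each attempt the disk-preparation steps succeed (qemu-img convert and qemu-img resize both return empty STDERR); the failure occurs only at the socket_vmnet_client hand-off. A quick sketch to confirm a created image is intact, using the path from the TestPreload log above (only meaningful before the profile is deleted during cleanup):

    # Inspect the qcow2 image libmachine created; qemu-img info reports
    # the format, virtual size, and any detected corruption.
    qemu-img info /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/test-preload-880000/disk.qcow2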

TestRunningBinaryUpgrade (613.08s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.26.0.1607737064 start -p running-upgrade-409000 --memory=2200 --vm-driver=qemu2 
version_upgrade_test.go:120: (dbg) Done: /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.26.0.1607737064 start -p running-upgrade-409000 --memory=2200 --vm-driver=qemu2 : (1m6.256854958s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-darwin-arm64 start -p running-upgrade-409000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:130: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p running-upgrade-409000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (8m33.492271083s)

-- stdout --
	* [running-upgrade-409000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19423
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19423-17178/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19423-17178/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.31.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.0
	* Using the qemu2 driver based on existing profile
	* Starting "running-upgrade-409000" primary control-plane node in "running-upgrade-409000" cluster
	* Updating the running qemu2 "running-upgrade-409000" VM ...
	* Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: storage-provisioner
	
	

-- /stdout --
** stderr ** 
	I0819 11:44:33.646882   19417 out.go:345] Setting OutFile to fd 1 ...
	I0819 11:44:33.646993   19417 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 11:44:33.646995   19417 out.go:358] Setting ErrFile to fd 2...
	I0819 11:44:33.646998   19417 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 11:44:33.647122   19417 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19423-17178/.minikube/bin
	I0819 11:44:33.648240   19417 out.go:352] Setting JSON to false
	I0819 11:44:33.665511   19417 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":8040,"bootTime":1724085033,"procs":509,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0819 11:44:33.665588   19417 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0819 11:44:33.670201   19417 out.go:177] * [running-upgrade-409000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0819 11:44:33.678107   19417 out.go:177]   - MINIKUBE_LOCATION=19423
	I0819 11:44:33.678175   19417 notify.go:220] Checking for updates...
	I0819 11:44:33.686116   19417 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19423-17178/kubeconfig
	I0819 11:44:33.690036   19417 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0819 11:44:33.693083   19417 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0819 11:44:33.696085   19417 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19423-17178/.minikube
	I0819 11:44:33.699037   19417 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0819 11:44:33.700594   19417 config.go:182] Loaded profile config "running-upgrade-409000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0819 11:44:33.704080   19417 out.go:177] * Kubernetes 1.31.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.0
	I0819 11:44:33.707065   19417 driver.go:394] Setting default libvirt URI to qemu:///system
	I0819 11:44:33.710914   19417 out.go:177] * Using the qemu2 driver based on existing profile
	I0819 11:44:33.718084   19417 start.go:297] selected driver: qemu2
	I0819 11:44:33.718089   19417 start.go:901] validating driver "qemu2" against &{Name:running-upgrade-409000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:53137 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-409000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0819 11:44:33.718137   19417 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0819 11:44:33.720676   19417 cni.go:84] Creating CNI manager for ""
	I0819 11:44:33.720696   19417 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0819 11:44:33.720720   19417 start.go:340] cluster config:
	{Name:running-upgrade-409000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:53137 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-409000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0819 11:44:33.720769   19417 iso.go:125] acquiring lock: {Name:mk19d2f9dcacbbe3e95275f970a96b2d84f09461 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 11:44:33.728015   19417 out.go:177] * Starting "running-upgrade-409000" primary control-plane node in "running-upgrade-409000" cluster
	I0819 11:44:33.732003   19417 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0819 11:44:33.732025   19417 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19423-17178/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4
	I0819 11:44:33.732032   19417 cache.go:56] Caching tarball of preloaded images
	I0819 11:44:33.732098   19417 preload.go:172] Found /Users/jenkins/minikube-integration/19423-17178/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0819 11:44:33.732103   19417 cache.go:59] Finished verifying existence of preloaded tar for v1.24.1 on docker
	I0819 11:44:33.732156   19417 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19423-17178/.minikube/profiles/running-upgrade-409000/config.json ...
	I0819 11:44:33.732583   19417 start.go:360] acquireMachinesLock for running-upgrade-409000: {Name:mk51682daf9d132f21e1aba1b32ed96e7f05425f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 11:44:33.732618   19417 start.go:364] duration metric: took 28.792µs to acquireMachinesLock for "running-upgrade-409000"
	I0819 11:44:33.732628   19417 start.go:96] Skipping create...Using existing machine configuration
	I0819 11:44:33.732632   19417 fix.go:54] fixHost starting: 
	I0819 11:44:33.733237   19417 fix.go:112] recreateIfNeeded on running-upgrade-409000: state=Running err=<nil>
	W0819 11:44:33.733245   19417 fix.go:138] unexpected machine state, will restart: <nil>
	I0819 11:44:33.741072   19417 out.go:177] * Updating the running qemu2 "running-upgrade-409000" VM ...
	I0819 11:44:33.744960   19417 machine.go:93] provisionDockerMachine start ...
	I0819 11:44:33.744992   19417 main.go:141] libmachine: Using SSH client type: native
	I0819 11:44:33.745089   19417 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1007185a0] 0x10071ae00 <nil>  [] 0s} localhost 53105 <nil> <nil>}
	I0819 11:44:33.745093   19417 main.go:141] libmachine: About to run SSH command:
	hostname
	I0819 11:44:33.794343   19417 main.go:141] libmachine: SSH cmd err, output: <nil>: running-upgrade-409000
	
	I0819 11:44:33.794357   19417 buildroot.go:166] provisioning hostname "running-upgrade-409000"
	I0819 11:44:33.794399   19417 main.go:141] libmachine: Using SSH client type: native
	I0819 11:44:33.794507   19417 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1007185a0] 0x10071ae00 <nil>  [] 0s} localhost 53105 <nil> <nil>}
	I0819 11:44:33.794513   19417 main.go:141] libmachine: About to run SSH command:
	sudo hostname running-upgrade-409000 && echo "running-upgrade-409000" | sudo tee /etc/hostname
	I0819 11:44:33.849170   19417 main.go:141] libmachine: SSH cmd err, output: <nil>: running-upgrade-409000
	
	I0819 11:44:33.849217   19417 main.go:141] libmachine: Using SSH client type: native
	I0819 11:44:33.849335   19417 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1007185a0] 0x10071ae00 <nil>  [] 0s} localhost 53105 <nil> <nil>}
	I0819 11:44:33.849344   19417 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\srunning-upgrade-409000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 running-upgrade-409000/g' /etc/hosts;
				else 
					echo '127.0.1.1 running-upgrade-409000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0819 11:44:33.898225   19417 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0819 11:44:33.898239   19417 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19423-17178/.minikube CaCertPath:/Users/jenkins/minikube-integration/19423-17178/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19423-17178/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19423-17178/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19423-17178/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19423-17178/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19423-17178/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19423-17178/.minikube}
	I0819 11:44:33.898248   19417 buildroot.go:174] setting up certificates
	I0819 11:44:33.898253   19417 provision.go:84] configureAuth start
	I0819 11:44:33.898259   19417 provision.go:143] copyHostCerts
	I0819 11:44:33.898332   19417 exec_runner.go:144] found /Users/jenkins/minikube-integration/19423-17178/.minikube/ca.pem, removing ...
	I0819 11:44:33.898337   19417 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19423-17178/.minikube/ca.pem
	I0819 11:44:33.898455   19417 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19423-17178/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19423-17178/.minikube/ca.pem (1082 bytes)
	I0819 11:44:33.898634   19417 exec_runner.go:144] found /Users/jenkins/minikube-integration/19423-17178/.minikube/cert.pem, removing ...
	I0819 11:44:33.898637   19417 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19423-17178/.minikube/cert.pem
	I0819 11:44:33.898722   19417 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19423-17178/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19423-17178/.minikube/cert.pem (1123 bytes)
	I0819 11:44:33.898839   19417 exec_runner.go:144] found /Users/jenkins/minikube-integration/19423-17178/.minikube/key.pem, removing ...
	I0819 11:44:33.898842   19417 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19423-17178/.minikube/key.pem
	I0819 11:44:33.898892   19417 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19423-17178/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19423-17178/.minikube/key.pem (1679 bytes)
	I0819 11:44:33.898977   19417 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19423-17178/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19423-17178/.minikube/certs/ca-key.pem org=jenkins.running-upgrade-409000 san=[127.0.0.1 localhost minikube running-upgrade-409000]
	I0819 11:44:33.971594   19417 provision.go:177] copyRemoteCerts
	I0819 11:44:33.971621   19417 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0819 11:44:33.971628   19417 sshutil.go:53] new ssh client: &{IP:localhost Port:53105 SSHKeyPath:/Users/jenkins/minikube-integration/19423-17178/.minikube/machines/running-upgrade-409000/id_rsa Username:docker}
	I0819 11:44:34.001147   19417 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0819 11:44:34.008607   19417 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-17178/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0819 11:44:34.015456   19417 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0819 11:44:34.022002   19417 provision.go:87] duration metric: took 123.743959ms to configureAuth
	I0819 11:44:34.022016   19417 buildroot.go:189] setting minikube options for container-runtime
	I0819 11:44:34.022115   19417 config.go:182] Loaded profile config "running-upgrade-409000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0819 11:44:34.022145   19417 main.go:141] libmachine: Using SSH client type: native
	I0819 11:44:34.022228   19417 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1007185a0] 0x10071ae00 <nil>  [] 0s} localhost 53105 <nil> <nil>}
	I0819 11:44:34.022235   19417 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0819 11:44:34.073727   19417 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0819 11:44:34.073739   19417 buildroot.go:70] root file system type: tmpfs
	I0819 11:44:34.073786   19417 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0819 11:44:34.073856   19417 main.go:141] libmachine: Using SSH client type: native
	I0819 11:44:34.073978   19417 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1007185a0] 0x10071ae00 <nil>  [] 0s} localhost 53105 <nil> <nil>}
	I0819 11:44:34.074010   19417 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this option.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0819 11:44:34.128822   19417 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this option.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0819 11:44:34.128875   19417 main.go:141] libmachine: Using SSH client type: native
	I0819 11:44:34.128983   19417 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1007185a0] 0x10071ae00 <nil>  [] 0s} localhost 53105 <nil> <nil>}
	I0819 11:44:34.128992   19417 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0819 11:44:34.181406   19417 main.go:141] libmachine: SSH cmd err, output: <nil>: 
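	(The command above is an idempotent-update idiom: diff exits zero only when the rendered unit is byte-identical to the installed one, so the move/daemon-reload/restart branch fires only when the file actually changed or does not yet exist. The same pattern generalized, with placeholder paths and unit name:)
	    # Replace $dst with $src and restart $unit only when the content differs.
	    sudo diff -u "$dst" "$src" || {
	      sudo mv "$src" "$dst"
	      sudo systemctl daemon-reload && sudo systemctl restart "$unit"
	    }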
	I0819 11:44:34.181415   19417 machine.go:96] duration metric: took 436.45175ms to provisionDockerMachine
	I0819 11:44:34.181420   19417 start.go:293] postStartSetup for "running-upgrade-409000" (driver="qemu2")
	I0819 11:44:34.181426   19417 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0819 11:44:34.181471   19417 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0819 11:44:34.181482   19417 sshutil.go:53] new ssh client: &{IP:localhost Port:53105 SSHKeyPath:/Users/jenkins/minikube-integration/19423-17178/.minikube/machines/running-upgrade-409000/id_rsa Username:docker}
	I0819 11:44:34.210280   19417 ssh_runner.go:195] Run: cat /etc/os-release
	I0819 11:44:34.211552   19417 info.go:137] Remote host: Buildroot 2021.02.12
	I0819 11:44:34.211558   19417 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19423-17178/.minikube/addons for local assets ...
	I0819 11:44:34.211643   19417 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19423-17178/.minikube/files for local assets ...
	I0819 11:44:34.211757   19417 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19423-17178/.minikube/files/etc/ssl/certs/176542.pem -> 176542.pem in /etc/ssl/certs
	I0819 11:44:34.211893   19417 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0819 11:44:34.214367   19417 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-17178/.minikube/files/etc/ssl/certs/176542.pem --> /etc/ssl/certs/176542.pem (1708 bytes)
	I0819 11:44:34.221836   19417 start.go:296] duration metric: took 40.4115ms for postStartSetup
	I0819 11:44:34.221849   19417 fix.go:56] duration metric: took 489.219292ms for fixHost
	I0819 11:44:34.221884   19417 main.go:141] libmachine: Using SSH client type: native
	I0819 11:44:34.221984   19417 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1007185a0] 0x10071ae00 <nil>  [] 0s} localhost 53105 <nil> <nil>}
	I0819 11:44:34.221989   19417 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0819 11:44:34.275280   19417 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724093074.449490272
	
	I0819 11:44:34.275290   19417 fix.go:216] guest clock: 1724093074.449490272
	I0819 11:44:34.275294   19417 fix.go:229] Guest: 2024-08-19 11:44:34.449490272 -0700 PDT Remote: 2024-08-19 11:44:34.221853 -0700 PDT m=+0.594533126 (delta=227.637272ms)
	I0819 11:44:34.275305   19417 fix.go:200] guest clock delta is within tolerance: 227.637272ms
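	(The delta is simply guest time minus host time at the moment of the check:)
	    # Guest 1724093074.449490272 vs host 1724093074.221853, as logged above:
	    echo "1724093074.449490272 - 1724093074.221853" | bc   # 0.227637272 s = 227.637272ms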
	I0819 11:44:34.275308   19417 start.go:83] releasing machines lock for "running-upgrade-409000", held for 542.688ms
	I0819 11:44:34.275366   19417 ssh_runner.go:195] Run: cat /version.json
	I0819 11:44:34.275373   19417 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0819 11:44:34.275376   19417 sshutil.go:53] new ssh client: &{IP:localhost Port:53105 SSHKeyPath:/Users/jenkins/minikube-integration/19423-17178/.minikube/machines/running-upgrade-409000/id_rsa Username:docker}
	I0819 11:44:34.275387   19417 sshutil.go:53] new ssh client: &{IP:localhost Port:53105 SSHKeyPath:/Users/jenkins/minikube-integration/19423-17178/.minikube/machines/running-upgrade-409000/id_rsa Username:docker}
	W0819 11:44:34.276120   19417 sshutil.go:64] dial failure (will retry): ssh: handshake failed: write tcp 127.0.0.1:53219->127.0.0.1:53105: write: broken pipe
	I0819 11:44:34.276137   19417 retry.go:31] will retry after 243.338201ms: ssh: handshake failed: write tcp 127.0.0.1:53219->127.0.0.1:53105: write: broken pipe
	W0819 11:44:34.300972   19417 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0819 11:44:34.301021   19417 ssh_runner.go:195] Run: systemctl --version
	I0819 11:44:34.302810   19417 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0819 11:44:34.304456   19417 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0819 11:44:34.304480   19417 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0819 11:44:34.309811   19417 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0819 11:44:34.314119   19417 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
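	(The two find/sed passes above rewrite any bridge/podman CNI config so its pod network matches minikube's 10.244.0.0/16 pod CIDR. Illustratively, for the conflist that was found; the original values shown are typical podman defaults, not echoed in the log:)
	    # Effect on /etc/cni/net.d/87-podman-bridge.conflist (original values illustrative):
	    #   "subnet": "10.88.0.0/16"  ->  "subnet": "10.244.0.0/16"
	    #   "gateway": "10.88.0.1"    ->  "gateway": "10.244.0.1"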
	I0819 11:44:34.314127   19417 start.go:495] detecting cgroup driver to use...
	I0819 11:44:34.314219   19417 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0819 11:44:34.319923   19417 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.7"|' /etc/containerd/config.toml"
	I0819 11:44:34.323256   19417 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0819 11:44:34.326138   19417 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0819 11:44:34.326159   19417 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0819 11:44:34.328895   19417 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0819 11:44:34.331777   19417 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0819 11:44:34.334884   19417 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0819 11:44:34.337812   19417 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0819 11:44:34.340937   19417 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0819 11:44:34.344228   19417 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0819 11:44:34.348631   19417 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0819 11:44:34.352001   19417 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0819 11:44:34.354555   19417 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0819 11:44:34.357436   19417 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 11:44:34.431368   19417 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0819 11:44:34.438437   19417 start.go:495] detecting cgroup driver to use...
	I0819 11:44:34.438486   19417 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0819 11:44:34.448223   19417 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0819 11:44:34.455991   19417 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0819 11:44:34.466731   19417 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0819 11:44:34.471680   19417 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0819 11:44:34.476284   19417 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0819 11:44:34.481818   19417 ssh_runner.go:195] Run: which cri-dockerd
	I0819 11:44:34.483031   19417 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0819 11:44:34.485789   19417 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0819 11:44:34.490610   19417 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0819 11:44:34.572893   19417 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0819 11:44:34.647997   19417 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0819 11:44:34.648062   19417 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
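	(The 130-byte daemon.json pushed here is what switches dockerd to the cgroupfs driver; the exact payload is not echoed in the log, but the essential key is exec-opts. The shape below is an assumption, not the verbatim file:)
	    # Minimal /etc/docker/daemon.json for the cgroupfs cgroup driver (illustrative shape only):
	    printf '{"exec-opts": ["native.cgroupdriver=cgroupfs"]}' | sudo tee /etc/docker/daemon.json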
	I0819 11:44:34.655497   19417 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 11:44:34.729795   19417 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0819 11:44:47.949522   19417 ssh_runner.go:235] Completed: sudo systemctl restart docker: (13.219773834s)
	I0819 11:44:47.949589   19417 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0819 11:44:47.954235   19417 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0819 11:44:47.961740   19417 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0819 11:44:47.967921   19417 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0819 11:44:48.036259   19417 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0819 11:44:48.101788   19417 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 11:44:48.165038   19417 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0819 11:44:48.170508   19417 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0819 11:44:48.175256   19417 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 11:44:48.236468   19417 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0819 11:44:48.274889   19417 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0819 11:44:48.274976   19417 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0819 11:44:48.277290   19417 start.go:563] Will wait 60s for crictl version
	I0819 11:44:48.277333   19417 ssh_runner.go:195] Run: which crictl
	I0819 11:44:48.278659   19417 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0819 11:44:48.290840   19417 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.16
	RuntimeApiVersion:  1.41.0
	I0819 11:44:48.290912   19417 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0819 11:44:48.303517   19417 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0819 11:44:48.322011   19417 out.go:235] * Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	I0819 11:44:48.322080   19417 ssh_runner.go:195] Run: grep 10.0.2.2	host.minikube.internal$ /etc/hosts
	I0819 11:44:48.323428   19417 kubeadm.go:883] updating cluster {Name:running-upgrade-409000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:53137 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-409000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s} ...
	I0819 11:44:48.323480   19417 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0819 11:44:48.323513   19417 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0819 11:44:48.333674   19417 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0819 11:44:48.333683   19417 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0819 11:44:48.333734   19417 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0819 11:44:48.336712   19417 ssh_runner.go:195] Run: which lz4
	I0819 11:44:48.337978   19417 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0819 11:44:48.339213   19417 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0819 11:44:48.339223   19417 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-17178/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (359514331 bytes)
	I0819 11:44:49.309122   19417 docker.go:649] duration metric: took 971.181834ms to copy over tarball
	I0819 11:44:49.309177   19417 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0819 11:44:50.433709   19417 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.124523291s)
	I0819 11:44:50.433723   19417 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0819 11:44:50.449605   19417 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0819 11:44:50.453270   19417 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2523 bytes)
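	(The preload path logged above is: stat the tarball on the guest, copy it over when missing, untar straight into /var while restoring file capabilities, delete it, then rewrite repositories.json and restart docker so the daemon picks up the restored overlay2 store. Condensed, the guest-side portion is:)
	    # Guest-side preload restore (condensed from the logged commands):
	    sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	    rm /preloaded.tar.lz4
	    sudo systemctl restart docker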
	I0819 11:44:50.458662   19417 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 11:44:50.528823   19417 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0819 11:44:51.714349   19417 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.185513167s)
	I0819 11:44:51.714446   19417 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0819 11:44:51.725540   19417 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	<none>:<none>
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0819 11:44:51.725549   19417 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0819 11:44:51.725554   19417 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.1 registry.k8s.io/kube-controller-manager:v1.24.1 registry.k8s.io/kube-scheduler:v1.24.1 registry.k8s.io/kube-proxy:v1.24.1 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
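	(The mismatch driving this fallback is purely one of tag names: the preload ships k8s.gcr.io/* tags, while this minikube expects v1.24.1 images under registry.k8s.io/*, so every image "needs transfer". Retagging would also satisfy the check; shown only as an illustrative alternative, since minikube instead reloads each image from its per-image cache below:)
	    # Illustrative alternative to the cache reload: alias the preloaded tag.
	    docker tag k8s.gcr.io/kube-apiserver:v1.24.1 registry.k8s.io/kube-apiserver:v1.24.1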
	I0819 11:44:51.734390   19417 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.24.1
	I0819 11:44:51.735533   19417 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0819 11:44:51.736432   19417 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0819 11:44:51.736463   19417 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.1
	I0819 11:44:51.737786   19417 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0819 11:44:51.737870   19417 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0819 11:44:51.739095   19417 image.go:135] retrieving image: registry.k8s.io/pause:3.7
	I0819 11:44:51.739018   19417 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0819 11:44:51.739978   19417 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.24.1
	I0819 11:44:51.740068   19417 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0819 11:44:51.740939   19417 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.24.1
	I0819 11:44:51.741893   19417 image.go:178] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0819 11:44:51.741933   19417 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.1
	I0819 11:44:51.741948   19417 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0819 11:44:51.742663   19417 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.1
	I0819 11:44:51.743162   19417 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.1
	W0819 11:44:52.152667   19417 image.go:283] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0819 11:44:52.152816   19417 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0819 11:44:52.156598   19417 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.1
	I0819 11:44:52.164289   19417 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "6af7f860a8197bfa3fdb7dec2061aa33870253e87a1e91c492d55b8a4fd38d14" in container runtime
	I0819 11:44:52.164316   19417 docker.go:337] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0819 11:44:52.164370   19417 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0819 11:44:52.173715   19417 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I0819 11:44:52.176506   19417 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I0819 11:44:52.180062   19417 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.1" needs transfer: "registry.k8s.io/kube-proxy:v1.24.1" does not exist at hash "fcbd620bbac080b658602c597709b377cb2b3fec134a097a27f94cba9b2ed2fa" in container runtime
	I0819 11:44:52.180082   19417 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.24.1
	I0819 11:44:52.180135   19417 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.24.1
	I0819 11:44:52.181346   19417 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19423-17178/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0819 11:44:52.181454   19417 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I0819 11:44:52.201012   19417 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "a9a710bb96df080e6b9c720eb85dc5b832ff84abf77263548d74fedec6466a5a" in container runtime
	I0819 11:44:52.201033   19417 docker.go:337] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0819 11:44:52.201072   19417 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550" in container runtime
	I0819 11:44:52.201084   19417 docker.go:337] Removing image: registry.k8s.io/pause:3.7
	I0819 11:44:52.201088   19417 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.5.3-0
	I0819 11:44:52.201105   19417 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.7
	I0819 11:44:52.202488   19417 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19423-17178/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1
	I0819 11:44:52.202518   19417 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.8.6: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.8.6': No such file or directory
	I0819 11:44:52.202532   19417 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-17178/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 --> /var/lib/minikube/images/coredns_v1.8.6 (12318720 bytes)
	I0819 11:44:52.212518   19417 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.1
	I0819 11:44:52.219098   19417 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.1
	I0819 11:44:52.235011   19417 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19423-17178/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0819 11:44:52.235036   19417 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19423-17178/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I0819 11:44:52.235094   19417 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.1" does not exist at hash "7c5896a75862a8bc252122185a929cec1393db2c525f6440137d4fbf46bbf6f9" in container runtime
	I0819 11:44:52.235114   19417 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.24.1
	I0819 11:44:52.235148   19417 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0
	I0819 11:44:52.235151   19417 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.24.1
	I0819 11:44:52.235148   19417 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I0819 11:44:52.243922   19417 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.1
	I0819 11:44:52.253240   19417 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.1" does not exist at hash "000c19baf6bba51ff7ae5449f4c7a16d9190cef0263f58070dbf62cea9c4982f" in container runtime
	I0819 11:44:52.253267   19417 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.24.1
	I0819 11:44:52.253324   19417 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.24.1
	I0819 11:44:52.279755   19417 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.5.3-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.5.3-0': No such file or directory
	I0819 11:44:52.279776   19417 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19423-17178/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1
	I0819 11:44:52.279783   19417 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-17178/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 --> /var/lib/minikube/images/etcd_3.5.3-0 (81117184 bytes)
	I0819 11:44:52.279855   19417 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.7: stat -c "%s %y" /var/lib/minikube/images/pause_3.7: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.7': No such file or directory
	I0819 11:44:52.279863   19417 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-17178/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 --> /var/lib/minikube/images/pause_3.7 (268288 bytes)
	I0819 11:44:52.280534   19417 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.1" does not exist at hash "f61bbe9259d7caa580deb6c8e4bfd1780c7b5887efe3aa3adc7cc74f68a27c1b" in container runtime
	I0819 11:44:52.280551   19417 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0819 11:44:52.280592   19417 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.24.1
	I0819 11:44:52.293741   19417 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19423-17178/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1
	I0819 11:44:52.307345   19417 docker.go:304] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0819 11:44:52.307359   19417 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.8.6 | docker load"
	I0819 11:44:52.320963   19417 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19423-17178/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1
	I0819 11:44:52.395917   19417 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19423-17178/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I0819 11:44:52.395942   19417 docker.go:304] Loading image: /var/lib/minikube/images/pause_3.7
	I0819 11:44:52.395949   19417 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.7 | docker load"
	I0819 11:44:52.481024   19417 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19423-17178/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 from cache
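	(Each cached image is streamed into the daemon with `sudo cat <file> | docker load` rather than `docker load -i`, presumably because the files under /var/lib/minikube/images are root-owned while the docker client call itself runs unprivileged; the pattern is simply:)
	    # Load a root-owned image tarball into dockerd without changing the tarball's permissions.
	    sudo cat /var/lib/minikube/images/pause_3.7 | docker load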
	W0819 11:44:52.484308   19417 image.go:283] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0819 11:44:52.484428   19417 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0819 11:44:52.515109   19417 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I0819 11:44:52.515136   19417 docker.go:337] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0819 11:44:52.515188   19417 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0819 11:44:52.603115   19417 docker.go:304] Loading image: /var/lib/minikube/images/etcd_3.5.3-0
	I0819 11:44:52.603131   19417 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/etcd_3.5.3-0 | docker load"
	I0819 11:44:53.667435   19417 ssh_runner.go:235] Completed: /bin/bash -c "sudo cat /var/lib/minikube/images/etcd_3.5.3-0 | docker load": (1.064268708s)
	I0819 11:44:53.667498   19417 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19423-17178/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 from cache
	I0819 11:44:53.667617   19417 ssh_runner.go:235] Completed: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.152408625s)
	I0819 11:44:53.667634   19417 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19423-17178/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0819 11:44:53.668111   19417 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0819 11:44:53.673808   19417 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I0819 11:44:53.673887   19417 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-17178/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I0819 11:44:53.734164   19417 docker.go:304] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0819 11:44:53.734191   19417 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load"
	I0819 11:44:53.970437   19417 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19423-17178/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0819 11:44:53.970482   19417 cache_images.go:92] duration metric: took 2.244931708s to LoadCachedImages
	W0819 11:44:53.970515   19417 out.go:270] X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/19423-17178/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/19423-17178/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1: no such file or directory
	I0819 11:44:53.970520   19417 kubeadm.go:934] updating node { 10.0.2.15 8443 v1.24.1 docker true true} ...
	I0819 11:44:53.970588   19417 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=running-upgrade-409000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.0.2.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-409000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0819 11:44:53.970655   19417 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0819 11:44:53.984041   19417 cni.go:84] Creating CNI manager for ""
	I0819 11:44:53.984052   19417 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0819 11:44:53.984056   19417 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0819 11:44:53.984064   19417 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.0.2.15 APIServerPort:8443 KubernetesVersion:v1.24.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:running-upgrade-409000 NodeName:running-upgrade-409000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.0.2.15"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.0.2.15 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0819 11:44:53.984128   19417 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.0.2.15
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "running-upgrade-409000"
	  kubeletExtraArgs:
	    node-ip: 10.0.2.15
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.0.2.15"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0819 11:44:53.984192   19417 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.1
	I0819 11:44:53.987535   19417 binaries.go:44] Found k8s binaries, skipping transfer
	I0819 11:44:53.987569   19417 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0819 11:44:53.991065   19417 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I0819 11:44:53.996053   19417 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0819 11:44:54.000793   19417 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2096 bytes)
	I0819 11:44:54.006549   19417 ssh_runner.go:195] Run: grep 10.0.2.15	control-plane.minikube.internal$ /etc/hosts
	I0819 11:44:54.007965   19417 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 11:44:54.068728   19417 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0819 11:44:54.074477   19417 certs.go:68] Setting up /Users/jenkins/minikube-integration/19423-17178/.minikube/profiles/running-upgrade-409000 for IP: 10.0.2.15
	I0819 11:44:54.074484   19417 certs.go:194] generating shared ca certs ...
	I0819 11:44:54.074492   19417 certs.go:226] acquiring lock for ca certs: {Name:mk011f5d2dbb88087ec73da4d5406de1c263092b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 11:44:54.074725   19417 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19423-17178/.minikube/ca.key
	I0819 11:44:54.074773   19417 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19423-17178/.minikube/proxy-client-ca.key
	I0819 11:44:54.074777   19417 certs.go:256] generating profile certs ...
	I0819 11:44:54.074838   19417 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19423-17178/.minikube/profiles/running-upgrade-409000/client.key
	I0819 11:44:54.074850   19417 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19423-17178/.minikube/profiles/running-upgrade-409000/apiserver.key.b9f74b62
	I0819 11:44:54.074867   19417 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19423-17178/.minikube/profiles/running-upgrade-409000/apiserver.crt.b9f74b62 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.0.2.15]
	I0819 11:44:54.225799   19417 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19423-17178/.minikube/profiles/running-upgrade-409000/apiserver.crt.b9f74b62 ...
	I0819 11:44:54.225804   19417 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19423-17178/.minikube/profiles/running-upgrade-409000/apiserver.crt.b9f74b62: {Name:mkf2d8ca8ca797d7ffddc3d3b467074546161ea0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 11:44:54.226093   19417 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19423-17178/.minikube/profiles/running-upgrade-409000/apiserver.key.b9f74b62 ...
	I0819 11:44:54.226099   19417 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19423-17178/.minikube/profiles/running-upgrade-409000/apiserver.key.b9f74b62: {Name:mke19bc5d48e984f0a031e0fc3225cc048af3c53 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 11:44:54.226234   19417 certs.go:381] copying /Users/jenkins/minikube-integration/19423-17178/.minikube/profiles/running-upgrade-409000/apiserver.crt.b9f74b62 -> /Users/jenkins/minikube-integration/19423-17178/.minikube/profiles/running-upgrade-409000/apiserver.crt
	I0819 11:44:54.226435   19417 certs.go:385] copying /Users/jenkins/minikube-integration/19423-17178/.minikube/profiles/running-upgrade-409000/apiserver.key.b9f74b62 -> /Users/jenkins/minikube-integration/19423-17178/.minikube/profiles/running-upgrade-409000/apiserver.key
	I0819 11:44:54.226588   19417 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19423-17178/.minikube/profiles/running-upgrade-409000/proxy-client.key
	I0819 11:44:54.226725   19417 certs.go:484] found cert: /Users/jenkins/minikube-integration/19423-17178/.minikube/certs/17654.pem (1338 bytes)
	W0819 11:44:54.226755   19417 certs.go:480] ignoring /Users/jenkins/minikube-integration/19423-17178/.minikube/certs/17654_empty.pem, impossibly tiny 0 bytes
	I0819 11:44:54.226761   19417 certs.go:484] found cert: /Users/jenkins/minikube-integration/19423-17178/.minikube/certs/ca-key.pem (1675 bytes)
	I0819 11:44:54.226783   19417 certs.go:484] found cert: /Users/jenkins/minikube-integration/19423-17178/.minikube/certs/ca.pem (1082 bytes)
	I0819 11:44:54.226803   19417 certs.go:484] found cert: /Users/jenkins/minikube-integration/19423-17178/.minikube/certs/cert.pem (1123 bytes)
	I0819 11:44:54.226822   19417 certs.go:484] found cert: /Users/jenkins/minikube-integration/19423-17178/.minikube/certs/key.pem (1679 bytes)
	I0819 11:44:54.226864   19417 certs.go:484] found cert: /Users/jenkins/minikube-integration/19423-17178/.minikube/files/etc/ssl/certs/176542.pem (1708 bytes)
	I0819 11:44:54.227279   19417 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-17178/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0819 11:44:54.234824   19417 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-17178/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0819 11:44:54.241736   19417 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-17178/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0819 11:44:54.249276   19417 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-17178/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0819 11:44:54.256684   19417 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-17178/.minikube/profiles/running-upgrade-409000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0819 11:44:54.263204   19417 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-17178/.minikube/profiles/running-upgrade-409000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0819 11:44:54.269835   19417 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-17178/.minikube/profiles/running-upgrade-409000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0819 11:44:54.277179   19417 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-17178/.minikube/profiles/running-upgrade-409000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0819 11:44:54.284545   19417 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-17178/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0819 11:44:54.291316   19417 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-17178/.minikube/certs/17654.pem --> /usr/share/ca-certificates/17654.pem (1338 bytes)
	I0819 11:44:54.297890   19417 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-17178/.minikube/files/etc/ssl/certs/176542.pem --> /usr/share/ca-certificates/176542.pem (1708 bytes)
	I0819 11:44:54.305135   19417 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0819 11:44:54.310046   19417 ssh_runner.go:195] Run: openssl version
	I0819 11:44:54.311893   19417 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0819 11:44:54.314626   19417 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0819 11:44:54.316127   19417 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 19 18:44 /usr/share/ca-certificates/minikubeCA.pem
	I0819 11:44:54.316149   19417 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0819 11:44:54.317845   19417 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0819 11:44:54.320789   19417 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/17654.pem && ln -fs /usr/share/ca-certificates/17654.pem /etc/ssl/certs/17654.pem"
	I0819 11:44:54.323726   19417 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/17654.pem
	I0819 11:44:54.325156   19417 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 19 18:32 /usr/share/ca-certificates/17654.pem
	I0819 11:44:54.325174   19417 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/17654.pem
	I0819 11:44:54.326996   19417 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/17654.pem /etc/ssl/certs/51391683.0"
	I0819 11:44:54.329806   19417 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/176542.pem && ln -fs /usr/share/ca-certificates/176542.pem /etc/ssl/certs/176542.pem"
	I0819 11:44:54.333066   19417 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/176542.pem
	I0819 11:44:54.334451   19417 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 19 18:32 /usr/share/ca-certificates/176542.pem
	I0819 11:44:54.334470   19417 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/176542.pem
	I0819 11:44:54.336454   19417 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/176542.pem /etc/ssl/certs/3ec20f2e.0"
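	(The `openssl x509 -hash` value feeding each `ln -fs .../<hash>.0` above is the subject-name hash OpenSSL uses to look up CAs in a directory, so these symlinks hand-roll what `openssl rehash` or c_rehash would do. For one file:)
	    # Hand-rolled equivalent of `openssl rehash` for a single CA cert:
	    hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${hash}.0"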
	I0819 11:44:54.340397   19417 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0819 11:44:54.341939   19417 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0819 11:44:54.343773   19417 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0819 11:44:54.345935   19417 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0819 11:44:54.347734   19417 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0819 11:44:54.349756   19417 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0819 11:44:54.351858   19417 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
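	(`-checkend 86400` asks openssl whether the certificate expires within the next 86400 seconds, i.e. 24 hours: exit status 0 means it stays valid past that window, non-zero means it is expiring and should be regenerated. For example:)
	    # Exit 0 iff the cert remains valid for at least another 24h.
	    openssl x509 -noout -in /var/lib/minikube/certs/apiserver.crt -checkend 86400 \
	      && echo "valid for >24h" || echo "expires within 24h"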
	I0819 11:44:54.353588   19417 kubeadm.go:392] StartCluster: {Name:running-upgrade-409000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:53137 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-409000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0819 11:44:54.353656   19417 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0819 11:44:54.364539   19417 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0819 11:44:54.367634   19417 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0819 11:44:54.367639   19417 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0819 11:44:54.367664   19417 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0819 11:44:54.370542   19417 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0819 11:44:54.370582   19417 kubeconfig.go:47] verify endpoint returned: get endpoint: "running-upgrade-409000" does not appear in /Users/jenkins/minikube-integration/19423-17178/kubeconfig
	I0819 11:44:54.370597   19417 kubeconfig.go:62] /Users/jenkins/minikube-integration/19423-17178/kubeconfig needs updating (will repair): [kubeconfig missing "running-upgrade-409000" cluster setting kubeconfig missing "running-upgrade-409000" context setting]
	I0819 11:44:54.370777   19417 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19423-17178/kubeconfig: {Name:mkcd8a4d29cc5f324e197a69fe511a87d17c54d6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 11:44:54.371461   19417 kapi.go:59] client config for running-upgrade-409000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19423-17178/.minikube/profiles/running-upgrade-409000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19423-17178/.minikube/profiles/running-upgrade-409000/client.key", CAFile:"/Users/jenkins/minikube-integration/19423-17178/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x101cd1990), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0819 11:44:54.372340   19417 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0819 11:44:54.375120   19417 kubeadm.go:640] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -11,7 +11,7 @@
	       - signing
	       - authentication
	 nodeRegistration:
	-  criSocket: /var/run/cri-dockerd.sock
	+  criSocket: unix:///var/run/cri-dockerd.sock
	   name: "running-upgrade-409000"
	   kubeletExtraArgs:
	     node-ip: 10.0.2.15
	@@ -49,7 +49,9 @@
	 authentication:
	   x509:
	     clientCAFile: /var/lib/minikube/certs/ca.crt
	-cgroupDriver: systemd
	+cgroupDriver: cgroupfs
	+hairpinMode: hairpin-veth
	+runtimeRequestTimeout: 15m
	 clusterDomain: "cluster.local"
	 # disable disk resource management by default
	 imageGCHighThresholdPercent: 100
	
	-- /stdout --
	I0819 11:44:54.375130   19417 kubeadm.go:1160] stopping kube-system containers ...
	I0819 11:44:54.375172   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0819 11:44:54.392085   19417 docker.go:483] Stopping containers: [e841418b2ed3 1275d7ff9f0c c856d6e29342 2f08f9ae48fe 30834ba75a0e 316c13ef6511 c6e42f0936b0 b123ae1ba397 098a5dcc915e 253c19b19fab fbd10444d98a ef9cbfc1406e]
	I0819 11:44:54.392155   19417 ssh_runner.go:195] Run: docker stop e841418b2ed3 1275d7ff9f0c c856d6e29342 2f08f9ae48fe 30834ba75a0e 316c13ef6511 c6e42f0936b0 b123ae1ba397 098a5dcc915e 253c19b19fab fbd10444d98a ef9cbfc1406e
	I0819 11:44:54.402883   19417 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0819 11:44:54.487310   19417 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0819 11:44:54.491388   19417 kubeadm.go:157] found existing configuration files:
	-rw------- 1 root root 5643 Aug 19 18:44 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5649 Aug 19 18:44 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2027 Aug 19 18:44 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5601 Aug 19 18:44 /etc/kubernetes/scheduler.conf
	
	I0819 11:44:54.491423   19417 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:53137 /etc/kubernetes/admin.conf
	I0819 11:44:54.494577   19417 kubeadm.go:163] "https://control-plane.minikube.internal:53137" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:53137 /etc/kubernetes/admin.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0819 11:44:54.494608   19417 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0819 11:44:54.498191   19417 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:53137 /etc/kubernetes/kubelet.conf
	I0819 11:44:54.501612   19417 kubeadm.go:163] "https://control-plane.minikube.internal:53137" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:53137 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0819 11:44:54.501636   19417 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0819 11:44:54.504829   19417 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:53137 /etc/kubernetes/controller-manager.conf
	I0819 11:44:54.507576   19417 kubeadm.go:163] "https://control-plane.minikube.internal:53137" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:53137 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0819 11:44:54.507606   19417 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0819 11:44:54.510300   19417 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:53137 /etc/kubernetes/scheduler.conf
	I0819 11:44:54.513393   19417 kubeadm.go:163] "https://control-plane.minikube.internal:53137" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:53137 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0819 11:44:54.513416   19417 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
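[Editor's note] The four grep/rm pairs above all apply the same rule: a kubeconfig under /etc/kubernetes survives only if it already references the expected control-plane endpoint; otherwise it is deleted so the upcoming "kubeadm init phase kubeconfig" regenerates it. A compact sketch of that pass, with the endpoint and file names taken from the log:

    endpoint="https://control-plane.minikube.internal:53137"
    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
      # keep the file only if it already points at the expected endpoint
      sudo grep -q "$endpoint" "/etc/kubernetes/$f" || sudo rm -f "/etc/kubernetes/$f"
    done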
	I0819 11:44:54.516283   19417 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0819 11:44:54.519090   19417 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0819 11:44:54.539311   19417 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0819 11:44:54.908612   19417 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0819 11:44:55.279237   19417 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0819 11:44:55.306563   19417 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
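[Editor's note] Rather than a full "kubeadm init", minikube repairs in place: it promotes the new config and re-runs only the individual init phases, in the order recorded above. Collapsed into one script (binary path and phase order copied from the log):

    sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
    # $phase is deliberately unquoted so "certs all" splits into subcommand + argument
    for phase in "certs all" "kubeconfig all" "kubelet-start" "control-plane all" "etcd local"; do
      sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" \
        kubeadm init phase $phase --config /var/tmp/minikube/kubeadm.yaml
    done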
	I0819 11:44:55.332450   19417 api_server.go:52] waiting for apiserver process to appear ...
	I0819 11:44:55.332547   19417 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 11:44:55.834975   19417 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 11:44:56.334600   19417 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 11:44:56.338987   19417 api_server.go:72] duration metric: took 1.006543125s to wait for apiserver process to appear ...
	I0819 11:44:56.338995   19417 api_server.go:88] waiting for apiserver healthz status ...
	I0819 11:44:56.339005   19417 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:45:01.341134   19417 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:45:01.341167   19417 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:45:06.341598   19417 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:45:06.341668   19417 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:45:11.342457   19417 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:45:11.342541   19417 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:45:16.343735   19417 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:45:16.343826   19417 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:45:21.345383   19417 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:45:21.345495   19417 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:45:26.347458   19417 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:45:26.347538   19417 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:45:31.349994   19417 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:45:31.350076   19417 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:45:36.352754   19417 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:45:36.352839   19417 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:45:41.355005   19417 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:45:41.355091   19417 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:45:46.357732   19417 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:45:46.357809   19417 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:45:51.360492   19417 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:45:51.360567   19417 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:45:56.363067   19417 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
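[Editor's note] Each "Checking apiserver healthz" / "stopped" pair above is one probe attempt: an HTTPS GET to /healthz that gives up after roughly five seconds without response headers, which is what "context deadline exceeded (Client.Timeout exceeded while awaiting headers)" records. A minimal stand-in for the probe, to be run inside the guest since 10.0.2.15 is the QEMU user-mode guest address (certificate verification skipped here for brevity):

    # prints "ok" only if the apiserver answers /healthz within 5 seconds
    curl -sk --max-time 5 https://10.0.2.15:8443/healthz && echo ok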
	I0819 11:45:56.363507   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 11:45:56.396453   19417 logs.go:276] 2 containers: [5e36e77adae3 c856d6e29342]
	I0819 11:45:56.396592   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 11:45:56.416528   19417 logs.go:276] 2 containers: [22d8a4cc2e5d c6e42f0936b0]
	I0819 11:45:56.416634   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 11:45:56.431179   19417 logs.go:276] 1 containers: [b4aa030d5e41]
	I0819 11:45:56.431263   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 11:45:56.443501   19417 logs.go:276] 2 containers: [1b1a6c62fc93 098a5dcc915e]
	I0819 11:45:56.443576   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 11:45:56.453927   19417 logs.go:276] 1 containers: [f1268c4fc5da]
	I0819 11:45:56.453991   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 11:45:56.464397   19417 logs.go:276] 2 containers: [2b6fc57c9dd4 2f08f9ae48fe]
	I0819 11:45:56.464461   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 11:45:56.474171   19417 logs.go:276] 0 containers: []
	W0819 11:45:56.474183   19417 logs.go:278] No container was found matching "kindnet"
	I0819 11:45:56.474237   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 11:45:56.484669   19417 logs.go:276] 2 containers: [a5053756fa3b c34ae12a902c]
	I0819 11:45:56.484684   19417 logs.go:123] Gathering logs for kube-apiserver [5e36e77adae3] ...
	I0819 11:45:56.484690   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e36e77adae3"
	I0819 11:45:56.503131   19417 logs.go:123] Gathering logs for kube-apiserver [c856d6e29342] ...
	I0819 11:45:56.503142   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c856d6e29342"
	I0819 11:45:56.515955   19417 logs.go:123] Gathering logs for etcd [c6e42f0936b0] ...
	I0819 11:45:56.515966   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c6e42f0936b0"
	I0819 11:45:56.530351   19417 logs.go:123] Gathering logs for storage-provisioner [c34ae12a902c] ...
	I0819 11:45:56.530362   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c34ae12a902c"
	I0819 11:45:56.545476   19417 logs.go:123] Gathering logs for Docker ...
	I0819 11:45:56.545488   19417 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 11:45:56.571943   19417 logs.go:123] Gathering logs for kubelet ...
	I0819 11:45:56.571951   19417 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 11:45:56.608753   19417 logs.go:123] Gathering logs for kube-proxy [f1268c4fc5da] ...
	I0819 11:45:56.608761   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f1268c4fc5da"
	I0819 11:45:56.619895   19417 logs.go:123] Gathering logs for kube-controller-manager [2b6fc57c9dd4] ...
	I0819 11:45:56.619909   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2b6fc57c9dd4"
	I0819 11:45:56.643535   19417 logs.go:123] Gathering logs for dmesg ...
	I0819 11:45:56.643545   19417 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 11:45:56.648041   19417 logs.go:123] Gathering logs for etcd [22d8a4cc2e5d] ...
	I0819 11:45:56.648050   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 22d8a4cc2e5d"
	I0819 11:45:56.661338   19417 logs.go:123] Gathering logs for coredns [b4aa030d5e41] ...
	I0819 11:45:56.661356   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4aa030d5e41"
	I0819 11:45:56.672142   19417 logs.go:123] Gathering logs for kube-scheduler [1b1a6c62fc93] ...
	I0819 11:45:56.672153   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b1a6c62fc93"
	I0819 11:45:56.683523   19417 logs.go:123] Gathering logs for kube-scheduler [098a5dcc915e] ...
	I0819 11:45:56.683535   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 098a5dcc915e"
	I0819 11:45:56.699389   19417 logs.go:123] Gathering logs for storage-provisioner [a5053756fa3b] ...
	I0819 11:45:56.699401   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5053756fa3b"
	I0819 11:45:56.710519   19417 logs.go:123] Gathering logs for container status ...
	I0819 11:45:56.710530   19417 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 11:45:56.722435   19417 logs.go:123] Gathering logs for describe nodes ...
	I0819 11:45:56.722450   19417 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 11:45:56.795176   19417 logs.go:123] Gathering logs for kube-controller-manager [2f08f9ae48fe] ...
	I0819 11:45:56.795191   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f08f9ae48fe"
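[Editor's note] After a failed probe, the harness snapshots cluster state: for each control-plane component it lists matching containers by name filter, then tails the last 400 lines of each; the same cycle repeats after every subsequent timeout below. The loop, reduced to its shell equivalent using the exact commands from the log:

    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
             kube-controller-manager storage-provisioner; do
      for id in $(docker ps -a --filter "name=k8s_${c}" --format '{{.ID}}'); do
        echo "== ${c} (${id}) =="
        docker logs --tail 400 "$id"
      done
    done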
	I0819 11:45:59.308700   19417 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:46:04.311310   19417 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:46:04.311696   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 11:46:04.347090   19417 logs.go:276] 2 containers: [5e36e77adae3 c856d6e29342]
	I0819 11:46:04.347215   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 11:46:04.368201   19417 logs.go:276] 2 containers: [22d8a4cc2e5d c6e42f0936b0]
	I0819 11:46:04.368303   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 11:46:04.383739   19417 logs.go:276] 1 containers: [b4aa030d5e41]
	I0819 11:46:04.383818   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 11:46:04.395852   19417 logs.go:276] 2 containers: [1b1a6c62fc93 098a5dcc915e]
	I0819 11:46:04.395922   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 11:46:04.407364   19417 logs.go:276] 1 containers: [f1268c4fc5da]
	I0819 11:46:04.407432   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 11:46:04.419391   19417 logs.go:276] 2 containers: [2b6fc57c9dd4 2f08f9ae48fe]
	I0819 11:46:04.419456   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 11:46:04.429742   19417 logs.go:276] 0 containers: []
	W0819 11:46:04.429755   19417 logs.go:278] No container was found matching "kindnet"
	I0819 11:46:04.429806   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 11:46:04.440273   19417 logs.go:276] 2 containers: [a5053756fa3b c34ae12a902c]
	I0819 11:46:04.440292   19417 logs.go:123] Gathering logs for dmesg ...
	I0819 11:46:04.440298   19417 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 11:46:04.444686   19417 logs.go:123] Gathering logs for kube-proxy [f1268c4fc5da] ...
	I0819 11:46:04.444694   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f1268c4fc5da"
	I0819 11:46:04.456744   19417 logs.go:123] Gathering logs for storage-provisioner [a5053756fa3b] ...
	I0819 11:46:04.456758   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5053756fa3b"
	I0819 11:46:04.469626   19417 logs.go:123] Gathering logs for kube-scheduler [1b1a6c62fc93] ...
	I0819 11:46:04.469639   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b1a6c62fc93"
	I0819 11:46:04.481733   19417 logs.go:123] Gathering logs for kube-scheduler [098a5dcc915e] ...
	I0819 11:46:04.481746   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 098a5dcc915e"
	I0819 11:46:04.504907   19417 logs.go:123] Gathering logs for storage-provisioner [c34ae12a902c] ...
	I0819 11:46:04.504919   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c34ae12a902c"
	I0819 11:46:04.520184   19417 logs.go:123] Gathering logs for container status ...
	I0819 11:46:04.520200   19417 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 11:46:04.532632   19417 logs.go:123] Gathering logs for describe nodes ...
	I0819 11:46:04.532644   19417 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 11:46:04.571446   19417 logs.go:123] Gathering logs for etcd [c6e42f0936b0] ...
	I0819 11:46:04.571459   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c6e42f0936b0"
	I0819 11:46:04.586402   19417 logs.go:123] Gathering logs for coredns [b4aa030d5e41] ...
	I0819 11:46:04.586413   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4aa030d5e41"
	I0819 11:46:04.598749   19417 logs.go:123] Gathering logs for etcd [22d8a4cc2e5d] ...
	I0819 11:46:04.598759   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 22d8a4cc2e5d"
	I0819 11:46:04.612680   19417 logs.go:123] Gathering logs for kubelet ...
	I0819 11:46:04.612692   19417 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 11:46:04.647723   19417 logs.go:123] Gathering logs for kube-apiserver [5e36e77adae3] ...
	I0819 11:46:04.647730   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e36e77adae3"
	I0819 11:46:04.661741   19417 logs.go:123] Gathering logs for kube-apiserver [c856d6e29342] ...
	I0819 11:46:04.661752   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c856d6e29342"
	I0819 11:46:04.674020   19417 logs.go:123] Gathering logs for kube-controller-manager [2b6fc57c9dd4] ...
	I0819 11:46:04.674034   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2b6fc57c9dd4"
	I0819 11:46:04.696689   19417 logs.go:123] Gathering logs for kube-controller-manager [2f08f9ae48fe] ...
	I0819 11:46:04.696699   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f08f9ae48fe"
	I0819 11:46:04.707541   19417 logs.go:123] Gathering logs for Docker ...
	I0819 11:46:04.707556   19417 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 11:46:07.234409   19417 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:46:12.236004   19417 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:46:12.236350   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 11:46:12.278267   19417 logs.go:276] 2 containers: [5e36e77adae3 c856d6e29342]
	I0819 11:46:12.278373   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 11:46:12.295633   19417 logs.go:276] 2 containers: [22d8a4cc2e5d c6e42f0936b0]
	I0819 11:46:12.295724   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 11:46:12.309301   19417 logs.go:276] 1 containers: [b4aa030d5e41]
	I0819 11:46:12.309374   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 11:46:12.328993   19417 logs.go:276] 2 containers: [1b1a6c62fc93 098a5dcc915e]
	I0819 11:46:12.329056   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 11:46:12.339426   19417 logs.go:276] 1 containers: [f1268c4fc5da]
	I0819 11:46:12.339483   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 11:46:12.350591   19417 logs.go:276] 2 containers: [2b6fc57c9dd4 2f08f9ae48fe]
	I0819 11:46:12.350661   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 11:46:12.361364   19417 logs.go:276] 0 containers: []
	W0819 11:46:12.361379   19417 logs.go:278] No container was found matching "kindnet"
	I0819 11:46:12.361434   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 11:46:12.372160   19417 logs.go:276] 2 containers: [a5053756fa3b c34ae12a902c]
	I0819 11:46:12.372180   19417 logs.go:123] Gathering logs for kubelet ...
	I0819 11:46:12.372186   19417 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 11:46:12.407394   19417 logs.go:123] Gathering logs for kube-apiserver [c856d6e29342] ...
	I0819 11:46:12.407404   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c856d6e29342"
	I0819 11:46:12.420198   19417 logs.go:123] Gathering logs for coredns [b4aa030d5e41] ...
	I0819 11:46:12.420212   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4aa030d5e41"
	I0819 11:46:12.431695   19417 logs.go:123] Gathering logs for kube-scheduler [098a5dcc915e] ...
	I0819 11:46:12.431708   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 098a5dcc915e"
	I0819 11:46:12.447440   19417 logs.go:123] Gathering logs for storage-provisioner [c34ae12a902c] ...
	I0819 11:46:12.447454   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c34ae12a902c"
	I0819 11:46:12.460068   19417 logs.go:123] Gathering logs for describe nodes ...
	I0819 11:46:12.460080   19417 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 11:46:12.495668   19417 logs.go:123] Gathering logs for kube-apiserver [5e36e77adae3] ...
	I0819 11:46:12.495679   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e36e77adae3"
	I0819 11:46:12.510859   19417 logs.go:123] Gathering logs for dmesg ...
	I0819 11:46:12.510874   19417 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 11:46:12.515579   19417 logs.go:123] Gathering logs for etcd [22d8a4cc2e5d] ...
	I0819 11:46:12.515585   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 22d8a4cc2e5d"
	I0819 11:46:12.529559   19417 logs.go:123] Gathering logs for kube-scheduler [1b1a6c62fc93] ...
	I0819 11:46:12.529571   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b1a6c62fc93"
	I0819 11:46:12.541193   19417 logs.go:123] Gathering logs for kube-controller-manager [2b6fc57c9dd4] ...
	I0819 11:46:12.541207   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2b6fc57c9dd4"
	I0819 11:46:12.558238   19417 logs.go:123] Gathering logs for container status ...
	I0819 11:46:12.558248   19417 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 11:46:12.570380   19417 logs.go:123] Gathering logs for etcd [c6e42f0936b0] ...
	I0819 11:46:12.570391   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c6e42f0936b0"
	I0819 11:46:12.584689   19417 logs.go:123] Gathering logs for kube-proxy [f1268c4fc5da] ...
	I0819 11:46:12.584698   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f1268c4fc5da"
	I0819 11:46:12.596333   19417 logs.go:123] Gathering logs for kube-controller-manager [2f08f9ae48fe] ...
	I0819 11:46:12.596347   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f08f9ae48fe"
	I0819 11:46:12.608077   19417 logs.go:123] Gathering logs for storage-provisioner [a5053756fa3b] ...
	I0819 11:46:12.608092   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5053756fa3b"
	I0819 11:46:12.621264   19417 logs.go:123] Gathering logs for Docker ...
	I0819 11:46:12.621273   19417 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 11:46:15.147888   19417 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:46:20.150643   19417 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:46:20.151133   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 11:46:20.190506   19417 logs.go:276] 2 containers: [5e36e77adae3 c856d6e29342]
	I0819 11:46:20.190644   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 11:46:20.215385   19417 logs.go:276] 2 containers: [22d8a4cc2e5d c6e42f0936b0]
	I0819 11:46:20.215498   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 11:46:20.232630   19417 logs.go:276] 1 containers: [b4aa030d5e41]
	I0819 11:46:20.232702   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 11:46:20.244620   19417 logs.go:276] 2 containers: [1b1a6c62fc93 098a5dcc915e]
	I0819 11:46:20.244683   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 11:46:20.255321   19417 logs.go:276] 1 containers: [f1268c4fc5da]
	I0819 11:46:20.255384   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 11:46:20.266768   19417 logs.go:276] 2 containers: [2b6fc57c9dd4 2f08f9ae48fe]
	I0819 11:46:20.266830   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 11:46:20.284315   19417 logs.go:276] 0 containers: []
	W0819 11:46:20.284324   19417 logs.go:278] No container was found matching "kindnet"
	I0819 11:46:20.284372   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 11:46:20.294926   19417 logs.go:276] 2 containers: [a5053756fa3b c34ae12a902c]
	I0819 11:46:20.294943   19417 logs.go:123] Gathering logs for kubelet ...
	I0819 11:46:20.294947   19417 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 11:46:20.330155   19417 logs.go:123] Gathering logs for etcd [22d8a4cc2e5d] ...
	I0819 11:46:20.330163   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 22d8a4cc2e5d"
	I0819 11:46:20.344549   19417 logs.go:123] Gathering logs for dmesg ...
	I0819 11:46:20.344559   19417 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 11:46:20.348835   19417 logs.go:123] Gathering logs for kube-apiserver [c856d6e29342] ...
	I0819 11:46:20.348843   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c856d6e29342"
	I0819 11:46:20.375186   19417 logs.go:123] Gathering logs for kube-controller-manager [2b6fc57c9dd4] ...
	I0819 11:46:20.375197   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2b6fc57c9dd4"
	I0819 11:46:20.392501   19417 logs.go:123] Gathering logs for etcd [c6e42f0936b0] ...
	I0819 11:46:20.392516   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c6e42f0936b0"
	I0819 11:46:20.407384   19417 logs.go:123] Gathering logs for coredns [b4aa030d5e41] ...
	I0819 11:46:20.407394   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4aa030d5e41"
	I0819 11:46:20.418135   19417 logs.go:123] Gathering logs for kube-proxy [f1268c4fc5da] ...
	I0819 11:46:20.418149   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f1268c4fc5da"
	I0819 11:46:20.429582   19417 logs.go:123] Gathering logs for kube-scheduler [098a5dcc915e] ...
	I0819 11:46:20.429591   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 098a5dcc915e"
	I0819 11:46:20.445342   19417 logs.go:123] Gathering logs for kube-controller-manager [2f08f9ae48fe] ...
	I0819 11:46:20.445355   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f08f9ae48fe"
	I0819 11:46:20.456391   19417 logs.go:123] Gathering logs for storage-provisioner [a5053756fa3b] ...
	I0819 11:46:20.456403   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5053756fa3b"
	I0819 11:46:20.468155   19417 logs.go:123] Gathering logs for storage-provisioner [c34ae12a902c] ...
	I0819 11:46:20.468167   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c34ae12a902c"
	I0819 11:46:20.481521   19417 logs.go:123] Gathering logs for Docker ...
	I0819 11:46:20.481532   19417 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 11:46:20.508020   19417 logs.go:123] Gathering logs for describe nodes ...
	I0819 11:46:20.508030   19417 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 11:46:20.541528   19417 logs.go:123] Gathering logs for kube-apiserver [5e36e77adae3] ...
	I0819 11:46:20.541542   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e36e77adae3"
	I0819 11:46:20.556029   19417 logs.go:123] Gathering logs for kube-scheduler [1b1a6c62fc93] ...
	I0819 11:46:20.556043   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b1a6c62fc93"
	I0819 11:46:20.568490   19417 logs.go:123] Gathering logs for container status ...
	I0819 11:46:20.568502   19417 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 11:46:23.081241   19417 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:46:28.084112   19417 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:46:28.084548   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 11:46:28.124708   19417 logs.go:276] 2 containers: [5e36e77adae3 c856d6e29342]
	I0819 11:46:28.124826   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 11:46:28.146425   19417 logs.go:276] 2 containers: [22d8a4cc2e5d c6e42f0936b0]
	I0819 11:46:28.146511   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 11:46:28.161715   19417 logs.go:276] 1 containers: [b4aa030d5e41]
	I0819 11:46:28.161794   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 11:46:28.176401   19417 logs.go:276] 2 containers: [1b1a6c62fc93 098a5dcc915e]
	I0819 11:46:28.176472   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 11:46:28.188322   19417 logs.go:276] 1 containers: [f1268c4fc5da]
	I0819 11:46:28.188386   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 11:46:28.198783   19417 logs.go:276] 2 containers: [2b6fc57c9dd4 2f08f9ae48fe]
	I0819 11:46:28.198841   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 11:46:28.208822   19417 logs.go:276] 0 containers: []
	W0819 11:46:28.208834   19417 logs.go:278] No container was found matching "kindnet"
	I0819 11:46:28.208891   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 11:46:28.219107   19417 logs.go:276] 2 containers: [a5053756fa3b c34ae12a902c]
	I0819 11:46:28.219128   19417 logs.go:123] Gathering logs for etcd [c6e42f0936b0] ...
	I0819 11:46:28.219133   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c6e42f0936b0"
	I0819 11:46:28.236247   19417 logs.go:123] Gathering logs for kube-proxy [f1268c4fc5da] ...
	I0819 11:46:28.236260   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f1268c4fc5da"
	I0819 11:46:28.248014   19417 logs.go:123] Gathering logs for kube-controller-manager [2f08f9ae48fe] ...
	I0819 11:46:28.248027   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f08f9ae48fe"
	I0819 11:46:28.265584   19417 logs.go:123] Gathering logs for storage-provisioner [a5053756fa3b] ...
	I0819 11:46:28.265598   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5053756fa3b"
	I0819 11:46:28.277098   19417 logs.go:123] Gathering logs for kubelet ...
	I0819 11:46:28.277109   19417 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 11:46:28.312716   19417 logs.go:123] Gathering logs for storage-provisioner [c34ae12a902c] ...
	I0819 11:46:28.312726   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c34ae12a902c"
	I0819 11:46:28.323875   19417 logs.go:123] Gathering logs for container status ...
	I0819 11:46:28.323888   19417 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 11:46:28.335573   19417 logs.go:123] Gathering logs for kube-scheduler [098a5dcc915e] ...
	I0819 11:46:28.335587   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 098a5dcc915e"
	I0819 11:46:28.351260   19417 logs.go:123] Gathering logs for dmesg ...
	I0819 11:46:28.351273   19417 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 11:46:28.355624   19417 logs.go:123] Gathering logs for etcd [22d8a4cc2e5d] ...
	I0819 11:46:28.355634   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 22d8a4cc2e5d"
	I0819 11:46:28.369253   19417 logs.go:123] Gathering logs for coredns [b4aa030d5e41] ...
	I0819 11:46:28.369263   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4aa030d5e41"
	I0819 11:46:28.380382   19417 logs.go:123] Gathering logs for kube-scheduler [1b1a6c62fc93] ...
	I0819 11:46:28.380396   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b1a6c62fc93"
	I0819 11:46:28.392020   19417 logs.go:123] Gathering logs for Docker ...
	I0819 11:46:28.392030   19417 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 11:46:28.418159   19417 logs.go:123] Gathering logs for describe nodes ...
	I0819 11:46:28.418167   19417 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 11:46:28.456531   19417 logs.go:123] Gathering logs for kube-apiserver [5e36e77adae3] ...
	I0819 11:46:28.456545   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e36e77adae3"
	I0819 11:46:28.474783   19417 logs.go:123] Gathering logs for kube-apiserver [c856d6e29342] ...
	I0819 11:46:28.474794   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c856d6e29342"
	I0819 11:46:28.487178   19417 logs.go:123] Gathering logs for kube-controller-manager [2b6fc57c9dd4] ...
	I0819 11:46:28.487190   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2b6fc57c9dd4"
	I0819 11:46:31.005225   19417 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:46:36.008075   19417 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:46:36.008485   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 11:46:36.041633   19417 logs.go:276] 2 containers: [5e36e77adae3 c856d6e29342]
	I0819 11:46:36.041756   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 11:46:36.061418   19417 logs.go:276] 2 containers: [22d8a4cc2e5d c6e42f0936b0]
	I0819 11:46:36.061523   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 11:46:36.079044   19417 logs.go:276] 1 containers: [b4aa030d5e41]
	I0819 11:46:36.079118   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 11:46:36.090585   19417 logs.go:276] 2 containers: [1b1a6c62fc93 098a5dcc915e]
	I0819 11:46:36.090655   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 11:46:36.101481   19417 logs.go:276] 1 containers: [f1268c4fc5da]
	I0819 11:46:36.101541   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 11:46:36.112012   19417 logs.go:276] 2 containers: [2b6fc57c9dd4 2f08f9ae48fe]
	I0819 11:46:36.112078   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 11:46:36.122283   19417 logs.go:276] 0 containers: []
	W0819 11:46:36.122293   19417 logs.go:278] No container was found matching "kindnet"
	I0819 11:46:36.122340   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 11:46:36.133235   19417 logs.go:276] 2 containers: [a5053756fa3b c34ae12a902c]
	I0819 11:46:36.133257   19417 logs.go:123] Gathering logs for storage-provisioner [c34ae12a902c] ...
	I0819 11:46:36.133262   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c34ae12a902c"
	I0819 11:46:36.144797   19417 logs.go:123] Gathering logs for describe nodes ...
	I0819 11:46:36.144813   19417 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 11:46:36.179045   19417 logs.go:123] Gathering logs for etcd [22d8a4cc2e5d] ...
	I0819 11:46:36.179058   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 22d8a4cc2e5d"
	I0819 11:46:36.193376   19417 logs.go:123] Gathering logs for etcd [c6e42f0936b0] ...
	I0819 11:46:36.193388   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c6e42f0936b0"
	I0819 11:46:36.207944   19417 logs.go:123] Gathering logs for coredns [b4aa030d5e41] ...
	I0819 11:46:36.207957   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4aa030d5e41"
	I0819 11:46:36.219435   19417 logs.go:123] Gathering logs for kube-scheduler [1b1a6c62fc93] ...
	I0819 11:46:36.219447   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b1a6c62fc93"
	I0819 11:46:36.235477   19417 logs.go:123] Gathering logs for kube-scheduler [098a5dcc915e] ...
	I0819 11:46:36.235491   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 098a5dcc915e"
	I0819 11:46:36.251074   19417 logs.go:123] Gathering logs for kube-controller-manager [2b6fc57c9dd4] ...
	I0819 11:46:36.251084   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2b6fc57c9dd4"
	I0819 11:46:36.268696   19417 logs.go:123] Gathering logs for container status ...
	I0819 11:46:36.268708   19417 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 11:46:36.281730   19417 logs.go:123] Gathering logs for kubelet ...
	I0819 11:46:36.281740   19417 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 11:46:36.318453   19417 logs.go:123] Gathering logs for kube-apiserver [c856d6e29342] ...
	I0819 11:46:36.318462   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c856d6e29342"
	I0819 11:46:36.331194   19417 logs.go:123] Gathering logs for kube-controller-manager [2f08f9ae48fe] ...
	I0819 11:46:36.331207   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f08f9ae48fe"
	I0819 11:46:36.342932   19417 logs.go:123] Gathering logs for kube-apiserver [5e36e77adae3] ...
	I0819 11:46:36.342943   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e36e77adae3"
	I0819 11:46:36.356722   19417 logs.go:123] Gathering logs for dmesg ...
	I0819 11:46:36.356736   19417 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 11:46:36.361177   19417 logs.go:123] Gathering logs for kube-proxy [f1268c4fc5da] ...
	I0819 11:46:36.361186   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f1268c4fc5da"
	I0819 11:46:36.372929   19417 logs.go:123] Gathering logs for storage-provisioner [a5053756fa3b] ...
	I0819 11:46:36.372943   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5053756fa3b"
	I0819 11:46:36.384766   19417 logs.go:123] Gathering logs for Docker ...
	I0819 11:46:36.384775   19417 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 11:46:38.910949   19417 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:46:43.913132   19417 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:46:43.913454   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 11:46:43.953657   19417 logs.go:276] 2 containers: [5e36e77adae3 c856d6e29342]
	I0819 11:46:43.953781   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 11:46:43.976070   19417 logs.go:276] 2 containers: [22d8a4cc2e5d c6e42f0936b0]
	I0819 11:46:43.976145   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 11:46:43.991070   19417 logs.go:276] 1 containers: [b4aa030d5e41]
	I0819 11:46:43.991151   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 11:46:44.005582   19417 logs.go:276] 2 containers: [1b1a6c62fc93 098a5dcc915e]
	I0819 11:46:44.005645   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 11:46:44.017237   19417 logs.go:276] 1 containers: [f1268c4fc5da]
	I0819 11:46:44.017303   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 11:46:44.033411   19417 logs.go:276] 2 containers: [2b6fc57c9dd4 2f08f9ae48fe]
	I0819 11:46:44.033460   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 11:46:44.045504   19417 logs.go:276] 0 containers: []
	W0819 11:46:44.045516   19417 logs.go:278] No container was found matching "kindnet"
	I0819 11:46:44.045555   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 11:46:44.057869   19417 logs.go:276] 2 containers: [a5053756fa3b c34ae12a902c]
	I0819 11:46:44.057888   19417 logs.go:123] Gathering logs for dmesg ...
	I0819 11:46:44.057893   19417 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 11:46:44.063116   19417 logs.go:123] Gathering logs for kube-apiserver [5e36e77adae3] ...
	I0819 11:46:44.063128   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e36e77adae3"
	I0819 11:46:44.078585   19417 logs.go:123] Gathering logs for kube-controller-manager [2f08f9ae48fe] ...
	I0819 11:46:44.078596   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f08f9ae48fe"
	I0819 11:46:44.090246   19417 logs.go:123] Gathering logs for storage-provisioner [a5053756fa3b] ...
	I0819 11:46:44.090258   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5053756fa3b"
	I0819 11:46:44.101859   19417 logs.go:123] Gathering logs for etcd [22d8a4cc2e5d] ...
	I0819 11:46:44.101870   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 22d8a4cc2e5d"
	I0819 11:46:44.116320   19417 logs.go:123] Gathering logs for etcd [c6e42f0936b0] ...
	I0819 11:46:44.116332   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c6e42f0936b0"
	I0819 11:46:44.131143   19417 logs.go:123] Gathering logs for kube-controller-manager [2b6fc57c9dd4] ...
	I0819 11:46:44.131153   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2b6fc57c9dd4"
	I0819 11:46:44.148882   19417 logs.go:123] Gathering logs for Docker ...
	I0819 11:46:44.148893   19417 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 11:46:44.173008   19417 logs.go:123] Gathering logs for kubelet ...
	I0819 11:46:44.173018   19417 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 11:46:44.207855   19417 logs.go:123] Gathering logs for kube-scheduler [098a5dcc915e] ...
	I0819 11:46:44.207862   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 098a5dcc915e"
	I0819 11:46:44.228981   19417 logs.go:123] Gathering logs for kube-proxy [f1268c4fc5da] ...
	I0819 11:46:44.228992   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f1268c4fc5da"
	I0819 11:46:44.244354   19417 logs.go:123] Gathering logs for storage-provisioner [c34ae12a902c] ...
	I0819 11:46:44.244370   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c34ae12a902c"
	I0819 11:46:44.259185   19417 logs.go:123] Gathering logs for container status ...
	I0819 11:46:44.259197   19417 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 11:46:44.270519   19417 logs.go:123] Gathering logs for describe nodes ...
	I0819 11:46:44.270532   19417 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 11:46:44.304626   19417 logs.go:123] Gathering logs for kube-apiserver [c856d6e29342] ...
	I0819 11:46:44.304638   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c856d6e29342"
	I0819 11:46:44.319084   19417 logs.go:123] Gathering logs for coredns [b4aa030d5e41] ...
	I0819 11:46:44.319096   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4aa030d5e41"
	I0819 11:46:44.330420   19417 logs.go:123] Gathering logs for kube-scheduler [1b1a6c62fc93] ...
	I0819 11:46:44.330432   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b1a6c62fc93"
	I0819 11:46:46.849928   19417 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:46:51.852210   19417 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:46:51.852386   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 11:46:51.864745   19417 logs.go:276] 2 containers: [5e36e77adae3 c856d6e29342]
	I0819 11:46:51.864819   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 11:46:51.875800   19417 logs.go:276] 2 containers: [22d8a4cc2e5d c6e42f0936b0]
	I0819 11:46:51.875873   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 11:46:51.886385   19417 logs.go:276] 1 containers: [b4aa030d5e41]
	I0819 11:46:51.886448   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 11:46:51.901313   19417 logs.go:276] 2 containers: [1b1a6c62fc93 098a5dcc915e]
	I0819 11:46:51.901385   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 11:46:51.912392   19417 logs.go:276] 1 containers: [f1268c4fc5da]
	I0819 11:46:51.912455   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 11:46:51.923021   19417 logs.go:276] 2 containers: [2b6fc57c9dd4 2f08f9ae48fe]
	I0819 11:46:51.923086   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 11:46:51.933632   19417 logs.go:276] 0 containers: []
	W0819 11:46:51.933645   19417 logs.go:278] No container was found matching "kindnet"
	I0819 11:46:51.933697   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 11:46:51.944450   19417 logs.go:276] 2 containers: [a5053756fa3b c34ae12a902c]
	I0819 11:46:51.944468   19417 logs.go:123] Gathering logs for kube-apiserver [c856d6e29342] ...
	I0819 11:46:51.944474   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c856d6e29342"
	I0819 11:46:51.959922   19417 logs.go:123] Gathering logs for coredns [b4aa030d5e41] ...
	I0819 11:46:51.959933   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4aa030d5e41"
	I0819 11:46:51.971646   19417 logs.go:123] Gathering logs for kube-proxy [f1268c4fc5da] ...
	I0819 11:46:51.971658   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f1268c4fc5da"
	I0819 11:46:51.983574   19417 logs.go:123] Gathering logs for kube-controller-manager [2b6fc57c9dd4] ...
	I0819 11:46:51.983586   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2b6fc57c9dd4"
	I0819 11:46:52.003415   19417 logs.go:123] Gathering logs for kube-apiserver [5e36e77adae3] ...
	I0819 11:46:52.003426   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e36e77adae3"
	I0819 11:46:52.017290   19417 logs.go:123] Gathering logs for etcd [c6e42f0936b0] ...
	I0819 11:46:52.017299   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c6e42f0936b0"
	I0819 11:46:52.032221   19417 logs.go:123] Gathering logs for container status ...
	I0819 11:46:52.032232   19417 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 11:46:52.046854   19417 logs.go:123] Gathering logs for describe nodes ...
	I0819 11:46:52.046866   19417 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 11:46:52.084320   19417 logs.go:123] Gathering logs for etcd [22d8a4cc2e5d] ...
	I0819 11:46:52.084335   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 22d8a4cc2e5d"
	I0819 11:46:52.098760   19417 logs.go:123] Gathering logs for kube-scheduler [098a5dcc915e] ...
	I0819 11:46:52.098775   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 098a5dcc915e"
	I0819 11:46:52.114835   19417 logs.go:123] Gathering logs for storage-provisioner [a5053756fa3b] ...
	I0819 11:46:52.114845   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5053756fa3b"
	I0819 11:46:52.127284   19417 logs.go:123] Gathering logs for storage-provisioner [c34ae12a902c] ...
	I0819 11:46:52.127298   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c34ae12a902c"
	I0819 11:46:52.138749   19417 logs.go:123] Gathering logs for kubelet ...
	I0819 11:46:52.138761   19417 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 11:46:52.176387   19417 logs.go:123] Gathering logs for dmesg ...
	I0819 11:46:52.176395   19417 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 11:46:52.180651   19417 logs.go:123] Gathering logs for kube-scheduler [1b1a6c62fc93] ...
	I0819 11:46:52.180660   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b1a6c62fc93"
	I0819 11:46:52.192299   19417 logs.go:123] Gathering logs for kube-controller-manager [2f08f9ae48fe] ...
	I0819 11:46:52.192309   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f08f9ae48fe"
	I0819 11:46:52.203881   19417 logs.go:123] Gathering logs for Docker ...
	I0819 11:46:52.203892   19417 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 11:46:54.730715   19417 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:46:59.732963   19417 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:46:59.733242   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 11:46:59.761931   19417 logs.go:276] 2 containers: [5e36e77adae3 c856d6e29342]
	I0819 11:46:59.762048   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 11:46:59.783908   19417 logs.go:276] 2 containers: [22d8a4cc2e5d c6e42f0936b0]
	I0819 11:46:59.783986   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 11:46:59.796612   19417 logs.go:276] 1 containers: [b4aa030d5e41]
	I0819 11:46:59.796676   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 11:46:59.807600   19417 logs.go:276] 2 containers: [1b1a6c62fc93 098a5dcc915e]
	I0819 11:46:59.807675   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 11:46:59.817652   19417 logs.go:276] 1 containers: [f1268c4fc5da]
	I0819 11:46:59.817723   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 11:46:59.828409   19417 logs.go:276] 2 containers: [2b6fc57c9dd4 2f08f9ae48fe]
	I0819 11:46:59.828469   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 11:46:59.839023   19417 logs.go:276] 0 containers: []
	W0819 11:46:59.839035   19417 logs.go:278] No container was found matching "kindnet"
	I0819 11:46:59.839087   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 11:46:59.853787   19417 logs.go:276] 2 containers: [a5053756fa3b c34ae12a902c]
	I0819 11:46:59.853804   19417 logs.go:123] Gathering logs for describe nodes ...
	I0819 11:46:59.853809   19417 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 11:46:59.890097   19417 logs.go:123] Gathering logs for etcd [22d8a4cc2e5d] ...
	I0819 11:46:59.890112   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 22d8a4cc2e5d"
	I0819 11:46:59.904542   19417 logs.go:123] Gathering logs for coredns [b4aa030d5e41] ...
	I0819 11:46:59.904553   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4aa030d5e41"
	I0819 11:46:59.920266   19417 logs.go:123] Gathering logs for kube-proxy [f1268c4fc5da] ...
	I0819 11:46:59.920277   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f1268c4fc5da"
	I0819 11:46:59.931511   19417 logs.go:123] Gathering logs for container status ...
	I0819 11:46:59.931523   19417 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 11:46:59.944381   19417 logs.go:123] Gathering logs for kubelet ...
	I0819 11:46:59.944394   19417 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 11:46:59.979790   19417 logs.go:123] Gathering logs for kube-scheduler [098a5dcc915e] ...
	I0819 11:46:59.979798   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 098a5dcc915e"
	I0819 11:46:59.995093   19417 logs.go:123] Gathering logs for kube-controller-manager [2b6fc57c9dd4] ...
	I0819 11:46:59.995106   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2b6fc57c9dd4"
	I0819 11:47:00.012439   19417 logs.go:123] Gathering logs for kube-controller-manager [2f08f9ae48fe] ...
	I0819 11:47:00.012450   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f08f9ae48fe"
	I0819 11:47:00.023854   19417 logs.go:123] Gathering logs for dmesg ...
	I0819 11:47:00.023868   19417 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 11:47:00.028119   19417 logs.go:123] Gathering logs for kube-apiserver [5e36e77adae3] ...
	I0819 11:47:00.028128   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e36e77adae3"
	I0819 11:47:00.042692   19417 logs.go:123] Gathering logs for storage-provisioner [a5053756fa3b] ...
	I0819 11:47:00.042704   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5053756fa3b"
	I0819 11:47:00.054356   19417 logs.go:123] Gathering logs for Docker ...
	I0819 11:47:00.054370   19417 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 11:47:00.080267   19417 logs.go:123] Gathering logs for kube-apiserver [c856d6e29342] ...
	I0819 11:47:00.080277   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c856d6e29342"
	I0819 11:47:00.093226   19417 logs.go:123] Gathering logs for etcd [c6e42f0936b0] ...
	I0819 11:47:00.093240   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c6e42f0936b0"
	I0819 11:47:00.107471   19417 logs.go:123] Gathering logs for kube-scheduler [1b1a6c62fc93] ...
	I0819 11:47:00.107485   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b1a6c62fc93"
	I0819 11:47:00.118683   19417 logs.go:123] Gathering logs for storage-provisioner [c34ae12a902c] ...
	I0819 11:47:00.118693   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c34ae12a902c"
	I0819 11:47:02.630818   19417 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:47:07.631053   19417 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
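The two lines above are the pattern that repeats through this whole failure: a GET against the apiserver's /healthz endpoint that never returns headers within the client timeout. A minimal Go sketch of such a probe, for orientation only (the 5-second timeout is an assumption read off the log's timestamps, and the insecure transport is an illustrative stand-in; minikube's real probe lives in api_server.go and is configured differently):

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// probeHealthz issues one GET against the apiserver /healthz endpoint.
// A "context deadline exceeded (Client.Timeout exceeded while awaiting
// headers)" error, as seen in the log, means no response headers arrived
// before the client timeout elapsed.
func probeHealthz(url string) error {
	client := &http.Client{
		Timeout: 5 * time.Second, // assumption: matches the ~5 s gap in the log
		Transport: &http.Transport{
			// The guest apiserver presents a cluster-internal cert; skip
			// verification here purely for illustration.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get(url)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("unhealthy: %d %q", resp.StatusCode, body)
	}
	return nil
}

func main() {
	if err := probeHealthz("https://10.0.2.15:8443/healthz"); err != nil {
		fmt.Println("stopped:", err)
	}
}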
	I0819 11:47:07.631148   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 11:47:07.643721   19417 logs.go:276] 2 containers: [5e36e77adae3 c856d6e29342]
	I0819 11:47:07.643796   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 11:47:07.655809   19417 logs.go:276] 2 containers: [22d8a4cc2e5d c6e42f0936b0]
	I0819 11:47:07.655882   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 11:47:07.670715   19417 logs.go:276] 1 containers: [b4aa030d5e41]
	I0819 11:47:07.670792   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 11:47:07.682772   19417 logs.go:276] 2 containers: [1b1a6c62fc93 098a5dcc915e]
	I0819 11:47:07.682847   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 11:47:07.695959   19417 logs.go:276] 1 containers: [f1268c4fc5da]
	I0819 11:47:07.696032   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 11:47:07.711210   19417 logs.go:276] 2 containers: [2b6fc57c9dd4 2f08f9ae48fe]
	I0819 11:47:07.711281   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 11:47:07.725059   19417 logs.go:276] 0 containers: []
	W0819 11:47:07.725071   19417 logs.go:278] No container was found matching "kindnet"
	I0819 11:47:07.725129   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 11:47:07.737832   19417 logs.go:276] 2 containers: [a5053756fa3b c34ae12a902c]
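Each failed probe is followed by the enumeration sweep above: one "docker ps -a --filter=name=k8s_<component> --format={{.ID}}" per control-plane component, producing the "N containers: [...]" summaries. A self-contained sketch of that sweep run against a local Docker daemon (an assumption; minikube issues the same command inside the guest through its ssh_runner):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerIDs lists all containers (running or exited) whose name matches
// the kubelet naming convention k8s_<component>, returning their short IDs,
// mirroring the "docker ps -a --filter=name=..." calls in the log.
func containerIDs(component string) ([]string, error) {
	out, err := exec.Command("docker", "ps", "-a",
		"--filter", "name=k8s_"+component,
		"--format", "{{.ID}}").Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	for _, c := range []string{"kube-apiserver", "etcd", "coredns",
		"kube-scheduler", "kube-proxy", "kube-controller-manager",
		"kindnet", "storage-provisioner"} {
		ids, err := containerIDs(c)
		if err != nil {
			fmt.Println(c, "error:", err)
			continue
		}
		fmt.Printf("%d containers: %v\n", len(ids), ids)
	}
}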
	I0819 11:47:07.737854   19417 logs.go:123] Gathering logs for coredns [b4aa030d5e41] ...
	I0819 11:47:07.737859   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4aa030d5e41"
	I0819 11:47:07.751054   19417 logs.go:123] Gathering logs for kube-scheduler [1b1a6c62fc93] ...
	I0819 11:47:07.751066   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b1a6c62fc93"
	I0819 11:47:07.764273   19417 logs.go:123] Gathering logs for kube-scheduler [098a5dcc915e] ...
	I0819 11:47:07.764284   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 098a5dcc915e"
	I0819 11:47:07.788030   19417 logs.go:123] Gathering logs for kube-proxy [f1268c4fc5da] ...
	I0819 11:47:07.788048   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f1268c4fc5da"
	I0819 11:47:07.801606   19417 logs.go:123] Gathering logs for Docker ...
	I0819 11:47:07.801618   19417 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 11:47:07.827846   19417 logs.go:123] Gathering logs for storage-provisioner [a5053756fa3b] ...
	I0819 11:47:07.827865   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5053756fa3b"
	I0819 11:47:07.843215   19417 logs.go:123] Gathering logs for kubelet ...
	I0819 11:47:07.843230   19417 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 11:47:07.881268   19417 logs.go:123] Gathering logs for kube-apiserver [5e36e77adae3] ...
	I0819 11:47:07.881284   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e36e77adae3"
	I0819 11:47:07.896328   19417 logs.go:123] Gathering logs for kube-apiserver [c856d6e29342] ...
	I0819 11:47:07.896341   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c856d6e29342"
	I0819 11:47:07.909873   19417 logs.go:123] Gathering logs for etcd [22d8a4cc2e5d] ...
	I0819 11:47:07.909886   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 22d8a4cc2e5d"
	I0819 11:47:07.925787   19417 logs.go:123] Gathering logs for kube-controller-manager [2f08f9ae48fe] ...
	I0819 11:47:07.925801   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f08f9ae48fe"
	I0819 11:47:07.938529   19417 logs.go:123] Gathering logs for describe nodes ...
	I0819 11:47:07.938541   19417 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 11:47:07.977516   19417 logs.go:123] Gathering logs for etcd [c6e42f0936b0] ...
	I0819 11:47:07.977527   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c6e42f0936b0"
	I0819 11:47:07.997913   19417 logs.go:123] Gathering logs for container status ...
	I0819 11:47:07.997926   19417 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 11:47:08.011760   19417 logs.go:123] Gathering logs for dmesg ...
	I0819 11:47:08.011770   19417 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 11:47:08.016359   19417 logs.go:123] Gathering logs for kube-controller-manager [2b6fc57c9dd4] ...
	I0819 11:47:08.016370   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2b6fc57c9dd4"
	I0819 11:47:08.035533   19417 logs.go:123] Gathering logs for storage-provisioner [c34ae12a902c] ...
	I0819 11:47:08.035549   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c34ae12a902c"
	I0819 11:47:10.550863   19417 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:47:15.553131   19417 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:47:15.553271   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 11:47:15.565923   19417 logs.go:276] 2 containers: [5e36e77adae3 c856d6e29342]
	I0819 11:47:15.565998   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 11:47:15.577052   19417 logs.go:276] 2 containers: [22d8a4cc2e5d c6e42f0936b0]
	I0819 11:47:15.577130   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 11:47:15.587809   19417 logs.go:276] 1 containers: [b4aa030d5e41]
	I0819 11:47:15.587900   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 11:47:15.598439   19417 logs.go:276] 2 containers: [1b1a6c62fc93 098a5dcc915e]
	I0819 11:47:15.598514   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 11:47:15.609810   19417 logs.go:276] 1 containers: [f1268c4fc5da]
	I0819 11:47:15.609882   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 11:47:15.621562   19417 logs.go:276] 2 containers: [2b6fc57c9dd4 2f08f9ae48fe]
	I0819 11:47:15.621635   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 11:47:15.632983   19417 logs.go:276] 0 containers: []
	W0819 11:47:15.632995   19417 logs.go:278] No container was found matching "kindnet"
	I0819 11:47:15.633058   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 11:47:15.647565   19417 logs.go:276] 2 containers: [a5053756fa3b c34ae12a902c]
	I0819 11:47:15.647586   19417 logs.go:123] Gathering logs for kube-apiserver [5e36e77adae3] ...
	I0819 11:47:15.647593   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e36e77adae3"
	I0819 11:47:15.661686   19417 logs.go:123] Gathering logs for coredns [b4aa030d5e41] ...
	I0819 11:47:15.661698   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4aa030d5e41"
	I0819 11:47:15.675531   19417 logs.go:123] Gathering logs for kube-proxy [f1268c4fc5da] ...
	I0819 11:47:15.675544   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f1268c4fc5da"
	I0819 11:47:15.691269   19417 logs.go:123] Gathering logs for storage-provisioner [c34ae12a902c] ...
	I0819 11:47:15.691282   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c34ae12a902c"
	I0819 11:47:15.702973   19417 logs.go:123] Gathering logs for kubelet ...
	I0819 11:47:15.702987   19417 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 11:47:15.742113   19417 logs.go:123] Gathering logs for describe nodes ...
	I0819 11:47:15.742131   19417 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 11:47:15.781955   19417 logs.go:123] Gathering logs for kube-scheduler [098a5dcc915e] ...
	I0819 11:47:15.781967   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 098a5dcc915e"
	I0819 11:47:15.798430   19417 logs.go:123] Gathering logs for kube-controller-manager [2b6fc57c9dd4] ...
	I0819 11:47:15.798441   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2b6fc57c9dd4"
	I0819 11:47:15.818293   19417 logs.go:123] Gathering logs for storage-provisioner [a5053756fa3b] ...
	I0819 11:47:15.818303   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5053756fa3b"
	I0819 11:47:15.830369   19417 logs.go:123] Gathering logs for container status ...
	I0819 11:47:15.830382   19417 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 11:47:15.842375   19417 logs.go:123] Gathering logs for dmesg ...
	I0819 11:47:15.842386   19417 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 11:47:15.847025   19417 logs.go:123] Gathering logs for etcd [c6e42f0936b0] ...
	I0819 11:47:15.847032   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c6e42f0936b0"
	I0819 11:47:15.861844   19417 logs.go:123] Gathering logs for Docker ...
	I0819 11:47:15.861858   19417 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 11:47:15.886635   19417 logs.go:123] Gathering logs for kube-apiserver [c856d6e29342] ...
	I0819 11:47:15.886642   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c856d6e29342"
	I0819 11:47:15.899168   19417 logs.go:123] Gathering logs for kube-scheduler [1b1a6c62fc93] ...
	I0819 11:47:15.899178   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b1a6c62fc93"
	I0819 11:47:15.913187   19417 logs.go:123] Gathering logs for etcd [22d8a4cc2e5d] ...
	I0819 11:47:15.913196   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 22d8a4cc2e5d"
	I0819 11:47:15.937702   19417 logs.go:123] Gathering logs for kube-controller-manager [2f08f9ae48fe] ...
	I0819 11:47:15.937712   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f08f9ae48fe"
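Every "Gathering logs for X [id]" pair above maps to a "docker logs --tail 400 <id>" call, while the kubelet and Docker entries use journalctl against systemd units instead. A small sketch of both collectors (the container ID in main is one taken from the log; running this assumes direct access to the node's Docker daemon and journal):

package main

import (
	"fmt"
	"os/exec"
)

// tailContainer mirrors the "docker logs --tail 400 <id>" calls above,
// returning the last 400 lines of a container's combined output.
func tailContainer(id string) (string, error) {
	out, err := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
	return string(out), err
}

// tailUnits mirrors the journalctl calls used for systemd units
// (e.g. "journalctl -u docker -u cri-docker -n 400").
func tailUnits(units ...string) (string, error) {
	args := []string{"journalctl"}
	for _, u := range units {
		args = append(args, "-u", u)
	}
	args = append(args, "-n", "400")
	out, err := exec.Command("sudo", args...).CombinedOutput()
	return string(out), err
}

func main() {
	if log, err := tailContainer("5e36e77adae3"); err == nil {
		fmt.Print(log)
	}
	if log, err := tailUnits("docker", "cri-docker"); err == nil {
		fmt.Print(log)
	}
}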
	I0819 11:47:18.450523   19417 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:47:23.453206   19417 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:47:23.453346   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 11:47:23.468321   19417 logs.go:276] 2 containers: [5e36e77adae3 c856d6e29342]
	I0819 11:47:23.468400   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 11:47:23.479196   19417 logs.go:276] 2 containers: [22d8a4cc2e5d c6e42f0936b0]
	I0819 11:47:23.479262   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 11:47:23.490091   19417 logs.go:276] 1 containers: [b4aa030d5e41]
	I0819 11:47:23.490160   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 11:47:23.505224   19417 logs.go:276] 2 containers: [1b1a6c62fc93 098a5dcc915e]
	I0819 11:47:23.505301   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 11:47:23.515812   19417 logs.go:276] 1 containers: [f1268c4fc5da]
	I0819 11:47:23.515878   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 11:47:23.526784   19417 logs.go:276] 2 containers: [2b6fc57c9dd4 2f08f9ae48fe]
	I0819 11:47:23.526857   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 11:47:23.537462   19417 logs.go:276] 0 containers: []
	W0819 11:47:23.537476   19417 logs.go:278] No container was found matching "kindnet"
	I0819 11:47:23.537534   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 11:47:23.548148   19417 logs.go:276] 2 containers: [a5053756fa3b c34ae12a902c]
	I0819 11:47:23.548165   19417 logs.go:123] Gathering logs for etcd [22d8a4cc2e5d] ...
	I0819 11:47:23.548171   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 22d8a4cc2e5d"
	I0819 11:47:23.563303   19417 logs.go:123] Gathering logs for kube-scheduler [098a5dcc915e] ...
	I0819 11:47:23.563314   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 098a5dcc915e"
	I0819 11:47:23.578537   19417 logs.go:123] Gathering logs for kube-controller-manager [2b6fc57c9dd4] ...
	I0819 11:47:23.578546   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2b6fc57c9dd4"
	I0819 11:47:23.599485   19417 logs.go:123] Gathering logs for kube-controller-manager [2f08f9ae48fe] ...
	I0819 11:47:23.599496   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f08f9ae48fe"
	I0819 11:47:23.611302   19417 logs.go:123] Gathering logs for describe nodes ...
	I0819 11:47:23.611313   19417 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 11:47:23.652923   19417 logs.go:123] Gathering logs for kube-apiserver [5e36e77adae3] ...
	I0819 11:47:23.652938   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e36e77adae3"
	I0819 11:47:23.667787   19417 logs.go:123] Gathering logs for coredns [b4aa030d5e41] ...
	I0819 11:47:23.667798   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4aa030d5e41"
	I0819 11:47:23.679472   19417 logs.go:123] Gathering logs for kube-scheduler [1b1a6c62fc93] ...
	I0819 11:47:23.679484   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b1a6c62fc93"
	I0819 11:47:23.691481   19417 logs.go:123] Gathering logs for storage-provisioner [a5053756fa3b] ...
	I0819 11:47:23.691494   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5053756fa3b"
	I0819 11:47:23.703088   19417 logs.go:123] Gathering logs for dmesg ...
	I0819 11:47:23.703101   19417 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 11:47:23.707346   19417 logs.go:123] Gathering logs for container status ...
	I0819 11:47:23.707355   19417 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 11:47:23.718678   19417 logs.go:123] Gathering logs for storage-provisioner [c34ae12a902c] ...
	I0819 11:47:23.718689   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c34ae12a902c"
	I0819 11:47:23.729953   19417 logs.go:123] Gathering logs for kube-apiserver [c856d6e29342] ...
	I0819 11:47:23.729964   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c856d6e29342"
	I0819 11:47:23.743213   19417 logs.go:123] Gathering logs for etcd [c6e42f0936b0] ...
	I0819 11:47:23.743227   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c6e42f0936b0"
	I0819 11:47:23.757732   19417 logs.go:123] Gathering logs for kube-proxy [f1268c4fc5da] ...
	I0819 11:47:23.757742   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f1268c4fc5da"
	I0819 11:47:23.770412   19417 logs.go:123] Gathering logs for Docker ...
	I0819 11:47:23.770423   19417 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 11:47:23.795336   19417 logs.go:123] Gathering logs for kubelet ...
	I0819 11:47:23.795344   19417 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 11:47:26.333724   19417 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:47:31.336158   19417 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:47:31.336600   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 11:47:31.374476   19417 logs.go:276] 2 containers: [5e36e77adae3 c856d6e29342]
	I0819 11:47:31.374614   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 11:47:31.397785   19417 logs.go:276] 2 containers: [22d8a4cc2e5d c6e42f0936b0]
	I0819 11:47:31.397888   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 11:47:31.413123   19417 logs.go:276] 1 containers: [b4aa030d5e41]
	I0819 11:47:31.413193   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 11:47:31.425667   19417 logs.go:276] 2 containers: [1b1a6c62fc93 098a5dcc915e]
	I0819 11:47:31.425744   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 11:47:31.436934   19417 logs.go:276] 1 containers: [f1268c4fc5da]
	I0819 11:47:31.437007   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 11:47:31.447617   19417 logs.go:276] 2 containers: [2b6fc57c9dd4 2f08f9ae48fe]
	I0819 11:47:31.447690   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 11:47:31.457905   19417 logs.go:276] 0 containers: []
	W0819 11:47:31.457916   19417 logs.go:278] No container was found matching "kindnet"
	I0819 11:47:31.457974   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 11:47:31.468328   19417 logs.go:276] 2 containers: [a5053756fa3b c34ae12a902c]
	I0819 11:47:31.468347   19417 logs.go:123] Gathering logs for etcd [22d8a4cc2e5d] ...
	I0819 11:47:31.468352   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 22d8a4cc2e5d"
	I0819 11:47:31.487655   19417 logs.go:123] Gathering logs for kube-scheduler [098a5dcc915e] ...
	I0819 11:47:31.487666   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 098a5dcc915e"
	I0819 11:47:31.508936   19417 logs.go:123] Gathering logs for kube-proxy [f1268c4fc5da] ...
	I0819 11:47:31.508950   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f1268c4fc5da"
	I0819 11:47:31.521222   19417 logs.go:123] Gathering logs for kube-controller-manager [2b6fc57c9dd4] ...
	I0819 11:47:31.521232   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2b6fc57c9dd4"
	I0819 11:47:31.539226   19417 logs.go:123] Gathering logs for kube-controller-manager [2f08f9ae48fe] ...
	I0819 11:47:31.539237   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f08f9ae48fe"
	I0819 11:47:31.550758   19417 logs.go:123] Gathering logs for kubelet ...
	I0819 11:47:31.550770   19417 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 11:47:31.587954   19417 logs.go:123] Gathering logs for dmesg ...
	I0819 11:47:31.587964   19417 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 11:47:31.592276   19417 logs.go:123] Gathering logs for etcd [c6e42f0936b0] ...
	I0819 11:47:31.592284   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c6e42f0936b0"
	I0819 11:47:31.606682   19417 logs.go:123] Gathering logs for kube-scheduler [1b1a6c62fc93] ...
	I0819 11:47:31.606693   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b1a6c62fc93"
	I0819 11:47:31.618394   19417 logs.go:123] Gathering logs for storage-provisioner [a5053756fa3b] ...
	I0819 11:47:31.618405   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5053756fa3b"
	I0819 11:47:31.630506   19417 logs.go:123] Gathering logs for storage-provisioner [c34ae12a902c] ...
	I0819 11:47:31.630516   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c34ae12a902c"
	I0819 11:47:31.642012   19417 logs.go:123] Gathering logs for Docker ...
	I0819 11:47:31.642024   19417 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 11:47:31.667172   19417 logs.go:123] Gathering logs for describe nodes ...
	I0819 11:47:31.667179   19417 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 11:47:31.702625   19417 logs.go:123] Gathering logs for kube-apiserver [5e36e77adae3] ...
	I0819 11:47:31.702639   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e36e77adae3"
	I0819 11:47:31.716754   19417 logs.go:123] Gathering logs for kube-apiserver [c856d6e29342] ...
	I0819 11:47:31.716765   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c856d6e29342"
	I0819 11:47:31.729258   19417 logs.go:123] Gathering logs for container status ...
	I0819 11:47:31.729270   19417 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 11:47:31.741187   19417 logs.go:123] Gathering logs for coredns [b4aa030d5e41] ...
	I0819 11:47:31.741199   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4aa030d5e41"
	I0819 11:47:34.256134   19417 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:47:39.258935   19417 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:47:39.259277   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 11:47:39.295898   19417 logs.go:276] 2 containers: [5e36e77adae3 c856d6e29342]
	I0819 11:47:39.296017   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 11:47:39.314841   19417 logs.go:276] 2 containers: [22d8a4cc2e5d c6e42f0936b0]
	I0819 11:47:39.314918   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 11:47:39.328253   19417 logs.go:276] 1 containers: [b4aa030d5e41]
	I0819 11:47:39.328328   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 11:47:39.340118   19417 logs.go:276] 2 containers: [1b1a6c62fc93 098a5dcc915e]
	I0819 11:47:39.340187   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 11:47:39.351499   19417 logs.go:276] 1 containers: [f1268c4fc5da]
	I0819 11:47:39.351560   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 11:47:39.362593   19417 logs.go:276] 2 containers: [2b6fc57c9dd4 2f08f9ae48fe]
	I0819 11:47:39.362649   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 11:47:39.377109   19417 logs.go:276] 0 containers: []
	W0819 11:47:39.377124   19417 logs.go:278] No container was found matching "kindnet"
	I0819 11:47:39.377177   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 11:47:39.388862   19417 logs.go:276] 2 containers: [a5053756fa3b c34ae12a902c]
	I0819 11:47:39.388878   19417 logs.go:123] Gathering logs for kube-apiserver [5e36e77adae3] ...
	I0819 11:47:39.388883   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e36e77adae3"
	I0819 11:47:39.403229   19417 logs.go:123] Gathering logs for etcd [c6e42f0936b0] ...
	I0819 11:47:39.403238   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c6e42f0936b0"
	I0819 11:47:39.417883   19417 logs.go:123] Gathering logs for kube-proxy [f1268c4fc5da] ...
	I0819 11:47:39.417893   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f1268c4fc5da"
	I0819 11:47:39.429850   19417 logs.go:123] Gathering logs for storage-provisioner [c34ae12a902c] ...
	I0819 11:47:39.429862   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c34ae12a902c"
	I0819 11:47:39.441384   19417 logs.go:123] Gathering logs for Docker ...
	I0819 11:47:39.441395   19417 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 11:47:39.465427   19417 logs.go:123] Gathering logs for kubelet ...
	I0819 11:47:39.465438   19417 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 11:47:39.500964   19417 logs.go:123] Gathering logs for dmesg ...
	I0819 11:47:39.500973   19417 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 11:47:39.505537   19417 logs.go:123] Gathering logs for describe nodes ...
	I0819 11:47:39.505547   19417 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 11:47:39.539437   19417 logs.go:123] Gathering logs for etcd [22d8a4cc2e5d] ...
	I0819 11:47:39.539453   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 22d8a4cc2e5d"
	I0819 11:47:39.564271   19417 logs.go:123] Gathering logs for coredns [b4aa030d5e41] ...
	I0819 11:47:39.564282   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4aa030d5e41"
	I0819 11:47:39.575406   19417 logs.go:123] Gathering logs for kube-scheduler [1b1a6c62fc93] ...
	I0819 11:47:39.575421   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b1a6c62fc93"
	I0819 11:47:39.586822   19417 logs.go:123] Gathering logs for container status ...
	I0819 11:47:39.586833   19417 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 11:47:39.599470   19417 logs.go:123] Gathering logs for kube-scheduler [098a5dcc915e] ...
	I0819 11:47:39.599482   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 098a5dcc915e"
	I0819 11:47:39.623957   19417 logs.go:123] Gathering logs for kube-controller-manager [2b6fc57c9dd4] ...
	I0819 11:47:39.623968   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2b6fc57c9dd4"
	I0819 11:47:39.642559   19417 logs.go:123] Gathering logs for kube-controller-manager [2f08f9ae48fe] ...
	I0819 11:47:39.642573   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f08f9ae48fe"
	I0819 11:47:39.653890   19417 logs.go:123] Gathering logs for kube-apiserver [c856d6e29342] ...
	I0819 11:47:39.653902   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c856d6e29342"
	I0819 11:47:39.668086   19417 logs.go:123] Gathering logs for storage-provisioner [a5053756fa3b] ...
	I0819 11:47:39.668097   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5053756fa3b"
	I0819 11:47:42.184972   19417 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:47:47.187553   19417 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:47:47.187779   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 11:47:47.206826   19417 logs.go:276] 2 containers: [5e36e77adae3 c856d6e29342]
	I0819 11:47:47.206906   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 11:47:47.220105   19417 logs.go:276] 2 containers: [22d8a4cc2e5d c6e42f0936b0]
	I0819 11:47:47.220187   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 11:47:47.240588   19417 logs.go:276] 1 containers: [b4aa030d5e41]
	I0819 11:47:47.240658   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 11:47:47.251340   19417 logs.go:276] 2 containers: [1b1a6c62fc93 098a5dcc915e]
	I0819 11:47:47.251405   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 11:47:47.263426   19417 logs.go:276] 1 containers: [f1268c4fc5da]
	I0819 11:47:47.263493   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 11:47:47.274490   19417 logs.go:276] 2 containers: [2b6fc57c9dd4 2f08f9ae48fe]
	I0819 11:47:47.274558   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 11:47:47.284619   19417 logs.go:276] 0 containers: []
	W0819 11:47:47.284630   19417 logs.go:278] No container was found matching "kindnet"
	I0819 11:47:47.284679   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 11:47:47.295705   19417 logs.go:276] 2 containers: [a5053756fa3b c34ae12a902c]
	I0819 11:47:47.295726   19417 logs.go:123] Gathering logs for kubelet ...
	I0819 11:47:47.295732   19417 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 11:47:47.332530   19417 logs.go:123] Gathering logs for etcd [22d8a4cc2e5d] ...
	I0819 11:47:47.332537   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 22d8a4cc2e5d"
	I0819 11:47:47.346762   19417 logs.go:123] Gathering logs for kube-proxy [f1268c4fc5da] ...
	I0819 11:47:47.346773   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f1268c4fc5da"
	I0819 11:47:47.358543   19417 logs.go:123] Gathering logs for dmesg ...
	I0819 11:47:47.358554   19417 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 11:47:47.363464   19417 logs.go:123] Gathering logs for describe nodes ...
	I0819 11:47:47.363470   19417 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 11:47:47.403328   19417 logs.go:123] Gathering logs for kube-controller-manager [2f08f9ae48fe] ...
	I0819 11:47:47.403339   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f08f9ae48fe"
	I0819 11:47:47.420527   19417 logs.go:123] Gathering logs for container status ...
	I0819 11:47:47.420538   19417 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 11:47:47.433127   19417 logs.go:123] Gathering logs for etcd [c6e42f0936b0] ...
	I0819 11:47:47.433139   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c6e42f0936b0"
	I0819 11:47:47.447413   19417 logs.go:123] Gathering logs for storage-provisioner [c34ae12a902c] ...
	I0819 11:47:47.447424   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c34ae12a902c"
	I0819 11:47:47.459059   19417 logs.go:123] Gathering logs for Docker ...
	I0819 11:47:47.459071   19417 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 11:47:47.484314   19417 logs.go:123] Gathering logs for kube-scheduler [098a5dcc915e] ...
	I0819 11:47:47.484322   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 098a5dcc915e"
	I0819 11:47:47.503093   19417 logs.go:123] Gathering logs for kube-controller-manager [2b6fc57c9dd4] ...
	I0819 11:47:47.503103   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2b6fc57c9dd4"
	I0819 11:47:47.528482   19417 logs.go:123] Gathering logs for storage-provisioner [a5053756fa3b] ...
	I0819 11:47:47.528493   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5053756fa3b"
	I0819 11:47:47.540069   19417 logs.go:123] Gathering logs for kube-apiserver [5e36e77adae3] ...
	I0819 11:47:47.540081   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e36e77adae3"
	I0819 11:47:47.558052   19417 logs.go:123] Gathering logs for kube-apiserver [c856d6e29342] ...
	I0819 11:47:47.558065   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c856d6e29342"
	I0819 11:47:47.575932   19417 logs.go:123] Gathering logs for coredns [b4aa030d5e41] ...
	I0819 11:47:47.575945   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4aa030d5e41"
	I0819 11:47:47.587139   19417 logs.go:123] Gathering logs for kube-scheduler [1b1a6c62fc93] ...
	I0819 11:47:47.587150   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b1a6c62fc93"
	I0819 11:47:50.099151   19417 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:47:55.101349   19417 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:47:55.101583   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 11:47:55.128382   19417 logs.go:276] 2 containers: [5e36e77adae3 c856d6e29342]
	I0819 11:47:55.128462   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 11:47:55.141741   19417 logs.go:276] 2 containers: [22d8a4cc2e5d c6e42f0936b0]
	I0819 11:47:55.141818   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 11:47:55.151958   19417 logs.go:276] 1 containers: [b4aa030d5e41]
	I0819 11:47:55.152025   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 11:47:55.162262   19417 logs.go:276] 2 containers: [1b1a6c62fc93 098a5dcc915e]
	I0819 11:47:55.162336   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 11:47:55.172479   19417 logs.go:276] 1 containers: [f1268c4fc5da]
	I0819 11:47:55.172544   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 11:47:55.184153   19417 logs.go:276] 2 containers: [2b6fc57c9dd4 2f08f9ae48fe]
	I0819 11:47:55.184223   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 11:47:55.194119   19417 logs.go:276] 0 containers: []
	W0819 11:47:55.194128   19417 logs.go:278] No container was found matching "kindnet"
	I0819 11:47:55.194177   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 11:47:55.204237   19417 logs.go:276] 2 containers: [a5053756fa3b c34ae12a902c]
	I0819 11:47:55.204258   19417 logs.go:123] Gathering logs for etcd [c6e42f0936b0] ...
	I0819 11:47:55.204263   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c6e42f0936b0"
	I0819 11:47:55.221670   19417 logs.go:123] Gathering logs for container status ...
	I0819 11:47:55.221683   19417 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 11:47:55.233944   19417 logs.go:123] Gathering logs for kubelet ...
	I0819 11:47:55.233957   19417 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 11:47:55.269149   19417 logs.go:123] Gathering logs for kube-apiserver [5e36e77adae3] ...
	I0819 11:47:55.269156   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e36e77adae3"
	I0819 11:47:55.283118   19417 logs.go:123] Gathering logs for storage-provisioner [a5053756fa3b] ...
	I0819 11:47:55.283130   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5053756fa3b"
	I0819 11:47:55.294283   19417 logs.go:123] Gathering logs for Docker ...
	I0819 11:47:55.294293   19417 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 11:47:55.317205   19417 logs.go:123] Gathering logs for dmesg ...
	I0819 11:47:55.317213   19417 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 11:47:55.321557   19417 logs.go:123] Gathering logs for coredns [b4aa030d5e41] ...
	I0819 11:47:55.321562   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4aa030d5e41"
	I0819 11:47:55.333973   19417 logs.go:123] Gathering logs for kube-scheduler [1b1a6c62fc93] ...
	I0819 11:47:55.333985   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b1a6c62fc93"
	I0819 11:47:55.345609   19417 logs.go:123] Gathering logs for kube-controller-manager [2b6fc57c9dd4] ...
	I0819 11:47:55.345619   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2b6fc57c9dd4"
	I0819 11:47:55.362616   19417 logs.go:123] Gathering logs for kube-controller-manager [2f08f9ae48fe] ...
	I0819 11:47:55.362625   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f08f9ae48fe"
	I0819 11:47:55.375293   19417 logs.go:123] Gathering logs for describe nodes ...
	I0819 11:47:55.375306   19417 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 11:47:55.423259   19417 logs.go:123] Gathering logs for kube-apiserver [c856d6e29342] ...
	I0819 11:47:55.423273   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c856d6e29342"
	I0819 11:47:55.436751   19417 logs.go:123] Gathering logs for etcd [22d8a4cc2e5d] ...
	I0819 11:47:55.436765   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 22d8a4cc2e5d"
	I0819 11:47:55.450630   19417 logs.go:123] Gathering logs for kube-scheduler [098a5dcc915e] ...
	I0819 11:47:55.450641   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 098a5dcc915e"
	I0819 11:47:55.465925   19417 logs.go:123] Gathering logs for kube-proxy [f1268c4fc5da] ...
	I0819 11:47:55.465938   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f1268c4fc5da"
	I0819 11:47:55.478162   19417 logs.go:123] Gathering logs for storage-provisioner [c34ae12a902c] ...
	I0819 11:47:55.478175   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c34ae12a902c"
	I0819 11:47:57.992521   19417 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:48:02.994348   19417 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:48:02.994455   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 11:48:03.011419   19417 logs.go:276] 2 containers: [5e36e77adae3 c856d6e29342]
	I0819 11:48:03.011488   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 11:48:03.024189   19417 logs.go:276] 2 containers: [22d8a4cc2e5d c6e42f0936b0]
	I0819 11:48:03.024275   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 11:48:03.035893   19417 logs.go:276] 1 containers: [b4aa030d5e41]
	I0819 11:48:03.035968   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 11:48:03.048718   19417 logs.go:276] 2 containers: [1b1a6c62fc93 098a5dcc915e]
	I0819 11:48:03.048793   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 11:48:03.059872   19417 logs.go:276] 1 containers: [f1268c4fc5da]
	I0819 11:48:03.059944   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 11:48:03.072017   19417 logs.go:276] 2 containers: [2b6fc57c9dd4 2f08f9ae48fe]
	I0819 11:48:03.072088   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 11:48:03.084216   19417 logs.go:276] 0 containers: []
	W0819 11:48:03.084229   19417 logs.go:278] No container was found matching "kindnet"
	I0819 11:48:03.084293   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 11:48:03.101179   19417 logs.go:276] 2 containers: [a5053756fa3b c34ae12a902c]
	I0819 11:48:03.101201   19417 logs.go:123] Gathering logs for kubelet ...
	I0819 11:48:03.101207   19417 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 11:48:03.139733   19417 logs.go:123] Gathering logs for kube-controller-manager [2f08f9ae48fe] ...
	I0819 11:48:03.139747   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f08f9ae48fe"
	I0819 11:48:03.152505   19417 logs.go:123] Gathering logs for etcd [c6e42f0936b0] ...
	I0819 11:48:03.152516   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c6e42f0936b0"
	I0819 11:48:03.168486   19417 logs.go:123] Gathering logs for kube-scheduler [1b1a6c62fc93] ...
	I0819 11:48:03.168500   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b1a6c62fc93"
	I0819 11:48:03.182332   19417 logs.go:123] Gathering logs for kube-proxy [f1268c4fc5da] ...
	I0819 11:48:03.182345   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f1268c4fc5da"
	I0819 11:48:03.196484   19417 logs.go:123] Gathering logs for dmesg ...
	I0819 11:48:03.196496   19417 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 11:48:03.201571   19417 logs.go:123] Gathering logs for describe nodes ...
	I0819 11:48:03.201581   19417 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 11:48:03.243486   19417 logs.go:123] Gathering logs for etcd [22d8a4cc2e5d] ...
	I0819 11:48:03.243497   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 22d8a4cc2e5d"
	I0819 11:48:03.259051   19417 logs.go:123] Gathering logs for coredns [b4aa030d5e41] ...
	I0819 11:48:03.259066   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4aa030d5e41"
	I0819 11:48:03.272030   19417 logs.go:123] Gathering logs for kube-scheduler [098a5dcc915e] ...
	I0819 11:48:03.272045   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 098a5dcc915e"
	I0819 11:48:03.294688   19417 logs.go:123] Gathering logs for kube-controller-manager [2b6fc57c9dd4] ...
	I0819 11:48:03.294701   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2b6fc57c9dd4"
	I0819 11:48:03.315705   19417 logs.go:123] Gathering logs for storage-provisioner [c34ae12a902c] ...
	I0819 11:48:03.315717   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c34ae12a902c"
	I0819 11:48:03.332141   19417 logs.go:123] Gathering logs for Docker ...
	I0819 11:48:03.332152   19417 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 11:48:03.359184   19417 logs.go:123] Gathering logs for container status ...
	I0819 11:48:03.359203   19417 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 11:48:03.372621   19417 logs.go:123] Gathering logs for kube-apiserver [5e36e77adae3] ...
	I0819 11:48:03.372635   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e36e77adae3"
	I0819 11:48:03.391183   19417 logs.go:123] Gathering logs for kube-apiserver [c856d6e29342] ...
	I0819 11:48:03.391198   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c856d6e29342"
	I0819 11:48:03.405574   19417 logs.go:123] Gathering logs for storage-provisioner [a5053756fa3b] ...
	I0819 11:48:03.405587   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5053756fa3b"
	I0819 11:48:05.920244   19417 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:48:10.922481   19417 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:48:10.922663   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 11:48:10.939605   19417 logs.go:276] 2 containers: [5e36e77adae3 c856d6e29342]
	I0819 11:48:10.939693   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 11:48:10.954408   19417 logs.go:276] 2 containers: [22d8a4cc2e5d c6e42f0936b0]
	I0819 11:48:10.954482   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 11:48:10.965859   19417 logs.go:276] 1 containers: [b4aa030d5e41]
	I0819 11:48:10.965926   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 11:48:10.977902   19417 logs.go:276] 2 containers: [1b1a6c62fc93 098a5dcc915e]
	I0819 11:48:10.977979   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 11:48:10.990351   19417 logs.go:276] 1 containers: [f1268c4fc5da]
	I0819 11:48:10.990424   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 11:48:11.001650   19417 logs.go:276] 2 containers: [2b6fc57c9dd4 2f08f9ae48fe]
	I0819 11:48:11.001727   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 11:48:11.012796   19417 logs.go:276] 0 containers: []
	W0819 11:48:11.012809   19417 logs.go:278] No container was found matching "kindnet"
	I0819 11:48:11.012875   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 11:48:11.025092   19417 logs.go:276] 2 containers: [a5053756fa3b c34ae12a902c]
	I0819 11:48:11.025111   19417 logs.go:123] Gathering logs for kubelet ...
	I0819 11:48:11.025117   19417 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 11:48:11.063321   19417 logs.go:123] Gathering logs for storage-provisioner [a5053756fa3b] ...
	I0819 11:48:11.063338   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5053756fa3b"
	I0819 11:48:11.080015   19417 logs.go:123] Gathering logs for storage-provisioner [c34ae12a902c] ...
	I0819 11:48:11.080030   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c34ae12a902c"
	I0819 11:48:11.093166   19417 logs.go:123] Gathering logs for Docker ...
	I0819 11:48:11.093180   19417 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 11:48:11.117880   19417 logs.go:123] Gathering logs for container status ...
	I0819 11:48:11.117899   19417 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 11:48:11.131373   19417 logs.go:123] Gathering logs for dmesg ...
	I0819 11:48:11.131387   19417 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 11:48:11.136516   19417 logs.go:123] Gathering logs for etcd [22d8a4cc2e5d] ...
	I0819 11:48:11.136527   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 22d8a4cc2e5d"
	I0819 11:48:11.151090   19417 logs.go:123] Gathering logs for kube-scheduler [098a5dcc915e] ...
	I0819 11:48:11.151102   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 098a5dcc915e"
	I0819 11:48:11.167569   19417 logs.go:123] Gathering logs for kube-controller-manager [2b6fc57c9dd4] ...
	I0819 11:48:11.167587   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2b6fc57c9dd4"
	I0819 11:48:11.195764   19417 logs.go:123] Gathering logs for kube-apiserver [5e36e77adae3] ...
	I0819 11:48:11.195779   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e36e77adae3"
	I0819 11:48:11.210357   19417 logs.go:123] Gathering logs for coredns [b4aa030d5e41] ...
	I0819 11:48:11.210370   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4aa030d5e41"
	I0819 11:48:11.224129   19417 logs.go:123] Gathering logs for kube-scheduler [1b1a6c62fc93] ...
	I0819 11:48:11.224141   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b1a6c62fc93"
	I0819 11:48:11.247152   19417 logs.go:123] Gathering logs for kube-proxy [f1268c4fc5da] ...
	I0819 11:48:11.247167   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f1268c4fc5da"
	I0819 11:48:11.259054   19417 logs.go:123] Gathering logs for describe nodes ...
	I0819 11:48:11.259066   19417 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 11:48:11.295466   19417 logs.go:123] Gathering logs for kube-apiserver [c856d6e29342] ...
	I0819 11:48:11.295479   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c856d6e29342"
	I0819 11:48:11.308730   19417 logs.go:123] Gathering logs for etcd [c6e42f0936b0] ...
	I0819 11:48:11.308742   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c6e42f0936b0"
	I0819 11:48:11.323887   19417 logs.go:123] Gathering logs for kube-controller-manager [2f08f9ae48fe] ...
	I0819 11:48:11.323904   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f08f9ae48fe"
	I0819 11:48:13.838598   19417 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:48:18.840159   19417 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:48:18.840349   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 11:48:18.852139   19417 logs.go:276] 2 containers: [5e36e77adae3 c856d6e29342]
	I0819 11:48:18.852210   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 11:48:18.862649   19417 logs.go:276] 2 containers: [22d8a4cc2e5d c6e42f0936b0]
	I0819 11:48:18.862715   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 11:48:18.872777   19417 logs.go:276] 1 containers: [b4aa030d5e41]
	I0819 11:48:18.872840   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 11:48:18.883000   19417 logs.go:276] 2 containers: [1b1a6c62fc93 098a5dcc915e]
	I0819 11:48:18.883069   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 11:48:18.893310   19417 logs.go:276] 1 containers: [f1268c4fc5da]
	I0819 11:48:18.893378   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 11:48:18.903956   19417 logs.go:276] 2 containers: [2b6fc57c9dd4 2f08f9ae48fe]
	I0819 11:48:18.904024   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 11:48:18.914614   19417 logs.go:276] 0 containers: []
	W0819 11:48:18.914625   19417 logs.go:278] No container was found matching "kindnet"
	I0819 11:48:18.914677   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 11:48:18.924672   19417 logs.go:276] 2 containers: [a5053756fa3b c34ae12a902c]
	I0819 11:48:18.924689   19417 logs.go:123] Gathering logs for describe nodes ...
	I0819 11:48:18.924695   19417 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 11:48:18.961393   19417 logs.go:123] Gathering logs for kube-controller-manager [2f08f9ae48fe] ...
	I0819 11:48:18.961407   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f08f9ae48fe"
	I0819 11:48:18.974674   19417 logs.go:123] Gathering logs for storage-provisioner [a5053756fa3b] ...
	I0819 11:48:18.974686   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5053756fa3b"
	I0819 11:48:18.989485   19417 logs.go:123] Gathering logs for Docker ...
	I0819 11:48:18.989497   19417 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 11:48:19.013241   19417 logs.go:123] Gathering logs for dmesg ...
	I0819 11:48:19.013250   19417 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 11:48:19.017832   19417 logs.go:123] Gathering logs for kube-apiserver [5e36e77adae3] ...
	I0819 11:48:19.017840   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e36e77adae3"
	I0819 11:48:19.032205   19417 logs.go:123] Gathering logs for kube-apiserver [c856d6e29342] ...
	I0819 11:48:19.032217   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c856d6e29342"
	I0819 11:48:19.045242   19417 logs.go:123] Gathering logs for etcd [22d8a4cc2e5d] ...
	I0819 11:48:19.045255   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 22d8a4cc2e5d"
	I0819 11:48:19.059177   19417 logs.go:123] Gathering logs for kube-controller-manager [2b6fc57c9dd4] ...
	I0819 11:48:19.059191   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2b6fc57c9dd4"
	I0819 11:48:19.075910   19417 logs.go:123] Gathering logs for etcd [c6e42f0936b0] ...
	I0819 11:48:19.075921   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c6e42f0936b0"
	I0819 11:48:19.094219   19417 logs.go:123] Gathering logs for kube-scheduler [098a5dcc915e] ...
	I0819 11:48:19.094228   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 098a5dcc915e"
	I0819 11:48:19.115399   19417 logs.go:123] Gathering logs for kube-proxy [f1268c4fc5da] ...
	I0819 11:48:19.115409   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f1268c4fc5da"
	I0819 11:48:19.127223   19417 logs.go:123] Gathering logs for storage-provisioner [c34ae12a902c] ...
	I0819 11:48:19.127238   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c34ae12a902c"
	I0819 11:48:19.138519   19417 logs.go:123] Gathering logs for container status ...
	I0819 11:48:19.138530   19417 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 11:48:19.150068   19417 logs.go:123] Gathering logs for kubelet ...
	I0819 11:48:19.150083   19417 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 11:48:19.186177   19417 logs.go:123] Gathering logs for coredns [b4aa030d5e41] ...
	I0819 11:48:19.186188   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4aa030d5e41"
	I0819 11:48:19.206486   19417 logs.go:123] Gathering logs for kube-scheduler [1b1a6c62fc93] ...
	I0819 11:48:19.206500   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b1a6c62fc93"
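Taken together, the section is one retry loop: probe /healthz, and on every timeout re-enumerate the control-plane containers and re-collect their logs before probing again, until an outer deadline expires. A compact sketch of that structure under stated assumptions (probeHealthz and gatherDiagnostics are hypothetical helper names standing in for the steps above; the sleep and deadline are guesses read off the log cadence, not minikube's actual constants):

package main

import (
	"fmt"
	"time"
)

// waitForApiserver polls the healthz endpoint and, after each failed probe,
// runs a full diagnostics sweep like the "Gathering logs for ..." blocks
// above, until the outer deadline passes.
func waitForApiserver(url string, deadline time.Duration) error {
	stop := time.Now().Add(deadline)
	for time.Now().Before(stop) {
		if err := probeHealthz(url); err == nil {
			return nil
		}
		gatherDiagnostics() // docker ps, docker logs, journalctl, dmesg, ...
		time.Sleep(2500 * time.Millisecond) // assumption: approximates log cadence
	}
	return fmt.Errorf("apiserver never became healthy within %s", deadline)
}

// Stubs so the sketch compiles on its own.
func probeHealthz(url string) error { return fmt.Errorf("timeout") }
func gatherDiagnostics()            {}

func main() {
	fmt.Println(waitForApiserver("https://10.0.2.15:8443/healthz", 30*time.Second))
}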
	I0819 11:48:21.720465   19417 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:48:26.722729   19417 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:48:26.722835   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 11:48:26.735158   19417 logs.go:276] 2 containers: [5e36e77adae3 c856d6e29342]
	I0819 11:48:26.735234   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 11:48:26.746761   19417 logs.go:276] 2 containers: [22d8a4cc2e5d c6e42f0936b0]
	I0819 11:48:26.746835   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 11:48:26.757910   19417 logs.go:276] 1 containers: [b4aa030d5e41]
	I0819 11:48:26.757998   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 11:48:26.769843   19417 logs.go:276] 2 containers: [1b1a6c62fc93 098a5dcc915e]
	I0819 11:48:26.769912   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 11:48:26.780734   19417 logs.go:276] 1 containers: [f1268c4fc5da]
	I0819 11:48:26.780801   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 11:48:26.793954   19417 logs.go:276] 2 containers: [2b6fc57c9dd4 2f08f9ae48fe]
	I0819 11:48:26.794024   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 11:48:26.804184   19417 logs.go:276] 0 containers: []
	W0819 11:48:26.804198   19417 logs.go:278] No container was found matching "kindnet"
	I0819 11:48:26.804254   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 11:48:26.814772   19417 logs.go:276] 2 containers: [a5053756fa3b c34ae12a902c]
	I0819 11:48:26.814792   19417 logs.go:123] Gathering logs for kubelet ...
	I0819 11:48:26.814799   19417 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 11:48:26.854433   19417 logs.go:123] Gathering logs for kube-apiserver [5e36e77adae3] ...
	I0819 11:48:26.854449   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e36e77adae3"
	I0819 11:48:26.869197   19417 logs.go:123] Gathering logs for kube-apiserver [c856d6e29342] ...
	I0819 11:48:26.869210   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c856d6e29342"
	I0819 11:48:26.882487   19417 logs.go:123] Gathering logs for kube-scheduler [1b1a6c62fc93] ...
	I0819 11:48:26.882499   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b1a6c62fc93"
	I0819 11:48:26.894427   19417 logs.go:123] Gathering logs for storage-provisioner [a5053756fa3b] ...
	I0819 11:48:26.894438   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5053756fa3b"
	I0819 11:48:26.905909   19417 logs.go:123] Gathering logs for dmesg ...
	I0819 11:48:26.905922   19417 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 11:48:26.910431   19417 logs.go:123] Gathering logs for etcd [c6e42f0936b0] ...
	I0819 11:48:26.910440   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c6e42f0936b0"
	I0819 11:48:26.930590   19417 logs.go:123] Gathering logs for coredns [b4aa030d5e41] ...
	I0819 11:48:26.930600   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4aa030d5e41"
	I0819 11:48:26.941984   19417 logs.go:123] Gathering logs for kube-scheduler [098a5dcc915e] ...
	I0819 11:48:26.941995   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 098a5dcc915e"
	I0819 11:48:26.958105   19417 logs.go:123] Gathering logs for kube-controller-manager [2f08f9ae48fe] ...
	I0819 11:48:26.958118   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f08f9ae48fe"
	I0819 11:48:26.969781   19417 logs.go:123] Gathering logs for Docker ...
	I0819 11:48:26.969793   19417 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 11:48:26.994457   19417 logs.go:123] Gathering logs for describe nodes ...
	I0819 11:48:26.994468   19417 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 11:48:27.029431   19417 logs.go:123] Gathering logs for etcd [22d8a4cc2e5d] ...
	I0819 11:48:27.029442   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 22d8a4cc2e5d"
	I0819 11:48:27.043689   19417 logs.go:123] Gathering logs for kube-controller-manager [2b6fc57c9dd4] ...
	I0819 11:48:27.043702   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2b6fc57c9dd4"
	I0819 11:48:27.061604   19417 logs.go:123] Gathering logs for container status ...
	I0819 11:48:27.061615   19417 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 11:48:27.075152   19417 logs.go:123] Gathering logs for kube-proxy [f1268c4fc5da] ...
	I0819 11:48:27.075164   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f1268c4fc5da"
	I0819 11:48:27.091084   19417 logs.go:123] Gathering logs for storage-provisioner [c34ae12a902c] ...
	I0819 11:48:27.091094   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c34ae12a902c"
	I0819 11:48:29.604414   19417 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:48:34.606771   19417 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:48:34.607142   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 11:48:34.643144   19417 logs.go:276] 2 containers: [5e36e77adae3 c856d6e29342]
	I0819 11:48:34.643250   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 11:48:34.663621   19417 logs.go:276] 2 containers: [22d8a4cc2e5d c6e42f0936b0]
	I0819 11:48:34.663723   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 11:48:34.678566   19417 logs.go:276] 1 containers: [b4aa030d5e41]
	I0819 11:48:34.678644   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 11:48:34.692708   19417 logs.go:276] 2 containers: [1b1a6c62fc93 098a5dcc915e]
	I0819 11:48:34.692783   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 11:48:34.703578   19417 logs.go:276] 1 containers: [f1268c4fc5da]
	I0819 11:48:34.703648   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 11:48:34.713885   19417 logs.go:276] 2 containers: [2b6fc57c9dd4 2f08f9ae48fe]
	I0819 11:48:34.713948   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 11:48:34.729046   19417 logs.go:276] 0 containers: []
	W0819 11:48:34.729058   19417 logs.go:278] No container was found matching "kindnet"
	I0819 11:48:34.729115   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 11:48:34.740029   19417 logs.go:276] 2 containers: [a5053756fa3b c34ae12a902c]
	I0819 11:48:34.740049   19417 logs.go:123] Gathering logs for storage-provisioner [c34ae12a902c] ...
	I0819 11:48:34.740055   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c34ae12a902c"
	I0819 11:48:34.751016   19417 logs.go:123] Gathering logs for kubelet ...
	I0819 11:48:34.751027   19417 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 11:48:34.786978   19417 logs.go:123] Gathering logs for kube-apiserver [5e36e77adae3] ...
	I0819 11:48:34.786991   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e36e77adae3"
	I0819 11:48:34.804634   19417 logs.go:123] Gathering logs for kube-scheduler [1b1a6c62fc93] ...
	I0819 11:48:34.804648   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b1a6c62fc93"
	I0819 11:48:34.816394   19417 logs.go:123] Gathering logs for Docker ...
	I0819 11:48:34.816408   19417 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 11:48:34.839161   19417 logs.go:123] Gathering logs for etcd [22d8a4cc2e5d] ...
	I0819 11:48:34.839171   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 22d8a4cc2e5d"
	I0819 11:48:34.857582   19417 logs.go:123] Gathering logs for etcd [c6e42f0936b0] ...
	I0819 11:48:34.857595   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c6e42f0936b0"
	I0819 11:48:34.872410   19417 logs.go:123] Gathering logs for kube-scheduler [098a5dcc915e] ...
	I0819 11:48:34.872426   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 098a5dcc915e"
	I0819 11:48:34.888208   19417 logs.go:123] Gathering logs for kube-controller-manager [2b6fc57c9dd4] ...
	I0819 11:48:34.888221   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2b6fc57c9dd4"
	I0819 11:48:34.905842   19417 logs.go:123] Gathering logs for storage-provisioner [a5053756fa3b] ...
	I0819 11:48:34.905853   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5053756fa3b"
	I0819 11:48:34.917036   19417 logs.go:123] Gathering logs for container status ...
	I0819 11:48:34.917047   19417 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 11:48:34.928848   19417 logs.go:123] Gathering logs for dmesg ...
	I0819 11:48:34.928860   19417 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 11:48:34.933693   19417 logs.go:123] Gathering logs for coredns [b4aa030d5e41] ...
	I0819 11:48:34.933699   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4aa030d5e41"
	I0819 11:48:34.945039   19417 logs.go:123] Gathering logs for kube-proxy [f1268c4fc5da] ...
	I0819 11:48:34.945050   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f1268c4fc5da"
	I0819 11:48:34.957191   19417 logs.go:123] Gathering logs for kube-controller-manager [2f08f9ae48fe] ...
	I0819 11:48:34.957202   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f08f9ae48fe"
	I0819 11:48:34.968938   19417 logs.go:123] Gathering logs for describe nodes ...
	I0819 11:48:34.968950   19417 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 11:48:35.003758   19417 logs.go:123] Gathering logs for kube-apiserver [c856d6e29342] ...
	I0819 11:48:35.003768   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c856d6e29342"
	I0819 11:48:37.522244   19417 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:48:42.524663   19417 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:48:42.524804   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 11:48:42.540808   19417 logs.go:276] 2 containers: [5e36e77adae3 c856d6e29342]
	I0819 11:48:42.540886   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 11:48:42.551743   19417 logs.go:276] 2 containers: [22d8a4cc2e5d c6e42f0936b0]
	I0819 11:48:42.551815   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 11:48:42.566584   19417 logs.go:276] 1 containers: [b4aa030d5e41]
	I0819 11:48:42.566650   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 11:48:42.577203   19417 logs.go:276] 2 containers: [1b1a6c62fc93 098a5dcc915e]
	I0819 11:48:42.577262   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 11:48:42.587204   19417 logs.go:276] 1 containers: [f1268c4fc5da]
	I0819 11:48:42.587264   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 11:48:42.597823   19417 logs.go:276] 2 containers: [2b6fc57c9dd4 2f08f9ae48fe]
	I0819 11:48:42.597884   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 11:48:42.608306   19417 logs.go:276] 0 containers: []
	W0819 11:48:42.608317   19417 logs.go:278] No container was found matching "kindnet"
	I0819 11:48:42.608371   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 11:48:42.619616   19417 logs.go:276] 2 containers: [a5053756fa3b c34ae12a902c]
	I0819 11:48:42.619634   19417 logs.go:123] Gathering logs for coredns [b4aa030d5e41] ...
	I0819 11:48:42.619639   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4aa030d5e41"
	I0819 11:48:42.630796   19417 logs.go:123] Gathering logs for kube-apiserver [c856d6e29342] ...
	I0819 11:48:42.630807   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c856d6e29342"
	I0819 11:48:42.644080   19417 logs.go:123] Gathering logs for kube-controller-manager [2b6fc57c9dd4] ...
	I0819 11:48:42.644093   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2b6fc57c9dd4"
	I0819 11:48:42.662285   19417 logs.go:123] Gathering logs for kube-controller-manager [2f08f9ae48fe] ...
	I0819 11:48:42.662297   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f08f9ae48fe"
	I0819 11:48:42.674249   19417 logs.go:123] Gathering logs for Docker ...
	I0819 11:48:42.674261   19417 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 11:48:42.698002   19417 logs.go:123] Gathering logs for container status ...
	I0819 11:48:42.698012   19417 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 11:48:42.709800   19417 logs.go:123] Gathering logs for kubelet ...
	I0819 11:48:42.709813   19417 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 11:48:42.747699   19417 logs.go:123] Gathering logs for dmesg ...
	I0819 11:48:42.747710   19417 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 11:48:42.753065   19417 logs.go:123] Gathering logs for describe nodes ...
	I0819 11:48:42.753080   19417 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 11:48:42.789892   19417 logs.go:123] Gathering logs for etcd [22d8a4cc2e5d] ...
	I0819 11:48:42.789903   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 22d8a4cc2e5d"
	I0819 11:48:42.805221   19417 logs.go:123] Gathering logs for etcd [c6e42f0936b0] ...
	I0819 11:48:42.805244   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c6e42f0936b0"
	I0819 11:48:42.821619   19417 logs.go:123] Gathering logs for kube-scheduler [098a5dcc915e] ...
	I0819 11:48:42.821640   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 098a5dcc915e"
	I0819 11:48:42.838844   19417 logs.go:123] Gathering logs for storage-provisioner [a5053756fa3b] ...
	I0819 11:48:42.838856   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5053756fa3b"
	I0819 11:48:42.850458   19417 logs.go:123] Gathering logs for storage-provisioner [c34ae12a902c] ...
	I0819 11:48:42.850468   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c34ae12a902c"
	I0819 11:48:42.861844   19417 logs.go:123] Gathering logs for kube-apiserver [5e36e77adae3] ...
	I0819 11:48:42.861856   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e36e77adae3"
	I0819 11:48:42.876120   19417 logs.go:123] Gathering logs for kube-scheduler [1b1a6c62fc93] ...
	I0819 11:48:42.876131   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b1a6c62fc93"
	I0819 11:48:42.887847   19417 logs.go:123] Gathering logs for kube-proxy [f1268c4fc5da] ...
	I0819 11:48:42.887858   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f1268c4fc5da"
	I0819 11:48:45.401530   19417 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:48:50.403931   19417 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:48:50.404187   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 11:48:50.433790   19417 logs.go:276] 2 containers: [5e36e77adae3 c856d6e29342]
	I0819 11:48:50.433925   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 11:48:50.452346   19417 logs.go:276] 2 containers: [22d8a4cc2e5d c6e42f0936b0]
	I0819 11:48:50.452424   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 11:48:50.468093   19417 logs.go:276] 1 containers: [b4aa030d5e41]
	I0819 11:48:50.468165   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 11:48:50.479468   19417 logs.go:276] 2 containers: [1b1a6c62fc93 098a5dcc915e]
	I0819 11:48:50.479534   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 11:48:50.493216   19417 logs.go:276] 1 containers: [f1268c4fc5da]
	I0819 11:48:50.493280   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 11:48:50.507043   19417 logs.go:276] 2 containers: [2b6fc57c9dd4 2f08f9ae48fe]
	I0819 11:48:50.507113   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 11:48:50.517860   19417 logs.go:276] 0 containers: []
	W0819 11:48:50.517871   19417 logs.go:278] No container was found matching "kindnet"
	I0819 11:48:50.517921   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 11:48:50.528310   19417 logs.go:276] 2 containers: [a5053756fa3b c34ae12a902c]
	I0819 11:48:50.528328   19417 logs.go:123] Gathering logs for dmesg ...
	I0819 11:48:50.528333   19417 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 11:48:50.532736   19417 logs.go:123] Gathering logs for kube-apiserver [c856d6e29342] ...
	I0819 11:48:50.532743   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c856d6e29342"
	I0819 11:48:50.545191   19417 logs.go:123] Gathering logs for etcd [22d8a4cc2e5d] ...
	I0819 11:48:50.545203   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 22d8a4cc2e5d"
	I0819 11:48:50.559222   19417 logs.go:123] Gathering logs for coredns [b4aa030d5e41] ...
	I0819 11:48:50.559235   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4aa030d5e41"
	I0819 11:48:50.570176   19417 logs.go:123] Gathering logs for kube-controller-manager [2b6fc57c9dd4] ...
	I0819 11:48:50.570186   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2b6fc57c9dd4"
	I0819 11:48:50.592013   19417 logs.go:123] Gathering logs for Docker ...
	I0819 11:48:50.592021   19417 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 11:48:50.614055   19417 logs.go:123] Gathering logs for kubelet ...
	I0819 11:48:50.614064   19417 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 11:48:50.649464   19417 logs.go:123] Gathering logs for describe nodes ...
	I0819 11:48:50.649474   19417 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 11:48:50.684683   19417 logs.go:123] Gathering logs for kube-apiserver [5e36e77adae3] ...
	I0819 11:48:50.684695   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e36e77adae3"
	I0819 11:48:50.699215   19417 logs.go:123] Gathering logs for container status ...
	I0819 11:48:50.699227   19417 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 11:48:50.710902   19417 logs.go:123] Gathering logs for kube-scheduler [1b1a6c62fc93] ...
	I0819 11:48:50.710914   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b1a6c62fc93"
	I0819 11:48:50.722637   19417 logs.go:123] Gathering logs for kube-scheduler [098a5dcc915e] ...
	I0819 11:48:50.722651   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 098a5dcc915e"
	I0819 11:48:50.738224   19417 logs.go:123] Gathering logs for kube-proxy [f1268c4fc5da] ...
	I0819 11:48:50.738237   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f1268c4fc5da"
	I0819 11:48:50.750221   19417 logs.go:123] Gathering logs for storage-provisioner [c34ae12a902c] ...
	I0819 11:48:50.750234   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c34ae12a902c"
	I0819 11:48:50.761317   19417 logs.go:123] Gathering logs for etcd [c6e42f0936b0] ...
	I0819 11:48:50.761330   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c6e42f0936b0"
	I0819 11:48:50.776307   19417 logs.go:123] Gathering logs for kube-controller-manager [2f08f9ae48fe] ...
	I0819 11:48:50.776318   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f08f9ae48fe"
	I0819 11:48:50.788393   19417 logs.go:123] Gathering logs for storage-provisioner [a5053756fa3b] ...
	I0819 11:48:50.788407   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5053756fa3b"
	I0819 11:48:53.302075   19417 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:48:58.304420   19417 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:48:58.304459   19417 kubeadm.go:597] duration metric: took 4m3.937976042s to restartPrimaryControlPlane
	W0819 11:48:58.304490   19417 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0819 11:48:58.304505   19417 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
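
Each "Checking apiserver healthz ... / stopped: ... Client.Timeout exceeded" pair above is one iteration of minikube's wait loop: it GETs https://10.0.2.15:8443/healthz with roughly a five-second client timeout, gathers component logs between attempts, and after about four minutes (the 4m3.937976042s just logged) gives up and resets the cluster. The following is a minimal sketch of that polling pattern, assuming the five-second timeout and the overall deadline inferred from the timestamps; it is not minikube's actual api_server.go code:

    // healthzwait.go: sketch of the healthz polling visible in the log.
    // Timeout, pause, and deadline values are assumptions inferred from timestamps.
    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    func waitForHealthz(url string, deadline time.Duration) error {
        client := &http.Client{
            Timeout: 5 * time.Second, // matches the ~5s gap per attempt in the log
            Transport: &http.Transport{
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // sketch only
            },
        }
        stop := time.Now().Add(deadline)
        for time.Now().Before(stop) {
            resp, err := client.Get(url)
            if err == nil && resp.StatusCode == http.StatusOK {
                resp.Body.Close()
                return nil // apiserver is healthy
            }
            if err == nil {
                resp.Body.Close()
            }
            // minikube gathers pod/unit logs here between attempts;
            // approximated as a pause in this sketch.
            time.Sleep(2500 * time.Millisecond)
        }
        return fmt.Errorf("apiserver never became healthy within %s", deadline)
    }

    func main() {
        fmt.Println(waitForHealthz("https://10.0.2.15:8443/healthz", 4*time.Minute))
    }
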
	I0819 11:48:59.291796   19417 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 11:48:59.296890   19417 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0819 11:48:59.299876   19417 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0819 11:48:59.302886   19417 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0819 11:48:59.302893   19417 kubeadm.go:157] found existing configuration files:
	
	I0819 11:48:59.302919   19417 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:53137 /etc/kubernetes/admin.conf
	I0819 11:48:59.305429   19417 kubeadm.go:163] "https://control-plane.minikube.internal:53137" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:53137 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0819 11:48:59.305451   19417 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0819 11:48:59.308806   19417 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:53137 /etc/kubernetes/kubelet.conf
	I0819 11:48:59.311898   19417 kubeadm.go:163] "https://control-plane.minikube.internal:53137" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:53137 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0819 11:48:59.311920   19417 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0819 11:48:59.314519   19417 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:53137 /etc/kubernetes/controller-manager.conf
	I0819 11:48:59.317352   19417 kubeadm.go:163] "https://control-plane.minikube.internal:53137" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:53137 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0819 11:48:59.317373   19417 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0819 11:48:59.320674   19417 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:53137 /etc/kubernetes/scheduler.conf
	I0819 11:48:59.323854   19417 kubeadm.go:163] "https://control-plane.minikube.internal:53137" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:53137 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0819 11:48:59.323879   19417 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0819 11:48:59.326371   19417 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0819 11:48:59.392602   19417 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0819 11:49:06.020451   19417 kubeadm.go:310] [init] Using Kubernetes version: v1.24.1
	I0819 11:49:06.020522   19417 kubeadm.go:310] [preflight] Running pre-flight checks
	I0819 11:49:06.020558   19417 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0819 11:49:06.020599   19417 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0819 11:49:06.020686   19417 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0819 11:49:06.020772   19417 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0819 11:49:06.025012   19417 out.go:235]   - Generating certificates and keys ...
	I0819 11:49:06.025048   19417 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0819 11:49:06.025081   19417 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0819 11:49:06.025119   19417 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0819 11:49:06.025157   19417 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0819 11:49:06.025196   19417 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0819 11:49:06.025226   19417 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0819 11:49:06.025261   19417 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0819 11:49:06.025307   19417 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0819 11:49:06.025346   19417 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0819 11:49:06.025385   19417 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0819 11:49:06.025409   19417 kubeadm.go:310] [certs] Using the existing "sa" key
	I0819 11:49:06.025438   19417 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0819 11:49:06.025460   19417 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0819 11:49:06.025490   19417 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0819 11:49:06.025526   19417 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0819 11:49:06.025557   19417 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0819 11:49:06.025618   19417 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0819 11:49:06.025671   19417 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0819 11:49:06.025694   19417 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0819 11:49:06.025726   19417 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0819 11:49:06.031871   19417 out.go:235]   - Booting up control plane ...
	I0819 11:49:06.031914   19417 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0819 11:49:06.031962   19417 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0819 11:49:06.031999   19417 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0819 11:49:06.032046   19417 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0819 11:49:06.032134   19417 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0819 11:49:06.032175   19417 kubeadm.go:310] [apiclient] All control plane components are healthy after 4.502520 seconds
	I0819 11:49:06.032232   19417 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0819 11:49:06.032304   19417 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0819 11:49:06.032341   19417 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0819 11:49:06.032440   19417 kubeadm.go:310] [mark-control-plane] Marking the node running-upgrade-409000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0819 11:49:06.032476   19417 kubeadm.go:310] [bootstrap-token] Using token: 25421g.u6qtiwyx3kaxk0p9
	I0819 11:49:06.034851   19417 out.go:235]   - Configuring RBAC rules ...
	I0819 11:49:06.034897   19417 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0819 11:49:06.034939   19417 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0819 11:49:06.035020   19417 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0819 11:49:06.035099   19417 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0819 11:49:06.035166   19417 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0819 11:49:06.035218   19417 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0819 11:49:06.035276   19417 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0819 11:49:06.035302   19417 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0819 11:49:06.035324   19417 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0819 11:49:06.035328   19417 kubeadm.go:310] 
	I0819 11:49:06.035360   19417 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0819 11:49:06.035366   19417 kubeadm.go:310] 
	I0819 11:49:06.035404   19417 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0819 11:49:06.035408   19417 kubeadm.go:310] 
	I0819 11:49:06.035422   19417 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0819 11:49:06.035456   19417 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0819 11:49:06.035482   19417 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0819 11:49:06.035486   19417 kubeadm.go:310] 
	I0819 11:49:06.035512   19417 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0819 11:49:06.035516   19417 kubeadm.go:310] 
	I0819 11:49:06.035544   19417 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0819 11:49:06.035547   19417 kubeadm.go:310] 
	I0819 11:49:06.035575   19417 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0819 11:49:06.035609   19417 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0819 11:49:06.035643   19417 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0819 11:49:06.035645   19417 kubeadm.go:310] 
	I0819 11:49:06.035684   19417 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0819 11:49:06.035719   19417 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0819 11:49:06.035721   19417 kubeadm.go:310] 
	I0819 11:49:06.035759   19417 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 25421g.u6qtiwyx3kaxk0p9 \
	I0819 11:49:06.035809   19417 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:de6b00bb5195e85ea9c628edaf5f990495686095194c39efa1f1f29b580598ae \
	I0819 11:49:06.035819   19417 kubeadm.go:310] 	--control-plane 
	I0819 11:49:06.035821   19417 kubeadm.go:310] 
	I0819 11:49:06.035861   19417 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0819 11:49:06.035863   19417 kubeadm.go:310] 
	I0819 11:49:06.035904   19417 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 25421g.u6qtiwyx3kaxk0p9 \
	I0819 11:49:06.035955   19417 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:de6b00bb5195e85ea9c628edaf5f990495686095194c39efa1f1f29b580598ae 
	I0819 11:49:06.035960   19417 cni.go:84] Creating CNI manager for ""
	I0819 11:49:06.035966   19417 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0819 11:49:06.048864   19417 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0819 11:49:06.052982   19417 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0819 11:49:06.056190   19417 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
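
The 496-byte payload written to /etc/cni/net.d/1-k8s.conflist is not reproduced in the log. For orientation only, here is a sketch of the shape a bridge conflist for this kind of setup typically takes; every field value below is an assumption for illustration, not the bytes minikube actually wrote:

    // cniconf.go: sketch of writing a bridge conflist like the one above.
    // The JSON body is a generic bridge+portmap example, not minikube's exact file.
    package main

    import "os"

    const conflist = `{
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isGateway": true,
          "ipMasq": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }`

    func main() {
        // the harness copies the config over SSH; locally this is a plain write
        // (requires root, since /etc/cni/net.d is system-owned)
        if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0o644); err != nil {
            panic(err)
        }
    }
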
	I0819 11:49:06.060916   19417 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0819 11:49:06.060958   19417 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 11:49:06.060980   19417 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes running-upgrade-409000 minikube.k8s.io/updated_at=2024_08_19T11_49_06_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=61bea45f08282fbfcbc51a63f2fbf5fa5e7e26a8 minikube.k8s.io/name=running-upgrade-409000 minikube.k8s.io/primary=true
	I0819 11:49:06.103867   19417 kubeadm.go:1113] duration metric: took 42.938333ms to wait for elevateKubeSystemPrivileges
	I0819 11:49:06.103884   19417 ops.go:34] apiserver oom_adj: -16
	I0819 11:49:06.103970   19417 kubeadm.go:394] duration metric: took 4m11.751582167s to StartCluster
	I0819 11:49:06.103983   19417 settings.go:142] acquiring lock: {Name:mkd10d56bae48d75d53289d9920be83758fb5ae6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 11:49:06.104152   19417 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19423-17178/kubeconfig
	I0819 11:49:06.104563   19417 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19423-17178/kubeconfig: {Name:mkcd8a4d29cc5f324e197a69fe511a87d17c54d6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 11:49:06.104777   19417 start.go:235] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0819 11:49:06.104830   19417 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0819 11:49:06.104890   19417 addons.go:69] Setting storage-provisioner=true in profile "running-upgrade-409000"
	I0819 11:49:06.104900   19417 config.go:182] Loaded profile config "running-upgrade-409000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0819 11:49:06.104903   19417 addons.go:234] Setting addon storage-provisioner=true in "running-upgrade-409000"
	W0819 11:49:06.104907   19417 addons.go:243] addon storage-provisioner should already be in state true
	I0819 11:49:06.104918   19417 host.go:66] Checking if "running-upgrade-409000" exists ...
	I0819 11:49:06.104914   19417 addons.go:69] Setting default-storageclass=true in profile "running-upgrade-409000"
	I0819 11:49:06.104952   19417 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "running-upgrade-409000"
	I0819 11:49:06.105907   19417 kapi.go:59] client config for running-upgrade-409000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19423-17178/.minikube/profiles/running-upgrade-409000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19423-17178/.minikube/profiles/running-upgrade-409000/client.key", CAFile:"/Users/jenkins/minikube-integration/19423-17178/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x101cd1990), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0819 11:49:06.106038   19417 addons.go:234] Setting addon default-storageclass=true in "running-upgrade-409000"
	W0819 11:49:06.106058   19417 addons.go:243] addon default-storageclass should already be in state true
	I0819 11:49:06.106066   19417 host.go:66] Checking if "running-upgrade-409000" exists ...
	I0819 11:49:06.107909   19417 out.go:177] * Verifying Kubernetes components...
	I0819 11:49:06.108290   19417 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0819 11:49:06.112083   19417 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0819 11:49:06.112089   19417 sshutil.go:53] new ssh client: &{IP:localhost Port:53105 SSHKeyPath:/Users/jenkins/minikube-integration/19423-17178/.minikube/machines/running-upgrade-409000/id_rsa Username:docker}
	I0819 11:49:06.114923   19417 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0819 11:49:06.118914   19417 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 11:49:06.122894   19417 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0819 11:49:06.122901   19417 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0819 11:49:06.122907   19417 sshutil.go:53] new ssh client: &{IP:localhost Port:53105 SSHKeyPath:/Users/jenkins/minikube-integration/19423-17178/.minikube/machines/running-upgrade-409000/id_rsa Username:docker}
	I0819 11:49:06.190159   19417 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0819 11:49:06.195741   19417 api_server.go:52] waiting for apiserver process to appear ...
	I0819 11:49:06.195786   19417 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 11:49:06.199640   19417 api_server.go:72] duration metric: took 94.849917ms to wait for apiserver process to appear ...
	I0819 11:49:06.199648   19417 api_server.go:88] waiting for apiserver healthz status ...
	I0819 11:49:06.199654   19417 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:49:06.226096   19417 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0819 11:49:06.256007   19417 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0819 11:49:06.551595   19417 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0819 11:49:06.551607   19417 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0819 11:49:11.201411   19417 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:49:11.201463   19417 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:49:16.201759   19417 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:49:16.201778   19417 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:49:21.202502   19417 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:49:21.202521   19417 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:49:26.145039   19417 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:49:26.145086   19417 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:49:31.145851   19417 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:49:31.145871   19417 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:49:36.146899   19417 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:49:36.146939   19417 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W0819 11:49:36.494582   19417 out.go:270] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I0819 11:49:36.498480   19417 out.go:177] * Enabled addons: storage-provisioner
	I0819 11:49:36.505483   19417 addons.go:510] duration metric: took 30.458901041s for enable addons: enabled=[storage-provisioner]
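
The addon outcome above is one-sided: the storage-provisioner manifest is applied via kubectl over SSH and is reported as enabled, while the default-storageclass callback needs a live API connection from the host and fails with the dial timeout shown. A minimal client-go sketch of the failing call follows, assuming a kubeconfig-based client for illustration rather than minikube's internal client construction:

    // listsc.go: sketch of the StorageClasses list that times out in the log.
    // Uses client-go; the kubeconfig path here is an assumption for illustration.
    package main

    import (
        "context"
        "fmt"
        "time"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
        if err != nil {
            panic(err)
        }
        clientset, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
        defer cancel()
        // This is the call that fails above with
        // "dial tcp 10.0.2.15:8443: i/o timeout" while the apiserver is unreachable.
        scs, err := clientset.StorageV1().StorageClasses().List(ctx, metav1.ListOptions{})
        if err != nil {
            fmt.Println("list failed:", err)
            return
        }
        fmt.Println("storage classes:", len(scs.Items))
    }
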
	I0819 11:49:41.148014   19417 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:49:41.148069   19417 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:49:46.149638   19417 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:49:46.149734   19417 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:49:51.151681   19417 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:49:51.151734   19417 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:49:56.153949   19417 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:49:56.153998   19417 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:50:01.156242   19417 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:50:01.156285   19417 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:50:06.158431   19417 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:50:06.158532   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 11:50:06.171066   19417 logs.go:276] 1 containers: [45f3dc60aedc]
	I0819 11:50:06.171141   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 11:50:06.181897   19417 logs.go:276] 1 containers: [4122edd3dc51]
	I0819 11:50:06.181957   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 11:50:06.192431   19417 logs.go:276] 2 containers: [db6dffb8b3ea 3d2d44fde489]
	I0819 11:50:06.192523   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 11:50:06.206840   19417 logs.go:276] 1 containers: [2cfe471024b0]
	I0819 11:50:06.206917   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 11:50:06.217931   19417 logs.go:276] 1 containers: [26ea981cbd0c]
	I0819 11:50:06.218001   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 11:50:06.231135   19417 logs.go:276] 1 containers: [b3325243ca65]
	I0819 11:50:06.231206   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 11:50:06.253362   19417 logs.go:276] 0 containers: []
	W0819 11:50:06.253375   19417 logs.go:278] No container was found matching "kindnet"
	I0819 11:50:06.253434   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 11:50:06.264298   19417 logs.go:276] 1 containers: [74c6a87ae3c8]
	I0819 11:50:06.264315   19417 logs.go:123] Gathering logs for kubelet ...
	I0819 11:50:06.264320   19417 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 11:50:06.302210   19417 logs.go:123] Gathering logs for dmesg ...
	I0819 11:50:06.302222   19417 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 11:50:06.307233   19417 logs.go:123] Gathering logs for describe nodes ...
	I0819 11:50:06.307239   19417 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 11:50:06.342943   19417 logs.go:123] Gathering logs for kube-apiserver [45f3dc60aedc] ...
	I0819 11:50:06.342956   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45f3dc60aedc"
	I0819 11:50:06.358007   19417 logs.go:123] Gathering logs for coredns [db6dffb8b3ea] ...
	I0819 11:50:06.358022   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db6dffb8b3ea"
	I0819 11:50:06.371600   19417 logs.go:123] Gathering logs for kube-scheduler [2cfe471024b0] ...
	I0819 11:50:06.371616   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2cfe471024b0"
	I0819 11:50:06.387755   19417 logs.go:123] Gathering logs for kube-proxy [26ea981cbd0c] ...
	I0819 11:50:06.387769   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 26ea981cbd0c"
	I0819 11:50:06.402086   19417 logs.go:123] Gathering logs for storage-provisioner [74c6a87ae3c8] ...
	I0819 11:50:06.402098   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74c6a87ae3c8"
	I0819 11:50:06.413256   19417 logs.go:123] Gathering logs for Docker ...
	I0819 11:50:06.413271   19417 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 11:50:06.436582   19417 logs.go:123] Gathering logs for container status ...
	I0819 11:50:06.436589   19417 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 11:50:06.449255   19417 logs.go:123] Gathering logs for etcd [4122edd3dc51] ...
	I0819 11:50:06.449271   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4122edd3dc51"
	I0819 11:50:06.464301   19417 logs.go:123] Gathering logs for coredns [3d2d44fde489] ...
	I0819 11:50:06.464317   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3d2d44fde489"
	I0819 11:50:06.481343   19417 logs.go:123] Gathering logs for kube-controller-manager [b3325243ca65] ...
	I0819 11:50:06.481354   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3325243ca65"
	I0819 11:50:09.001149   19417 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:50:14.003455   19417 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:50:14.003694   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 11:50:14.030307   19417 logs.go:276] 1 containers: [45f3dc60aedc]
	I0819 11:50:14.030410   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 11:50:14.047199   19417 logs.go:276] 1 containers: [4122edd3dc51]
	I0819 11:50:14.047291   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 11:50:14.060521   19417 logs.go:276] 2 containers: [db6dffb8b3ea 3d2d44fde489]
	I0819 11:50:14.060599   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 11:50:14.071899   19417 logs.go:276] 1 containers: [2cfe471024b0]
	I0819 11:50:14.071971   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 11:50:14.082236   19417 logs.go:276] 1 containers: [26ea981cbd0c]
	I0819 11:50:14.082305   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 11:50:14.093742   19417 logs.go:276] 1 containers: [b3325243ca65]
	I0819 11:50:14.093805   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 11:50:14.104088   19417 logs.go:276] 0 containers: []
	W0819 11:50:14.104099   19417 logs.go:278] No container was found matching "kindnet"
	I0819 11:50:14.104150   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 11:50:14.115037   19417 logs.go:276] 1 containers: [74c6a87ae3c8]
	I0819 11:50:14.115053   19417 logs.go:123] Gathering logs for storage-provisioner [74c6a87ae3c8] ...
	I0819 11:50:14.115058   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74c6a87ae3c8"
	I0819 11:50:14.126894   19417 logs.go:123] Gathering logs for kubelet ...
	I0819 11:50:14.126908   19417 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 11:50:14.164461   19417 logs.go:123] Gathering logs for dmesg ...
	I0819 11:50:14.164472   19417 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 11:50:14.169120   19417 logs.go:123] Gathering logs for etcd [4122edd3dc51] ...
	I0819 11:50:14.169128   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4122edd3dc51"
	I0819 11:50:14.182555   19417 logs.go:123] Gathering logs for coredns [db6dffb8b3ea] ...
	I0819 11:50:14.182566   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db6dffb8b3ea"
	I0819 11:50:14.194387   19417 logs.go:123] Gathering logs for coredns [3d2d44fde489] ...
	I0819 11:50:14.194398   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3d2d44fde489"
	I0819 11:50:14.206976   19417 logs.go:123] Gathering logs for kube-scheduler [2cfe471024b0] ...
	I0819 11:50:14.206989   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2cfe471024b0"
	I0819 11:50:14.221816   19417 logs.go:123] Gathering logs for kube-proxy [26ea981cbd0c] ...
	I0819 11:50:14.221831   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 26ea981cbd0c"
	I0819 11:50:14.233741   19417 logs.go:123] Gathering logs for Docker ...
	I0819 11:50:14.233752   19417 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 11:50:14.259435   19417 logs.go:123] Gathering logs for container status ...
	I0819 11:50:14.259443   19417 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 11:50:14.270417   19417 logs.go:123] Gathering logs for describe nodes ...
	I0819 11:50:14.270430   19417 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 11:50:14.309734   19417 logs.go:123] Gathering logs for kube-apiserver [45f3dc60aedc] ...
	I0819 11:50:14.309752   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45f3dc60aedc"
	I0819 11:50:14.324384   19417 logs.go:123] Gathering logs for kube-controller-manager [b3325243ca65] ...
	I0819 11:50:14.324395   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3325243ca65"
	I0819 11:50:16.843671   19417 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:50:21.846068   19417 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:50:21.846202   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 11:50:21.858480   19417 logs.go:276] 1 containers: [45f3dc60aedc]
	I0819 11:50:21.858546   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 11:50:21.869796   19417 logs.go:276] 1 containers: [4122edd3dc51]
	I0819 11:50:21.869855   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 11:50:21.881513   19417 logs.go:276] 2 containers: [db6dffb8b3ea 3d2d44fde489]
	I0819 11:50:21.881588   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 11:50:21.892475   19417 logs.go:276] 1 containers: [2cfe471024b0]
	I0819 11:50:21.892541   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 11:50:21.903164   19417 logs.go:276] 1 containers: [26ea981cbd0c]
	I0819 11:50:21.903231   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 11:50:21.913917   19417 logs.go:276] 1 containers: [b3325243ca65]
	I0819 11:50:21.913977   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 11:50:21.924472   19417 logs.go:276] 0 containers: []
	W0819 11:50:21.924483   19417 logs.go:278] No container was found matching "kindnet"
	I0819 11:50:21.924533   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 11:50:21.940257   19417 logs.go:276] 1 containers: [74c6a87ae3c8]
	I0819 11:50:21.940273   19417 logs.go:123] Gathering logs for storage-provisioner [74c6a87ae3c8] ...
	I0819 11:50:21.940279   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74c6a87ae3c8"
	I0819 11:50:21.953756   19417 logs.go:123] Gathering logs for Docker ...
	I0819 11:50:21.953766   19417 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 11:50:21.978417   19417 logs.go:123] Gathering logs for kubelet ...
	I0819 11:50:21.978430   19417 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 11:50:22.015171   19417 logs.go:123] Gathering logs for kube-apiserver [45f3dc60aedc] ...
	I0819 11:50:22.015182   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45f3dc60aedc"
	I0819 11:50:22.030205   19417 logs.go:123] Gathering logs for etcd [4122edd3dc51] ...
	I0819 11:50:22.030215   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4122edd3dc51"
	I0819 11:50:22.045013   19417 logs.go:123] Gathering logs for coredns [db6dffb8b3ea] ...
	I0819 11:50:22.045026   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db6dffb8b3ea"
	I0819 11:50:22.058762   19417 logs.go:123] Gathering logs for kube-scheduler [2cfe471024b0] ...
	I0819 11:50:22.058774   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2cfe471024b0"
	I0819 11:50:22.075247   19417 logs.go:123] Gathering logs for container status ...
	I0819 11:50:22.075263   19417 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 11:50:22.087392   19417 logs.go:123] Gathering logs for dmesg ...
	I0819 11:50:22.087415   19417 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 11:50:22.091934   19417 logs.go:123] Gathering logs for describe nodes ...
	I0819 11:50:22.091941   19417 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 11:50:22.130125   19417 logs.go:123] Gathering logs for coredns [3d2d44fde489] ...
	I0819 11:50:22.130139   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3d2d44fde489"
	I0819 11:50:22.142363   19417 logs.go:123] Gathering logs for kube-proxy [26ea981cbd0c] ...
	I0819 11:50:22.142378   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 26ea981cbd0c"
	I0819 11:50:22.154650   19417 logs.go:123] Gathering logs for kube-controller-manager [b3325243ca65] ...
	I0819 11:50:22.154662   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3325243ca65"
	I0819 11:50:24.674802   19417 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:50:29.676396   19417 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:50:29.676588   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 11:50:29.703555   19417 logs.go:276] 1 containers: [45f3dc60aedc]
	I0819 11:50:29.703677   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 11:50:29.721228   19417 logs.go:276] 1 containers: [4122edd3dc51]
	I0819 11:50:29.721313   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 11:50:29.735228   19417 logs.go:276] 2 containers: [db6dffb8b3ea 3d2d44fde489]
	I0819 11:50:29.735297   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 11:50:29.746933   19417 logs.go:276] 1 containers: [2cfe471024b0]
	I0819 11:50:29.746999   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 11:50:29.758101   19417 logs.go:276] 1 containers: [26ea981cbd0c]
	I0819 11:50:29.758170   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 11:50:29.769471   19417 logs.go:276] 1 containers: [b3325243ca65]
	I0819 11:50:29.769536   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 11:50:29.780211   19417 logs.go:276] 0 containers: []
	W0819 11:50:29.780226   19417 logs.go:278] No container was found matching "kindnet"
	I0819 11:50:29.780286   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 11:50:29.791209   19417 logs.go:276] 1 containers: [74c6a87ae3c8]
	I0819 11:50:29.791224   19417 logs.go:123] Gathering logs for kubelet ...
	I0819 11:50:29.791229   19417 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 11:50:29.828222   19417 logs.go:123] Gathering logs for coredns [db6dffb8b3ea] ...
	I0819 11:50:29.828234   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db6dffb8b3ea"
	I0819 11:50:29.841309   19417 logs.go:123] Gathering logs for kube-controller-manager [b3325243ca65] ...
	I0819 11:50:29.841322   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3325243ca65"
	I0819 11:50:29.859775   19417 logs.go:123] Gathering logs for storage-provisioner [74c6a87ae3c8] ...
	I0819 11:50:29.859793   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74c6a87ae3c8"
	I0819 11:50:29.872323   19417 logs.go:123] Gathering logs for Docker ...
	I0819 11:50:29.872336   19417 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 11:50:29.895548   19417 logs.go:123] Gathering logs for kube-proxy [26ea981cbd0c] ...
	I0819 11:50:29.895557   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 26ea981cbd0c"
	I0819 11:50:29.911523   19417 logs.go:123] Gathering logs for container status ...
	I0819 11:50:29.911534   19417 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 11:50:29.923012   19417 logs.go:123] Gathering logs for dmesg ...
	I0819 11:50:29.923025   19417 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 11:50:29.927647   19417 logs.go:123] Gathering logs for describe nodes ...
	I0819 11:50:29.927657   19417 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 11:50:29.963465   19417 logs.go:123] Gathering logs for kube-apiserver [45f3dc60aedc] ...
	I0819 11:50:29.963479   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45f3dc60aedc"
	I0819 11:50:29.978222   19417 logs.go:123] Gathering logs for etcd [4122edd3dc51] ...
	I0819 11:50:29.978235   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4122edd3dc51"
	I0819 11:50:29.992592   19417 logs.go:123] Gathering logs for coredns [3d2d44fde489] ...
	I0819 11:50:29.992601   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3d2d44fde489"
	I0819 11:50:30.004361   19417 logs.go:123] Gathering logs for kube-scheduler [2cfe471024b0] ...
	I0819 11:50:30.004375   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2cfe471024b0"
	I0819 11:50:32.521495   19417 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:50:37.523806   19417 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:50:37.524002   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 11:50:37.544601   19417 logs.go:276] 1 containers: [45f3dc60aedc]
	I0819 11:50:37.544695   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 11:50:37.560479   19417 logs.go:276] 1 containers: [4122edd3dc51]
	I0819 11:50:37.560550   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 11:50:37.572091   19417 logs.go:276] 2 containers: [db6dffb8b3ea 3d2d44fde489]
	I0819 11:50:37.572165   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 11:50:37.583871   19417 logs.go:276] 1 containers: [2cfe471024b0]
	I0819 11:50:37.583950   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 11:50:37.594562   19417 logs.go:276] 1 containers: [26ea981cbd0c]
	I0819 11:50:37.594631   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 11:50:37.605276   19417 logs.go:276] 1 containers: [b3325243ca65]
	I0819 11:50:37.605339   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 11:50:37.616305   19417 logs.go:276] 0 containers: []
	W0819 11:50:37.616317   19417 logs.go:278] No container was found matching "kindnet"
	I0819 11:50:37.616378   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 11:50:37.627167   19417 logs.go:276] 1 containers: [74c6a87ae3c8]
	I0819 11:50:37.627183   19417 logs.go:123] Gathering logs for kube-scheduler [2cfe471024b0] ...
	I0819 11:50:37.627189   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2cfe471024b0"
	I0819 11:50:37.642248   19417 logs.go:123] Gathering logs for kube-proxy [26ea981cbd0c] ...
	I0819 11:50:37.642261   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 26ea981cbd0c"
	I0819 11:50:37.654445   19417 logs.go:123] Gathering logs for kubelet ...
	I0819 11:50:37.654457   19417 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 11:50:37.693676   19417 logs.go:123] Gathering logs for dmesg ...
	I0819 11:50:37.693685   19417 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 11:50:37.698676   19417 logs.go:123] Gathering logs for describe nodes ...
	I0819 11:50:37.698682   19417 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 11:50:37.742556   19417 logs.go:123] Gathering logs for kube-apiserver [45f3dc60aedc] ...
	I0819 11:50:37.742574   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45f3dc60aedc"
	I0819 11:50:37.758663   19417 logs.go:123] Gathering logs for etcd [4122edd3dc51] ...
	I0819 11:50:37.758676   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4122edd3dc51"
	I0819 11:50:37.773477   19417 logs.go:123] Gathering logs for coredns [db6dffb8b3ea] ...
	I0819 11:50:37.773488   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db6dffb8b3ea"
	I0819 11:50:37.785680   19417 logs.go:123] Gathering logs for container status ...
	I0819 11:50:37.785694   19417 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 11:50:37.797550   19417 logs.go:123] Gathering logs for coredns [3d2d44fde489] ...
	I0819 11:50:37.797564   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3d2d44fde489"
	I0819 11:50:37.814565   19417 logs.go:123] Gathering logs for kube-controller-manager [b3325243ca65] ...
	I0819 11:50:37.814575   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3325243ca65"
	I0819 11:50:37.835273   19417 logs.go:123] Gathering logs for storage-provisioner [74c6a87ae3c8] ...
	I0819 11:50:37.835285   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74c6a87ae3c8"
	I0819 11:50:37.848067   19417 logs.go:123] Gathering logs for Docker ...
	I0819 11:50:37.848080   19417 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 11:50:40.373462   19417 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:50:45.375807   19417 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:50:45.376023   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 11:50:45.401944   19417 logs.go:276] 1 containers: [45f3dc60aedc]
	I0819 11:50:45.402046   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 11:50:45.416526   19417 logs.go:276] 1 containers: [4122edd3dc51]
	I0819 11:50:45.416600   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 11:50:45.428377   19417 logs.go:276] 2 containers: [db6dffb8b3ea 3d2d44fde489]
	I0819 11:50:45.428440   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 11:50:45.439393   19417 logs.go:276] 1 containers: [2cfe471024b0]
	I0819 11:50:45.439461   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 11:50:45.451259   19417 logs.go:276] 1 containers: [26ea981cbd0c]
	I0819 11:50:45.451328   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 11:50:45.462436   19417 logs.go:276] 1 containers: [b3325243ca65]
	I0819 11:50:45.462499   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 11:50:45.473462   19417 logs.go:276] 0 containers: []
	W0819 11:50:45.473476   19417 logs.go:278] No container was found matching "kindnet"
	I0819 11:50:45.473531   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 11:50:45.484720   19417 logs.go:276] 1 containers: [74c6a87ae3c8]
	I0819 11:50:45.484738   19417 logs.go:123] Gathering logs for dmesg ...
	I0819 11:50:45.484743   19417 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 11:50:45.489477   19417 logs.go:123] Gathering logs for describe nodes ...
	I0819 11:50:45.489484   19417 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 11:50:45.525413   19417 logs.go:123] Gathering logs for kube-apiserver [45f3dc60aedc] ...
	I0819 11:50:45.525430   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45f3dc60aedc"
	I0819 11:50:45.540525   19417 logs.go:123] Gathering logs for coredns [db6dffb8b3ea] ...
	I0819 11:50:45.540536   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db6dffb8b3ea"
	I0819 11:50:45.552892   19417 logs.go:123] Gathering logs for kube-proxy [26ea981cbd0c] ...
	I0819 11:50:45.552903   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 26ea981cbd0c"
	I0819 11:50:45.565179   19417 logs.go:123] Gathering logs for storage-provisioner [74c6a87ae3c8] ...
	I0819 11:50:45.565189   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74c6a87ae3c8"
	I0819 11:50:45.579131   19417 logs.go:123] Gathering logs for Docker ...
	I0819 11:50:45.579144   19417 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 11:50:45.602809   19417 logs.go:123] Gathering logs for kubelet ...
	I0819 11:50:45.602821   19417 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 11:50:45.641553   19417 logs.go:123] Gathering logs for etcd [4122edd3dc51] ...
	I0819 11:50:45.641570   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4122edd3dc51"
	I0819 11:50:45.656427   19417 logs.go:123] Gathering logs for coredns [3d2d44fde489] ...
	I0819 11:50:45.656440   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3d2d44fde489"
	I0819 11:50:45.669671   19417 logs.go:123] Gathering logs for kube-scheduler [2cfe471024b0] ...
	I0819 11:50:45.669685   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2cfe471024b0"
	I0819 11:50:45.685547   19417 logs.go:123] Gathering logs for kube-controller-manager [b3325243ca65] ...
	I0819 11:50:45.685559   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3325243ca65"
	I0819 11:50:45.705393   19417 logs.go:123] Gathering logs for container status ...
	I0819 11:50:45.705404   19417 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 11:50:48.220391   19417 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:50:53.221701   19417 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:50:53.221828   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 11:50:53.235857   19417 logs.go:276] 1 containers: [45f3dc60aedc]
	I0819 11:50:53.235930   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 11:50:53.247988   19417 logs.go:276] 1 containers: [4122edd3dc51]
	I0819 11:50:53.248061   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 11:50:53.260534   19417 logs.go:276] 2 containers: [db6dffb8b3ea 3d2d44fde489]
	I0819 11:50:53.260604   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 11:50:53.271918   19417 logs.go:276] 1 containers: [2cfe471024b0]
	I0819 11:50:53.271987   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 11:50:53.284339   19417 logs.go:276] 1 containers: [26ea981cbd0c]
	I0819 11:50:53.284415   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 11:50:53.295823   19417 logs.go:276] 1 containers: [b3325243ca65]
	I0819 11:50:53.295900   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 11:50:53.306523   19417 logs.go:276] 0 containers: []
	W0819 11:50:53.306533   19417 logs.go:278] No container was found matching "kindnet"
	I0819 11:50:53.306584   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 11:50:53.318079   19417 logs.go:276] 1 containers: [74c6a87ae3c8]
	I0819 11:50:53.318094   19417 logs.go:123] Gathering logs for kubelet ...
	I0819 11:50:53.318099   19417 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 11:50:53.357053   19417 logs.go:123] Gathering logs for describe nodes ...
	I0819 11:50:53.357067   19417 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 11:50:53.392409   19417 logs.go:123] Gathering logs for kube-proxy [26ea981cbd0c] ...
	I0819 11:50:53.392420   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 26ea981cbd0c"
	I0819 11:50:53.404140   19417 logs.go:123] Gathering logs for kube-controller-manager [b3325243ca65] ...
	I0819 11:50:53.404151   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3325243ca65"
	I0819 11:50:53.421778   19417 logs.go:123] Gathering logs for storage-provisioner [74c6a87ae3c8] ...
	I0819 11:50:53.421788   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74c6a87ae3c8"
	I0819 11:50:53.433287   19417 logs.go:123] Gathering logs for container status ...
	I0819 11:50:53.433297   19417 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 11:50:53.455262   19417 logs.go:123] Gathering logs for Docker ...
	I0819 11:50:53.455273   19417 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 11:50:53.479838   19417 logs.go:123] Gathering logs for dmesg ...
	I0819 11:50:53.479847   19417 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 11:50:53.484351   19417 logs.go:123] Gathering logs for kube-apiserver [45f3dc60aedc] ...
	I0819 11:50:53.484358   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45f3dc60aedc"
	I0819 11:50:53.499066   19417 logs.go:123] Gathering logs for etcd [4122edd3dc51] ...
	I0819 11:50:53.499076   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4122edd3dc51"
	I0819 11:50:53.513547   19417 logs.go:123] Gathering logs for coredns [db6dffb8b3ea] ...
	I0819 11:50:53.513558   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db6dffb8b3ea"
	I0819 11:50:53.525078   19417 logs.go:123] Gathering logs for coredns [3d2d44fde489] ...
	I0819 11:50:53.525088   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3d2d44fde489"
	I0819 11:50:53.537082   19417 logs.go:123] Gathering logs for kube-scheduler [2cfe471024b0] ...
	I0819 11:50:53.537093   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2cfe471024b0"
	I0819 11:50:56.053984   19417 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:51:01.056231   19417 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:51:01.056364   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 11:51:01.077623   19417 logs.go:276] 1 containers: [45f3dc60aedc]
	I0819 11:51:01.077716   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 11:51:01.092072   19417 logs.go:276] 1 containers: [4122edd3dc51]
	I0819 11:51:01.092142   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 11:51:01.103710   19417 logs.go:276] 2 containers: [db6dffb8b3ea 3d2d44fde489]
	I0819 11:51:01.103781   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 11:51:01.119391   19417 logs.go:276] 1 containers: [2cfe471024b0]
	I0819 11:51:01.119468   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 11:51:01.130171   19417 logs.go:276] 1 containers: [26ea981cbd0c]
	I0819 11:51:01.130239   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 11:51:01.141863   19417 logs.go:276] 1 containers: [b3325243ca65]
	I0819 11:51:01.141927   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 11:51:01.152050   19417 logs.go:276] 0 containers: []
	W0819 11:51:01.152060   19417 logs.go:278] No container was found matching "kindnet"
	I0819 11:51:01.152113   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 11:51:01.168325   19417 logs.go:276] 1 containers: [74c6a87ae3c8]
	I0819 11:51:01.168340   19417 logs.go:123] Gathering logs for coredns [db6dffb8b3ea] ...
	I0819 11:51:01.168345   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db6dffb8b3ea"
	I0819 11:51:01.179989   19417 logs.go:123] Gathering logs for container status ...
	I0819 11:51:01.180000   19417 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 11:51:01.191885   19417 logs.go:123] Gathering logs for describe nodes ...
	I0819 11:51:01.191896   19417 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 11:51:01.227166   19417 logs.go:123] Gathering logs for dmesg ...
	I0819 11:51:01.227178   19417 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 11:51:01.231436   19417 logs.go:123] Gathering logs for kube-apiserver [45f3dc60aedc] ...
	I0819 11:51:01.231442   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45f3dc60aedc"
	I0819 11:51:01.246179   19417 logs.go:123] Gathering logs for etcd [4122edd3dc51] ...
	I0819 11:51:01.246192   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4122edd3dc51"
	I0819 11:51:01.268786   19417 logs.go:123] Gathering logs for coredns [3d2d44fde489] ...
	I0819 11:51:01.268799   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3d2d44fde489"
	I0819 11:51:01.280841   19417 logs.go:123] Gathering logs for kube-scheduler [2cfe471024b0] ...
	I0819 11:51:01.280854   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2cfe471024b0"
	I0819 11:51:01.295466   19417 logs.go:123] Gathering logs for kube-proxy [26ea981cbd0c] ...
	I0819 11:51:01.295476   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 26ea981cbd0c"
	I0819 11:51:01.307119   19417 logs.go:123] Gathering logs for kube-controller-manager [b3325243ca65] ...
	I0819 11:51:01.307130   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3325243ca65"
	I0819 11:51:01.324865   19417 logs.go:123] Gathering logs for kubelet ...
	I0819 11:51:01.324874   19417 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 11:51:01.362116   19417 logs.go:123] Gathering logs for Docker ...
	I0819 11:51:01.362128   19417 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 11:51:01.384875   19417 logs.go:123] Gathering logs for storage-provisioner [74c6a87ae3c8] ...
	I0819 11:51:01.384881   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74c6a87ae3c8"
	I0819 11:51:03.897917   19417 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:51:08.900080   19417 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:51:08.900187   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 11:51:08.910871   19417 logs.go:276] 1 containers: [45f3dc60aedc]
	I0819 11:51:08.910941   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 11:51:08.920939   19417 logs.go:276] 1 containers: [4122edd3dc51]
	I0819 11:51:08.921015   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 11:51:08.931334   19417 logs.go:276] 2 containers: [db6dffb8b3ea 3d2d44fde489]
	I0819 11:51:08.931405   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 11:51:08.941655   19417 logs.go:276] 1 containers: [2cfe471024b0]
	I0819 11:51:08.941720   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 11:51:08.951820   19417 logs.go:276] 1 containers: [26ea981cbd0c]
	I0819 11:51:08.951886   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 11:51:08.962204   19417 logs.go:276] 1 containers: [b3325243ca65]
	I0819 11:51:08.962264   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 11:51:08.971796   19417 logs.go:276] 0 containers: []
	W0819 11:51:08.971808   19417 logs.go:278] No container was found matching "kindnet"
	I0819 11:51:08.971867   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 11:51:08.982648   19417 logs.go:276] 1 containers: [74c6a87ae3c8]
	I0819 11:51:08.982662   19417 logs.go:123] Gathering logs for coredns [db6dffb8b3ea] ...
	I0819 11:51:08.982668   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db6dffb8b3ea"
	I0819 11:51:08.994036   19417 logs.go:123] Gathering logs for kube-scheduler [2cfe471024b0] ...
	I0819 11:51:08.994048   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2cfe471024b0"
	I0819 11:51:09.008865   19417 logs.go:123] Gathering logs for kube-controller-manager [b3325243ca65] ...
	I0819 11:51:09.008876   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3325243ca65"
	I0819 11:51:09.026452   19417 logs.go:123] Gathering logs for kubelet ...
	I0819 11:51:09.026464   19417 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 11:51:09.065316   19417 logs.go:123] Gathering logs for dmesg ...
	I0819 11:51:09.065329   19417 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 11:51:09.070336   19417 logs.go:123] Gathering logs for describe nodes ...
	I0819 11:51:09.070345   19417 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 11:51:09.105328   19417 logs.go:123] Gathering logs for kube-apiserver [45f3dc60aedc] ...
	I0819 11:51:09.105339   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45f3dc60aedc"
	I0819 11:51:09.124385   19417 logs.go:123] Gathering logs for etcd [4122edd3dc51] ...
	I0819 11:51:09.124395   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4122edd3dc51"
	I0819 11:51:09.138899   19417 logs.go:123] Gathering logs for storage-provisioner [74c6a87ae3c8] ...
	I0819 11:51:09.138912   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74c6a87ae3c8"
	I0819 11:51:09.150493   19417 logs.go:123] Gathering logs for Docker ...
	I0819 11:51:09.150504   19417 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 11:51:09.174014   19417 logs.go:123] Gathering logs for container status ...
	I0819 11:51:09.174026   19417 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 11:51:09.185390   19417 logs.go:123] Gathering logs for coredns [3d2d44fde489] ...
	I0819 11:51:09.185404   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3d2d44fde489"
	I0819 11:51:09.197240   19417 logs.go:123] Gathering logs for kube-proxy [26ea981cbd0c] ...
	I0819 11:51:09.197254   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 26ea981cbd0c"
	I0819 11:51:11.711698   19417 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:51:16.712929   19417 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:51:16.713055   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 11:51:16.725330   19417 logs.go:276] 1 containers: [45f3dc60aedc]
	I0819 11:51:16.725407   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 11:51:16.737375   19417 logs.go:276] 1 containers: [4122edd3dc51]
	I0819 11:51:16.737455   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 11:51:16.749350   19417 logs.go:276] 2 containers: [db6dffb8b3ea 3d2d44fde489]
	I0819 11:51:16.749419   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 11:51:16.760832   19417 logs.go:276] 1 containers: [2cfe471024b0]
	I0819 11:51:16.760900   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 11:51:16.773079   19417 logs.go:276] 1 containers: [26ea981cbd0c]
	I0819 11:51:16.773150   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 11:51:16.785782   19417 logs.go:276] 1 containers: [b3325243ca65]
	I0819 11:51:16.785855   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 11:51:16.800270   19417 logs.go:276] 0 containers: []
	W0819 11:51:16.800285   19417 logs.go:278] No container was found matching "kindnet"
	I0819 11:51:16.800349   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 11:51:16.813726   19417 logs.go:276] 1 containers: [74c6a87ae3c8]
	I0819 11:51:16.813742   19417 logs.go:123] Gathering logs for etcd [4122edd3dc51] ...
	I0819 11:51:16.813748   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4122edd3dc51"
	I0819 11:51:16.828725   19417 logs.go:123] Gathering logs for coredns [3d2d44fde489] ...
	I0819 11:51:16.828740   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3d2d44fde489"
	I0819 11:51:16.845483   19417 logs.go:123] Gathering logs for kube-proxy [26ea981cbd0c] ...
	I0819 11:51:16.845497   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 26ea981cbd0c"
	I0819 11:51:16.856966   19417 logs.go:123] Gathering logs for storage-provisioner [74c6a87ae3c8] ...
	I0819 11:51:16.856978   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74c6a87ae3c8"
	I0819 11:51:16.868484   19417 logs.go:123] Gathering logs for dmesg ...
	I0819 11:51:16.868499   19417 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 11:51:16.873226   19417 logs.go:123] Gathering logs for describe nodes ...
	I0819 11:51:16.874078   19417 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 11:51:16.908730   19417 logs.go:123] Gathering logs for kube-apiserver [45f3dc60aedc] ...
	I0819 11:51:16.908744   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45f3dc60aedc"
	I0819 11:51:16.923904   19417 logs.go:123] Gathering logs for coredns [db6dffb8b3ea] ...
	I0819 11:51:16.923917   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db6dffb8b3ea"
	I0819 11:51:16.935890   19417 logs.go:123] Gathering logs for kube-scheduler [2cfe471024b0] ...
	I0819 11:51:16.935905   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2cfe471024b0"
	I0819 11:51:16.950785   19417 logs.go:123] Gathering logs for kube-controller-manager [b3325243ca65] ...
	I0819 11:51:16.950796   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3325243ca65"
	I0819 11:51:16.969022   19417 logs.go:123] Gathering logs for Docker ...
	I0819 11:51:16.969032   19417 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 11:51:16.993085   19417 logs.go:123] Gathering logs for container status ...
	I0819 11:51:16.993102   19417 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 11:51:17.004979   19417 logs.go:123] Gathering logs for kubelet ...
	I0819 11:51:17.004993   19417 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 11:51:19.544176   19417 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:51:24.546434   19417 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:51:24.546519   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 11:51:24.557760   19417 logs.go:276] 1 containers: [45f3dc60aedc]
	I0819 11:51:24.557834   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 11:51:24.568996   19417 logs.go:276] 1 containers: [4122edd3dc51]
	I0819 11:51:24.569098   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 11:51:24.581301   19417 logs.go:276] 4 containers: [ffe7423d8fb8 eb0595d1949c db6dffb8b3ea 3d2d44fde489]
	I0819 11:51:24.581378   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 11:51:24.592929   19417 logs.go:276] 1 containers: [2cfe471024b0]
	I0819 11:51:24.592998   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 11:51:24.604935   19417 logs.go:276] 1 containers: [26ea981cbd0c]
	I0819 11:51:24.605007   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 11:51:24.616093   19417 logs.go:276] 1 containers: [b3325243ca65]
	I0819 11:51:24.616160   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 11:51:24.627277   19417 logs.go:276] 0 containers: []
	W0819 11:51:24.627289   19417 logs.go:278] No container was found matching "kindnet"
	I0819 11:51:24.627351   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 11:51:24.639092   19417 logs.go:276] 1 containers: [74c6a87ae3c8]
	I0819 11:51:24.639113   19417 logs.go:123] Gathering logs for etcd [4122edd3dc51] ...
	I0819 11:51:24.639119   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4122edd3dc51"
	I0819 11:51:24.654877   19417 logs.go:123] Gathering logs for coredns [ffe7423d8fb8] ...
	I0819 11:51:24.654886   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ffe7423d8fb8"
	I0819 11:51:24.669268   19417 logs.go:123] Gathering logs for coredns [eb0595d1949c] ...
	I0819 11:51:24.669280   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eb0595d1949c"
	I0819 11:51:24.681534   19417 logs.go:123] Gathering logs for coredns [3d2d44fde489] ...
	I0819 11:51:24.681546   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3d2d44fde489"
	I0819 11:51:24.700563   19417 logs.go:123] Gathering logs for kube-scheduler [2cfe471024b0] ...
	I0819 11:51:24.700576   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2cfe471024b0"
	I0819 11:51:24.716396   19417 logs.go:123] Gathering logs for Docker ...
	I0819 11:51:24.716406   19417 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 11:51:24.743172   19417 logs.go:123] Gathering logs for dmesg ...
	I0819 11:51:24.743181   19417 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 11:51:24.747557   19417 logs.go:123] Gathering logs for kube-apiserver [45f3dc60aedc] ...
	I0819 11:51:24.747564   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45f3dc60aedc"
	I0819 11:51:24.761657   19417 logs.go:123] Gathering logs for kube-controller-manager [b3325243ca65] ...
	I0819 11:51:24.761668   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3325243ca65"
	I0819 11:51:24.778778   19417 logs.go:123] Gathering logs for container status ...
	I0819 11:51:24.778789   19417 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 11:51:24.790182   19417 logs.go:123] Gathering logs for describe nodes ...
	I0819 11:51:24.790193   19417 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 11:51:24.825443   19417 logs.go:123] Gathering logs for coredns [db6dffb8b3ea] ...
	I0819 11:51:24.825453   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db6dffb8b3ea"
	I0819 11:51:24.837242   19417 logs.go:123] Gathering logs for storage-provisioner [74c6a87ae3c8] ...
	I0819 11:51:24.837255   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74c6a87ae3c8"
	I0819 11:51:24.848461   19417 logs.go:123] Gathering logs for kubelet ...
	I0819 11:51:24.848473   19417 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 11:51:24.888907   19417 logs.go:123] Gathering logs for kube-proxy [26ea981cbd0c] ...
	I0819 11:51:24.888925   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 26ea981cbd0c"
	I0819 11:51:27.403228   19417 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:51:32.405704   19417 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:51:32.405830   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 11:51:32.416998   19417 logs.go:276] 1 containers: [45f3dc60aedc]
	I0819 11:51:32.417063   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 11:51:32.431447   19417 logs.go:276] 1 containers: [4122edd3dc51]
	I0819 11:51:32.431514   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 11:51:32.449651   19417 logs.go:276] 4 containers: [ffe7423d8fb8 eb0595d1949c db6dffb8b3ea 3d2d44fde489]
	I0819 11:51:32.449725   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 11:51:32.461324   19417 logs.go:276] 1 containers: [2cfe471024b0]
	I0819 11:51:32.461443   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 11:51:32.472556   19417 logs.go:276] 1 containers: [26ea981cbd0c]
	I0819 11:51:32.472623   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 11:51:32.484992   19417 logs.go:276] 1 containers: [b3325243ca65]
	I0819 11:51:32.485062   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 11:51:32.496138   19417 logs.go:276] 0 containers: []
	W0819 11:51:32.496149   19417 logs.go:278] No container was found matching "kindnet"
	I0819 11:51:32.496210   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 11:51:32.508522   19417 logs.go:276] 1 containers: [74c6a87ae3c8]
	I0819 11:51:32.508541   19417 logs.go:123] Gathering logs for coredns [eb0595d1949c] ...
	I0819 11:51:32.508546   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eb0595d1949c"
	I0819 11:51:32.521709   19417 logs.go:123] Gathering logs for coredns [ffe7423d8fb8] ...
	I0819 11:51:32.521720   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ffe7423d8fb8"
	I0819 11:51:32.534020   19417 logs.go:123] Gathering logs for etcd [4122edd3dc51] ...
	I0819 11:51:32.534034   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4122edd3dc51"
	I0819 11:51:32.548513   19417 logs.go:123] Gathering logs for kubelet ...
	I0819 11:51:32.548524   19417 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 11:51:32.590258   19417 logs.go:123] Gathering logs for coredns [db6dffb8b3ea] ...
	I0819 11:51:32.590274   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db6dffb8b3ea"
	I0819 11:51:32.607250   19417 logs.go:123] Gathering logs for coredns [3d2d44fde489] ...
	I0819 11:51:32.607263   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3d2d44fde489"
	I0819 11:51:32.620108   19417 logs.go:123] Gathering logs for kube-scheduler [2cfe471024b0] ...
	I0819 11:51:32.620128   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2cfe471024b0"
	I0819 11:51:32.636185   19417 logs.go:123] Gathering logs for storage-provisioner [74c6a87ae3c8] ...
	I0819 11:51:32.636195   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74c6a87ae3c8"
	I0819 11:51:32.649663   19417 logs.go:123] Gathering logs for Docker ...
	I0819 11:51:32.649674   19417 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 11:51:32.673862   19417 logs.go:123] Gathering logs for kube-apiserver [45f3dc60aedc] ...
	I0819 11:51:32.673875   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45f3dc60aedc"
	I0819 11:51:32.689662   19417 logs.go:123] Gathering logs for describe nodes ...
	I0819 11:51:32.689672   19417 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 11:51:32.726526   19417 logs.go:123] Gathering logs for kube-proxy [26ea981cbd0c] ...
	I0819 11:51:32.726536   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 26ea981cbd0c"
	I0819 11:51:32.739033   19417 logs.go:123] Gathering logs for kube-controller-manager [b3325243ca65] ...
	I0819 11:51:32.739044   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3325243ca65"
	I0819 11:51:32.761074   19417 logs.go:123] Gathering logs for container status ...
	I0819 11:51:32.761088   19417 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 11:51:32.774173   19417 logs.go:123] Gathering logs for dmesg ...
	I0819 11:51:32.774185   19417 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 11:51:35.280740   19417 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:51:40.282877   19417 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:51:40.282968   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 11:51:40.294357   19417 logs.go:276] 1 containers: [45f3dc60aedc]
	I0819 11:51:40.294428   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 11:51:40.305715   19417 logs.go:276] 1 containers: [4122edd3dc51]
	I0819 11:51:40.305779   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 11:51:40.317445   19417 logs.go:276] 4 containers: [ffe7423d8fb8 eb0595d1949c db6dffb8b3ea 3d2d44fde489]
	I0819 11:51:40.317519   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 11:51:40.332740   19417 logs.go:276] 1 containers: [2cfe471024b0]
	I0819 11:51:40.332816   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 11:51:40.344367   19417 logs.go:276] 1 containers: [26ea981cbd0c]
	I0819 11:51:40.344437   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 11:51:40.355559   19417 logs.go:276] 1 containers: [b3325243ca65]
	I0819 11:51:40.355627   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 11:51:40.366437   19417 logs.go:276] 0 containers: []
	W0819 11:51:40.366449   19417 logs.go:278] No container was found matching "kindnet"
	I0819 11:51:40.366509   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 11:51:40.382283   19417 logs.go:276] 1 containers: [74c6a87ae3c8]
	I0819 11:51:40.382304   19417 logs.go:123] Gathering logs for kube-apiserver [45f3dc60aedc] ...
	I0819 11:51:40.382310   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45f3dc60aedc"
	I0819 11:51:40.397748   19417 logs.go:123] Gathering logs for kube-proxy [26ea981cbd0c] ...
	I0819 11:51:40.397760   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 26ea981cbd0c"
	I0819 11:51:40.411071   19417 logs.go:123] Gathering logs for kubelet ...
	I0819 11:51:40.411083   19417 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 11:51:40.450293   19417 logs.go:123] Gathering logs for describe nodes ...
	I0819 11:51:40.450307   19417 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 11:51:40.487134   19417 logs.go:123] Gathering logs for etcd [4122edd3dc51] ...
	I0819 11:51:40.487146   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4122edd3dc51"
	I0819 11:51:40.502579   19417 logs.go:123] Gathering logs for coredns [ffe7423d8fb8] ...
	I0819 11:51:40.502591   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ffe7423d8fb8"
	I0819 11:51:40.516666   19417 logs.go:123] Gathering logs for coredns [eb0595d1949c] ...
	I0819 11:51:40.516677   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eb0595d1949c"
	I0819 11:51:40.529672   19417 logs.go:123] Gathering logs for storage-provisioner [74c6a87ae3c8] ...
	I0819 11:51:40.529686   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74c6a87ae3c8"
	I0819 11:51:40.542346   19417 logs.go:123] Gathering logs for container status ...
	I0819 11:51:40.542357   19417 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 11:51:40.555985   19417 logs.go:123] Gathering logs for dmesg ...
	I0819 11:51:40.555997   19417 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 11:51:40.561652   19417 logs.go:123] Gathering logs for coredns [3d2d44fde489] ...
	I0819 11:51:40.561661   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3d2d44fde489"
	I0819 11:51:40.574524   19417 logs.go:123] Gathering logs for kube-controller-manager [b3325243ca65] ...
	I0819 11:51:40.574537   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3325243ca65"
	I0819 11:51:40.599238   19417 logs.go:123] Gathering logs for Docker ...
	I0819 11:51:40.599250   19417 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 11:51:40.624261   19417 logs.go:123] Gathering logs for coredns [db6dffb8b3ea] ...
	I0819 11:51:40.624272   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db6dffb8b3ea"
	I0819 11:51:40.640002   19417 logs.go:123] Gathering logs for kube-scheduler [2cfe471024b0] ...
	I0819 11:51:40.640013   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2cfe471024b0"
	I0819 11:51:43.157874   19417 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:51:48.159944   19417 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:51:48.160087   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 11:51:48.176243   19417 logs.go:276] 1 containers: [45f3dc60aedc]
	I0819 11:51:48.176325   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 11:51:48.190781   19417 logs.go:276] 1 containers: [4122edd3dc51]
	I0819 11:51:48.190856   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 11:51:48.202281   19417 logs.go:276] 4 containers: [ffe7423d8fb8 eb0595d1949c db6dffb8b3ea 3d2d44fde489]
	I0819 11:51:48.202353   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 11:51:48.213849   19417 logs.go:276] 1 containers: [2cfe471024b0]
	I0819 11:51:48.213920   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 11:51:48.225224   19417 logs.go:276] 1 containers: [26ea981cbd0c]
	I0819 11:51:48.225296   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 11:51:48.236609   19417 logs.go:276] 1 containers: [b3325243ca65]
	I0819 11:51:48.236679   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 11:51:48.247967   19417 logs.go:276] 0 containers: []
	W0819 11:51:48.247979   19417 logs.go:278] No container was found matching "kindnet"
	I0819 11:51:48.248035   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 11:51:48.259501   19417 logs.go:276] 1 containers: [74c6a87ae3c8]
	I0819 11:51:48.259521   19417 logs.go:123] Gathering logs for kube-controller-manager [b3325243ca65] ...
	I0819 11:51:48.259527   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3325243ca65"
	I0819 11:51:48.278429   19417 logs.go:123] Gathering logs for storage-provisioner [74c6a87ae3c8] ...
	I0819 11:51:48.278444   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74c6a87ae3c8"
	I0819 11:51:48.291479   19417 logs.go:123] Gathering logs for kube-apiserver [45f3dc60aedc] ...
	I0819 11:51:48.291492   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45f3dc60aedc"
	I0819 11:51:48.312126   19417 logs.go:123] Gathering logs for coredns [ffe7423d8fb8] ...
	I0819 11:51:48.312136   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ffe7423d8fb8"
	I0819 11:51:48.324909   19417 logs.go:123] Gathering logs for coredns [eb0595d1949c] ...
	I0819 11:51:48.324921   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eb0595d1949c"
	I0819 11:51:48.337514   19417 logs.go:123] Gathering logs for coredns [db6dffb8b3ea] ...
	I0819 11:51:48.337525   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db6dffb8b3ea"
	I0819 11:51:48.351718   19417 logs.go:123] Gathering logs for coredns [3d2d44fde489] ...
	I0819 11:51:48.351729   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3d2d44fde489"
	I0819 11:51:48.364174   19417 logs.go:123] Gathering logs for kube-scheduler [2cfe471024b0] ...
	I0819 11:51:48.364185   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2cfe471024b0"
	I0819 11:51:48.381238   19417 logs.go:123] Gathering logs for kube-proxy [26ea981cbd0c] ...
	I0819 11:51:48.381250   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 26ea981cbd0c"
	I0819 11:51:48.394840   19417 logs.go:123] Gathering logs for kubelet ...
	I0819 11:51:48.394850   19417 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 11:51:48.433356   19417 logs.go:123] Gathering logs for Docker ...
	I0819 11:51:48.433375   19417 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 11:51:48.458227   19417 logs.go:123] Gathering logs for describe nodes ...
	I0819 11:51:48.458248   19417 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 11:51:48.495401   19417 logs.go:123] Gathering logs for etcd [4122edd3dc51] ...
	I0819 11:51:48.495412   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4122edd3dc51"
	I0819 11:51:48.510979   19417 logs.go:123] Gathering logs for container status ...
	I0819 11:51:48.510990   19417 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 11:51:48.524773   19417 logs.go:123] Gathering logs for dmesg ...
	I0819 11:51:48.524785   19417 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 11:51:51.032136   19417 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:51:56.034913   19417 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:51:56.035342   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 11:51:56.073346   19417 logs.go:276] 1 containers: [45f3dc60aedc]
	I0819 11:51:56.073474   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 11:51:56.093841   19417 logs.go:276] 1 containers: [4122edd3dc51]
	I0819 11:51:56.093909   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 11:51:56.109987   19417 logs.go:276] 4 containers: [ffe7423d8fb8 eb0595d1949c db6dffb8b3ea 3d2d44fde489]
	I0819 11:51:56.110052   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 11:51:56.124853   19417 logs.go:276] 1 containers: [2cfe471024b0]
	I0819 11:51:56.124924   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 11:51:56.142197   19417 logs.go:276] 1 containers: [26ea981cbd0c]
	I0819 11:51:56.142268   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 11:51:56.155546   19417 logs.go:276] 1 containers: [b3325243ca65]
	I0819 11:51:56.155611   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 11:51:56.176819   19417 logs.go:276] 0 containers: []
	W0819 11:51:56.176830   19417 logs.go:278] No container was found matching "kindnet"
	I0819 11:51:56.176888   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 11:51:56.188266   19417 logs.go:276] 1 containers: [74c6a87ae3c8]
	I0819 11:51:56.188283   19417 logs.go:123] Gathering logs for coredns [ffe7423d8fb8] ...
	I0819 11:51:56.188288   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ffe7423d8fb8"
	I0819 11:51:56.201699   19417 logs.go:123] Gathering logs for coredns [3d2d44fde489] ...
	I0819 11:51:56.201709   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3d2d44fde489"
	I0819 11:51:56.214028   19417 logs.go:123] Gathering logs for kube-scheduler [2cfe471024b0] ...
	I0819 11:51:56.214038   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2cfe471024b0"
	I0819 11:51:56.229353   19417 logs.go:123] Gathering logs for kube-proxy [26ea981cbd0c] ...
	I0819 11:51:56.229365   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 26ea981cbd0c"
	I0819 11:51:56.241987   19417 logs.go:123] Gathering logs for dmesg ...
	I0819 11:51:56.241997   19417 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 11:51:56.246765   19417 logs.go:123] Gathering logs for describe nodes ...
	I0819 11:51:56.246775   19417 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 11:51:56.286485   19417 logs.go:123] Gathering logs for etcd [4122edd3dc51] ...
	I0819 11:51:56.286498   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4122edd3dc51"
	I0819 11:51:56.302070   19417 logs.go:123] Gathering logs for Docker ...
	I0819 11:51:56.302081   19417 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 11:51:56.327269   19417 logs.go:123] Gathering logs for storage-provisioner [74c6a87ae3c8] ...
	I0819 11:51:56.327292   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74c6a87ae3c8"
	I0819 11:51:56.340684   19417 logs.go:123] Gathering logs for container status ...
	I0819 11:51:56.340698   19417 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 11:51:56.354771   19417 logs.go:123] Gathering logs for kubelet ...
	I0819 11:51:56.354785   19417 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 11:51:56.395607   19417 logs.go:123] Gathering logs for kube-apiserver [45f3dc60aedc] ...
	I0819 11:51:56.395623   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45f3dc60aedc"
	I0819 11:51:56.411426   19417 logs.go:123] Gathering logs for coredns [eb0595d1949c] ...
	I0819 11:51:56.411438   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eb0595d1949c"
	I0819 11:51:56.424708   19417 logs.go:123] Gathering logs for coredns [db6dffb8b3ea] ...
	I0819 11:51:56.424723   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db6dffb8b3ea"
	I0819 11:51:56.437710   19417 logs.go:123] Gathering logs for kube-controller-manager [b3325243ca65] ...
	I0819 11:51:56.437720   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3325243ca65"
	I0819 11:51:58.963353   19417 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:52:03.965524   19417 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:52:03.965686   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 11:52:03.980277   19417 logs.go:276] 1 containers: [45f3dc60aedc]
	I0819 11:52:03.980356   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 11:52:03.991262   19417 logs.go:276] 1 containers: [4122edd3dc51]
	I0819 11:52:03.991335   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 11:52:04.002211   19417 logs.go:276] 4 containers: [ffe7423d8fb8 eb0595d1949c db6dffb8b3ea 3d2d44fde489]
	I0819 11:52:04.002284   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 11:52:04.012661   19417 logs.go:276] 1 containers: [2cfe471024b0]
	I0819 11:52:04.012726   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 11:52:04.023302   19417 logs.go:276] 1 containers: [26ea981cbd0c]
	I0819 11:52:04.023369   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 11:52:04.033614   19417 logs.go:276] 1 containers: [b3325243ca65]
	I0819 11:52:04.033644   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 11:52:04.044196   19417 logs.go:276] 0 containers: []
	W0819 11:52:04.044209   19417 logs.go:278] No container was found matching "kindnet"
	I0819 11:52:04.044271   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 11:52:04.055938   19417 logs.go:276] 1 containers: [74c6a87ae3c8]
	I0819 11:52:04.055955   19417 logs.go:123] Gathering logs for coredns [eb0595d1949c] ...
	I0819 11:52:04.055960   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eb0595d1949c"
	I0819 11:52:04.068344   19417 logs.go:123] Gathering logs for storage-provisioner [74c6a87ae3c8] ...
	I0819 11:52:04.068356   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74c6a87ae3c8"
	I0819 11:52:04.081341   19417 logs.go:123] Gathering logs for container status ...
	I0819 11:52:04.081353   19417 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 11:52:04.097396   19417 logs.go:123] Gathering logs for kubelet ...
	I0819 11:52:04.097407   19417 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 11:52:04.138236   19417 logs.go:123] Gathering logs for kube-scheduler [2cfe471024b0] ...
	I0819 11:52:04.138247   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2cfe471024b0"
	I0819 11:52:04.154141   19417 logs.go:123] Gathering logs for kube-apiserver [45f3dc60aedc] ...
	I0819 11:52:04.154154   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45f3dc60aedc"
	I0819 11:52:04.169829   19417 logs.go:123] Gathering logs for etcd [4122edd3dc51] ...
	I0819 11:52:04.169838   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4122edd3dc51"
	I0819 11:52:04.188200   19417 logs.go:123] Gathering logs for kube-proxy [26ea981cbd0c] ...
	I0819 11:52:04.188212   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 26ea981cbd0c"
	I0819 11:52:04.200894   19417 logs.go:123] Gathering logs for Docker ...
	I0819 11:52:04.200904   19417 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 11:52:04.227011   19417 logs.go:123] Gathering logs for dmesg ...
	I0819 11:52:04.227021   19417 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 11:52:04.232386   19417 logs.go:123] Gathering logs for describe nodes ...
	I0819 11:52:04.232393   19417 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 11:52:04.270538   19417 logs.go:123] Gathering logs for coredns [3d2d44fde489] ...
	I0819 11:52:04.270549   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3d2d44fde489"
	I0819 11:52:04.283538   19417 logs.go:123] Gathering logs for kube-controller-manager [b3325243ca65] ...
	I0819 11:52:04.283549   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3325243ca65"
	I0819 11:52:04.301841   19417 logs.go:123] Gathering logs for coredns [ffe7423d8fb8] ...
	I0819 11:52:04.301860   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ffe7423d8fb8"
	I0819 11:52:04.316222   19417 logs.go:123] Gathering logs for coredns [db6dffb8b3ea] ...
	I0819 11:52:04.316234   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db6dffb8b3ea"
	I0819 11:52:06.832471   19417 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:52:11.834822   19417 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:52:11.835185   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 11:52:11.866757   19417 logs.go:276] 1 containers: [45f3dc60aedc]
	I0819 11:52:11.866883   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 11:52:11.885497   19417 logs.go:276] 1 containers: [4122edd3dc51]
	I0819 11:52:11.885586   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 11:52:11.903512   19417 logs.go:276] 4 containers: [ffe7423d8fb8 eb0595d1949c db6dffb8b3ea 3d2d44fde489]
	I0819 11:52:11.903594   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 11:52:11.915091   19417 logs.go:276] 1 containers: [2cfe471024b0]
	I0819 11:52:11.915159   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 11:52:11.925317   19417 logs.go:276] 1 containers: [26ea981cbd0c]
	I0819 11:52:11.925387   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 11:52:11.935609   19417 logs.go:276] 1 containers: [b3325243ca65]
	I0819 11:52:11.935672   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 11:52:11.945602   19417 logs.go:276] 0 containers: []
	W0819 11:52:11.945610   19417 logs.go:278] No container was found matching "kindnet"
	I0819 11:52:11.945662   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 11:52:11.957441   19417 logs.go:276] 1 containers: [74c6a87ae3c8]
	I0819 11:52:11.957460   19417 logs.go:123] Gathering logs for kubelet ...
	I0819 11:52:11.957464   19417 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 11:52:11.998589   19417 logs.go:123] Gathering logs for coredns [ffe7423d8fb8] ...
	I0819 11:52:11.998606   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ffe7423d8fb8"
	I0819 11:52:12.011435   19417 logs.go:123] Gathering logs for coredns [3d2d44fde489] ...
	I0819 11:52:12.011446   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3d2d44fde489"
	I0819 11:52:12.024588   19417 logs.go:123] Gathering logs for kube-proxy [26ea981cbd0c] ...
	I0819 11:52:12.024599   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 26ea981cbd0c"
	I0819 11:52:12.037140   19417 logs.go:123] Gathering logs for container status ...
	I0819 11:52:12.037152   19417 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 11:52:12.049234   19417 logs.go:123] Gathering logs for kube-apiserver [45f3dc60aedc] ...
	I0819 11:52:12.049246   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45f3dc60aedc"
	I0819 11:52:12.064229   19417 logs.go:123] Gathering logs for coredns [eb0595d1949c] ...
	I0819 11:52:12.064242   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eb0595d1949c"
	I0819 11:52:12.076209   19417 logs.go:123] Gathering logs for coredns [db6dffb8b3ea] ...
	I0819 11:52:12.076223   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db6dffb8b3ea"
	I0819 11:52:12.089011   19417 logs.go:123] Gathering logs for kube-controller-manager [b3325243ca65] ...
	I0819 11:52:12.089025   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3325243ca65"
	I0819 11:52:12.107678   19417 logs.go:123] Gathering logs for Docker ...
	I0819 11:52:12.107696   19417 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 11:52:12.136037   19417 logs.go:123] Gathering logs for dmesg ...
	I0819 11:52:12.136053   19417 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 11:52:12.141541   19417 logs.go:123] Gathering logs for describe nodes ...
	I0819 11:52:12.141560   19417 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 11:52:12.182991   19417 logs.go:123] Gathering logs for etcd [4122edd3dc51] ...
	I0819 11:52:12.183006   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4122edd3dc51"
	I0819 11:52:12.197637   19417 logs.go:123] Gathering logs for kube-scheduler [2cfe471024b0] ...
	I0819 11:52:12.197647   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2cfe471024b0"
	I0819 11:52:12.214664   19417 logs.go:123] Gathering logs for storage-provisioner [74c6a87ae3c8] ...
	I0819 11:52:12.214676   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74c6a87ae3c8"
	I0819 11:52:14.735586   19417 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:52:19.737688   19417 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:52:19.737795   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 11:52:19.748716   19417 logs.go:276] 1 containers: [45f3dc60aedc]
	I0819 11:52:19.748788   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 11:52:19.760373   19417 logs.go:276] 1 containers: [4122edd3dc51]
	I0819 11:52:19.760482   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 11:52:19.771775   19417 logs.go:276] 4 containers: [ffe7423d8fb8 eb0595d1949c db6dffb8b3ea 3d2d44fde489]
	I0819 11:52:19.771845   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 11:52:19.783919   19417 logs.go:276] 1 containers: [2cfe471024b0]
	I0819 11:52:19.783985   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 11:52:19.797740   19417 logs.go:276] 1 containers: [26ea981cbd0c]
	I0819 11:52:19.797810   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 11:52:19.808744   19417 logs.go:276] 1 containers: [b3325243ca65]
	I0819 11:52:19.808813   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 11:52:19.819651   19417 logs.go:276] 0 containers: []
	W0819 11:52:19.819663   19417 logs.go:278] No container was found matching "kindnet"
	I0819 11:52:19.819724   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 11:52:19.834409   19417 logs.go:276] 1 containers: [74c6a87ae3c8]
	I0819 11:52:19.834426   19417 logs.go:123] Gathering logs for kube-apiserver [45f3dc60aedc] ...
	I0819 11:52:19.834433   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45f3dc60aedc"
	I0819 11:52:19.849747   19417 logs.go:123] Gathering logs for kube-scheduler [2cfe471024b0] ...
	I0819 11:52:19.849760   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2cfe471024b0"
	I0819 11:52:19.865270   19417 logs.go:123] Gathering logs for kubelet ...
	I0819 11:52:19.865283   19417 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 11:52:19.904142   19417 logs.go:123] Gathering logs for etcd [4122edd3dc51] ...
	I0819 11:52:19.904159   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4122edd3dc51"
	I0819 11:52:19.920381   19417 logs.go:123] Gathering logs for coredns [eb0595d1949c] ...
	I0819 11:52:19.920397   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eb0595d1949c"
	I0819 11:52:19.937104   19417 logs.go:123] Gathering logs for coredns [3d2d44fde489] ...
	I0819 11:52:19.937115   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3d2d44fde489"
	I0819 11:52:19.949283   19417 logs.go:123] Gathering logs for Docker ...
	I0819 11:52:19.949295   19417 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 11:52:19.977941   19417 logs.go:123] Gathering logs for coredns [db6dffb8b3ea] ...
	I0819 11:52:19.977961   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db6dffb8b3ea"
	I0819 11:52:19.991540   19417 logs.go:123] Gathering logs for container status ...
	I0819 11:52:19.991552   19417 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 11:52:20.006022   19417 logs.go:123] Gathering logs for dmesg ...
	I0819 11:52:20.006037   19417 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 11:52:20.011591   19417 logs.go:123] Gathering logs for describe nodes ...
	I0819 11:52:20.011605   19417 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 11:52:20.051318   19417 logs.go:123] Gathering logs for coredns [ffe7423d8fb8] ...
	I0819 11:52:20.051332   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ffe7423d8fb8"
	I0819 11:52:20.063511   19417 logs.go:123] Gathering logs for kube-proxy [26ea981cbd0c] ...
	I0819 11:52:20.063522   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 26ea981cbd0c"
	I0819 11:52:20.076235   19417 logs.go:123] Gathering logs for kube-controller-manager [b3325243ca65] ...
	I0819 11:52:20.076249   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3325243ca65"
	I0819 11:52:20.093940   19417 logs.go:123] Gathering logs for storage-provisioner [74c6a87ae3c8] ...
	I0819 11:52:20.093951   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74c6a87ae3c8"
	I0819 11:52:22.607338   19417 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:52:27.609670   19417 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:52:27.609836   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 11:52:27.628941   19417 logs.go:276] 1 containers: [45f3dc60aedc]
	I0819 11:52:27.629049   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 11:52:27.644455   19417 logs.go:276] 1 containers: [4122edd3dc51]
	I0819 11:52:27.644548   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 11:52:27.660129   19417 logs.go:276] 4 containers: [ffe7423d8fb8 eb0595d1949c db6dffb8b3ea 3d2d44fde489]
	I0819 11:52:27.660200   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 11:52:27.670390   19417 logs.go:276] 1 containers: [2cfe471024b0]
	I0819 11:52:27.670481   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 11:52:27.680946   19417 logs.go:276] 1 containers: [26ea981cbd0c]
	I0819 11:52:27.681020   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 11:52:27.691844   19417 logs.go:276] 1 containers: [b3325243ca65]
	I0819 11:52:27.691912   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 11:52:27.709365   19417 logs.go:276] 0 containers: []
	W0819 11:52:27.709376   19417 logs.go:278] No container was found matching "kindnet"
	I0819 11:52:27.709432   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 11:52:27.720612   19417 logs.go:276] 1 containers: [74c6a87ae3c8]
	I0819 11:52:27.720631   19417 logs.go:123] Gathering logs for kube-controller-manager [b3325243ca65] ...
	I0819 11:52:27.720637   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3325243ca65"
	I0819 11:52:27.737885   19417 logs.go:123] Gathering logs for container status ...
	I0819 11:52:27.737896   19417 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 11:52:27.749904   19417 logs.go:123] Gathering logs for kube-apiserver [45f3dc60aedc] ...
	I0819 11:52:27.749914   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45f3dc60aedc"
	I0819 11:52:27.765098   19417 logs.go:123] Gathering logs for kube-proxy [26ea981cbd0c] ...
	I0819 11:52:27.765109   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 26ea981cbd0c"
	I0819 11:52:27.777503   19417 logs.go:123] Gathering logs for describe nodes ...
	I0819 11:52:27.777513   19417 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 11:52:27.816222   19417 logs.go:123] Gathering logs for coredns [ffe7423d8fb8] ...
	I0819 11:52:27.816234   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ffe7423d8fb8"
	I0819 11:52:27.828523   19417 logs.go:123] Gathering logs for coredns [db6dffb8b3ea] ...
	I0819 11:52:27.828536   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db6dffb8b3ea"
	I0819 11:52:27.843437   19417 logs.go:123] Gathering logs for storage-provisioner [74c6a87ae3c8] ...
	I0819 11:52:27.843448   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74c6a87ae3c8"
	I0819 11:52:27.855653   19417 logs.go:123] Gathering logs for etcd [4122edd3dc51] ...
	I0819 11:52:27.855664   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4122edd3dc51"
	I0819 11:52:27.870031   19417 logs.go:123] Gathering logs for coredns [eb0595d1949c] ...
	I0819 11:52:27.870046   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eb0595d1949c"
	I0819 11:52:27.881953   19417 logs.go:123] Gathering logs for coredns [3d2d44fde489] ...
	I0819 11:52:27.881964   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3d2d44fde489"
	I0819 11:52:27.894547   19417 logs.go:123] Gathering logs for kube-scheduler [2cfe471024b0] ...
	I0819 11:52:27.894557   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2cfe471024b0"
	I0819 11:52:27.909209   19417 logs.go:123] Gathering logs for Docker ...
	I0819 11:52:27.909219   19417 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 11:52:27.931858   19417 logs.go:123] Gathering logs for kubelet ...
	I0819 11:52:27.931867   19417 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 11:52:27.969308   19417 logs.go:123] Gathering logs for dmesg ...
	I0819 11:52:27.969321   19417 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 11:52:30.475601   19417 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:52:35.477746   19417 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:52:35.477858   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 11:52:35.489751   19417 logs.go:276] 1 containers: [45f3dc60aedc]
	I0819 11:52:35.489829   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 11:52:35.500344   19417 logs.go:276] 1 containers: [4122edd3dc51]
	I0819 11:52:35.500411   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 11:52:35.511092   19417 logs.go:276] 4 containers: [ffe7423d8fb8 eb0595d1949c db6dffb8b3ea 3d2d44fde489]
	I0819 11:52:35.511158   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 11:52:35.524620   19417 logs.go:276] 1 containers: [2cfe471024b0]
	I0819 11:52:35.524679   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 11:52:35.535381   19417 logs.go:276] 1 containers: [26ea981cbd0c]
	I0819 11:52:35.535452   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 11:52:35.548159   19417 logs.go:276] 1 containers: [b3325243ca65]
	I0819 11:52:35.548232   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 11:52:35.558720   19417 logs.go:276] 0 containers: []
	W0819 11:52:35.558731   19417 logs.go:278] No container was found matching "kindnet"
	I0819 11:52:35.558791   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 11:52:35.571181   19417 logs.go:276] 1 containers: [74c6a87ae3c8]
	I0819 11:52:35.571198   19417 logs.go:123] Gathering logs for coredns [ffe7423d8fb8] ...
	I0819 11:52:35.571204   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ffe7423d8fb8"
	I0819 11:52:35.583090   19417 logs.go:123] Gathering logs for storage-provisioner [74c6a87ae3c8] ...
	I0819 11:52:35.583101   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74c6a87ae3c8"
	I0819 11:52:35.594871   19417 logs.go:123] Gathering logs for container status ...
	I0819 11:52:35.594882   19417 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 11:52:35.606776   19417 logs.go:123] Gathering logs for describe nodes ...
	I0819 11:52:35.606788   19417 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 11:52:35.642523   19417 logs.go:123] Gathering logs for kube-apiserver [45f3dc60aedc] ...
	I0819 11:52:35.642536   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45f3dc60aedc"
	I0819 11:52:35.656989   19417 logs.go:123] Gathering logs for etcd [4122edd3dc51] ...
	I0819 11:52:35.657001   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4122edd3dc51"
	I0819 11:52:35.671245   19417 logs.go:123] Gathering logs for Docker ...
	I0819 11:52:35.671259   19417 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 11:52:35.696314   19417 logs.go:123] Gathering logs for dmesg ...
	I0819 11:52:35.696322   19417 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 11:52:35.700644   19417 logs.go:123] Gathering logs for kube-proxy [26ea981cbd0c] ...
	I0819 11:52:35.700652   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 26ea981cbd0c"
	I0819 11:52:35.712890   19417 logs.go:123] Gathering logs for kube-controller-manager [b3325243ca65] ...
	I0819 11:52:35.712900   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3325243ca65"
	I0819 11:52:35.731990   19417 logs.go:123] Gathering logs for coredns [3d2d44fde489] ...
	I0819 11:52:35.732002   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3d2d44fde489"
	I0819 11:52:35.743777   19417 logs.go:123] Gathering logs for kube-scheduler [2cfe471024b0] ...
	I0819 11:52:35.743788   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2cfe471024b0"
	I0819 11:52:35.759138   19417 logs.go:123] Gathering logs for kubelet ...
	I0819 11:52:35.759154   19417 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 11:52:35.799153   19417 logs.go:123] Gathering logs for coredns [eb0595d1949c] ...
	I0819 11:52:35.799160   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eb0595d1949c"
	I0819 11:52:35.811243   19417 logs.go:123] Gathering logs for coredns [db6dffb8b3ea] ...
	I0819 11:52:35.811254   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db6dffb8b3ea"
	I0819 11:52:38.325444   19417 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:52:43.327690   19417 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:52:43.327952   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 11:52:43.352450   19417 logs.go:276] 1 containers: [45f3dc60aedc]
	I0819 11:52:43.352566   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 11:52:43.368581   19417 logs.go:276] 1 containers: [4122edd3dc51]
	I0819 11:52:43.368663   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 11:52:43.381891   19417 logs.go:276] 4 containers: [ffe7423d8fb8 eb0595d1949c db6dffb8b3ea 3d2d44fde489]
	I0819 11:52:43.381960   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 11:52:43.397452   19417 logs.go:276] 1 containers: [2cfe471024b0]
	I0819 11:52:43.397531   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 11:52:43.421839   19417 logs.go:276] 1 containers: [26ea981cbd0c]
	I0819 11:52:43.421927   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 11:52:43.444123   19417 logs.go:276] 1 containers: [b3325243ca65]
	I0819 11:52:43.444198   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 11:52:43.459851   19417 logs.go:276] 0 containers: []
	W0819 11:52:43.459867   19417 logs.go:278] No container was found matching "kindnet"
	I0819 11:52:43.459937   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 11:52:43.474680   19417 logs.go:276] 1 containers: [74c6a87ae3c8]
	I0819 11:52:43.474700   19417 logs.go:123] Gathering logs for coredns [ffe7423d8fb8] ...
	I0819 11:52:43.474705   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ffe7423d8fb8"
	I0819 11:52:43.488344   19417 logs.go:123] Gathering logs for coredns [eb0595d1949c] ...
	I0819 11:52:43.488356   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eb0595d1949c"
	I0819 11:52:43.500944   19417 logs.go:123] Gathering logs for coredns [db6dffb8b3ea] ...
	I0819 11:52:43.500958   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db6dffb8b3ea"
	I0819 11:52:43.513273   19417 logs.go:123] Gathering logs for dmesg ...
	I0819 11:52:43.513285   19417 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 11:52:43.518064   19417 logs.go:123] Gathering logs for describe nodes ...
	I0819 11:52:43.518073   19417 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 11:52:43.551989   19417 logs.go:123] Gathering logs for etcd [4122edd3dc51] ...
	I0819 11:52:43.552005   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4122edd3dc51"
	I0819 11:52:43.566375   19417 logs.go:123] Gathering logs for kube-apiserver [45f3dc60aedc] ...
	I0819 11:52:43.566386   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45f3dc60aedc"
	I0819 11:52:43.582584   19417 logs.go:123] Gathering logs for kube-proxy [26ea981cbd0c] ...
	I0819 11:52:43.582595   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 26ea981cbd0c"
	I0819 11:52:43.598319   19417 logs.go:123] Gathering logs for coredns [3d2d44fde489] ...
	I0819 11:52:43.598333   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3d2d44fde489"
	I0819 11:52:43.613670   19417 logs.go:123] Gathering logs for kube-controller-manager [b3325243ca65] ...
	I0819 11:52:43.613690   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3325243ca65"
	I0819 11:52:43.635155   19417 logs.go:123] Gathering logs for Docker ...
	I0819 11:52:43.635170   19417 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 11:52:43.659928   19417 logs.go:123] Gathering logs for container status ...
	I0819 11:52:43.659941   19417 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 11:52:43.671552   19417 logs.go:123] Gathering logs for kubelet ...
	I0819 11:52:43.671567   19417 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 11:52:43.710898   19417 logs.go:123] Gathering logs for kube-scheduler [2cfe471024b0] ...
	I0819 11:52:43.710912   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2cfe471024b0"
	I0819 11:52:43.726534   19417 logs.go:123] Gathering logs for storage-provisioner [74c6a87ae3c8] ...
	I0819 11:52:43.726547   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74c6a87ae3c8"
	I0819 11:52:46.239579   19417 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:52:51.241952   19417 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:52:51.242332   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 11:52:51.279602   19417 logs.go:276] 1 containers: [45f3dc60aedc]
	I0819 11:52:51.279733   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 11:52:51.299227   19417 logs.go:276] 1 containers: [4122edd3dc51]
	I0819 11:52:51.299321   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 11:52:51.313946   19417 logs.go:276] 4 containers: [ffe7423d8fb8 eb0595d1949c db6dffb8b3ea 3d2d44fde489]
	I0819 11:52:51.314030   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 11:52:51.326446   19417 logs.go:276] 1 containers: [2cfe471024b0]
	I0819 11:52:51.326521   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 11:52:51.339398   19417 logs.go:276] 1 containers: [26ea981cbd0c]
	I0819 11:52:51.339464   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 11:52:51.350085   19417 logs.go:276] 1 containers: [b3325243ca65]
	I0819 11:52:51.350151   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 11:52:51.360860   19417 logs.go:276] 0 containers: []
	W0819 11:52:51.360873   19417 logs.go:278] No container was found matching "kindnet"
	I0819 11:52:51.360929   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 11:52:51.377680   19417 logs.go:276] 1 containers: [74c6a87ae3c8]
	I0819 11:52:51.377695   19417 logs.go:123] Gathering logs for dmesg ...
	I0819 11:52:51.377700   19417 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 11:52:51.382253   19417 logs.go:123] Gathering logs for container status ...
	I0819 11:52:51.382260   19417 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 11:52:51.394269   19417 logs.go:123] Gathering logs for coredns [3d2d44fde489] ...
	I0819 11:52:51.394281   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3d2d44fde489"
	I0819 11:52:51.407085   19417 logs.go:123] Gathering logs for kube-controller-manager [b3325243ca65] ...
	I0819 11:52:51.407099   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3325243ca65"
	I0819 11:52:51.435343   19417 logs.go:123] Gathering logs for Docker ...
	I0819 11:52:51.435352   19417 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 11:52:51.461499   19417 logs.go:123] Gathering logs for describe nodes ...
	I0819 11:52:51.461511   19417 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 11:52:51.498072   19417 logs.go:123] Gathering logs for kube-apiserver [45f3dc60aedc] ...
	I0819 11:52:51.498083   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45f3dc60aedc"
	I0819 11:52:51.513286   19417 logs.go:123] Gathering logs for coredns [ffe7423d8fb8] ...
	I0819 11:52:51.513299   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ffe7423d8fb8"
	I0819 11:52:51.525488   19417 logs.go:123] Gathering logs for kube-scheduler [2cfe471024b0] ...
	I0819 11:52:51.525499   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2cfe471024b0"
	I0819 11:52:51.540752   19417 logs.go:123] Gathering logs for kube-proxy [26ea981cbd0c] ...
	I0819 11:52:51.540763   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 26ea981cbd0c"
	I0819 11:52:51.552762   19417 logs.go:123] Gathering logs for kubelet ...
	I0819 11:52:51.552775   19417 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 11:52:51.589913   19417 logs.go:123] Gathering logs for etcd [4122edd3dc51] ...
	I0819 11:52:51.589923   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4122edd3dc51"
	I0819 11:52:51.603848   19417 logs.go:123] Gathering logs for coredns [eb0595d1949c] ...
	I0819 11:52:51.603859   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eb0595d1949c"
	I0819 11:52:51.616039   19417 logs.go:123] Gathering logs for coredns [db6dffb8b3ea] ...
	I0819 11:52:51.616052   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db6dffb8b3ea"
	I0819 11:52:51.627651   19417 logs.go:123] Gathering logs for storage-provisioner [74c6a87ae3c8] ...
	I0819 11:52:51.627663   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74c6a87ae3c8"
	I0819 11:52:54.139292   19417 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:52:59.139607   19417 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:52:59.139824   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 11:52:59.162135   19417 logs.go:276] 1 containers: [45f3dc60aedc]
	I0819 11:52:59.162248   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 11:52:59.176620   19417 logs.go:276] 1 containers: [4122edd3dc51]
	I0819 11:52:59.176692   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 11:52:59.188616   19417 logs.go:276] 4 containers: [ffe7423d8fb8 eb0595d1949c db6dffb8b3ea 3d2d44fde489]
	I0819 11:52:59.188693   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 11:52:59.203579   19417 logs.go:276] 1 containers: [2cfe471024b0]
	I0819 11:52:59.203638   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 11:52:59.213847   19417 logs.go:276] 1 containers: [26ea981cbd0c]
	I0819 11:52:59.213917   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 11:52:59.224365   19417 logs.go:276] 1 containers: [b3325243ca65]
	I0819 11:52:59.224441   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 11:52:59.237139   19417 logs.go:276] 0 containers: []
	W0819 11:52:59.237158   19417 logs.go:278] No container was found matching "kindnet"
	I0819 11:52:59.237221   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 11:52:59.248396   19417 logs.go:276] 1 containers: [74c6a87ae3c8]
	I0819 11:52:59.248413   19417 logs.go:123] Gathering logs for kube-apiserver [45f3dc60aedc] ...
	I0819 11:52:59.248419   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45f3dc60aedc"
	I0819 11:52:59.262584   19417 logs.go:123] Gathering logs for storage-provisioner [74c6a87ae3c8] ...
	I0819 11:52:59.262595   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74c6a87ae3c8"
	I0819 11:52:59.274286   19417 logs.go:123] Gathering logs for Docker ...
	I0819 11:52:59.274295   19417 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 11:52:59.298764   19417 logs.go:123] Gathering logs for kubelet ...
	I0819 11:52:59.298773   19417 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 11:52:59.337984   19417 logs.go:123] Gathering logs for dmesg ...
	I0819 11:52:59.337994   19417 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 11:52:59.342643   19417 logs.go:123] Gathering logs for coredns [ffe7423d8fb8] ...
	I0819 11:52:59.342653   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ffe7423d8fb8"
	I0819 11:52:59.358554   19417 logs.go:123] Gathering logs for coredns [3d2d44fde489] ...
	I0819 11:52:59.358564   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3d2d44fde489"
	I0819 11:52:59.370751   19417 logs.go:123] Gathering logs for kube-controller-manager [b3325243ca65] ...
	I0819 11:52:59.370764   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3325243ca65"
	I0819 11:52:59.389307   19417 logs.go:123] Gathering logs for describe nodes ...
	I0819 11:52:59.389318   19417 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 11:52:59.427397   19417 logs.go:123] Gathering logs for coredns [eb0595d1949c] ...
	I0819 11:52:59.427409   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eb0595d1949c"
	I0819 11:52:59.440215   19417 logs.go:123] Gathering logs for coredns [db6dffb8b3ea] ...
	I0819 11:52:59.440226   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db6dffb8b3ea"
	I0819 11:52:59.453770   19417 logs.go:123] Gathering logs for kube-proxy [26ea981cbd0c] ...
	I0819 11:52:59.453785   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 26ea981cbd0c"
	I0819 11:52:59.465390   19417 logs.go:123] Gathering logs for etcd [4122edd3dc51] ...
	I0819 11:52:59.465400   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4122edd3dc51"
	I0819 11:52:59.486005   19417 logs.go:123] Gathering logs for kube-scheduler [2cfe471024b0] ...
	I0819 11:52:59.486016   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2cfe471024b0"
	I0819 11:52:59.502921   19417 logs.go:123] Gathering logs for container status ...
	I0819 11:52:59.502931   19417 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 11:53:02.016351   19417 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:53:07.018505   19417 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:53:07.022610   19417 out.go:201] 
	W0819 11:53:07.025511   19417 out.go:270] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	W0819 11:53:07.025516   19417 out.go:270] * 
	W0819 11:53:07.025923   19417 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0819 11:53:07.039504   19417 out.go:201] 

** /stderr **
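
The stderr capture above cycles every ~2.5 seconds: minikube polls https://10.0.2.15:8443/healthz with a roughly 5-second per-probe timeout, re-gathers component logs after each failed probe, and gives up once the overall 6m0s node wait expires. Below is a minimal Go sketch of that poll loop, with the timings read off the log timestamps; it is an illustration of the behavior visible here, not minikube's actual api_server.go.

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"net/http"
    	"time"
    )

    // waitForHealthz probes url until it returns 200 or the deadline passes.
    func waitForHealthz(url string, deadline time.Duration) error {
    	client := &http.Client{
    		Timeout: 5 * time.Second, // matches the 5s gap between "Checking" and "stopped" lines
    		Transport: &http.Transport{
    			// the apiserver serves a self-signed cert, so skip verification for the probe
    			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
    		},
    	}
    	stop := time.Now().Add(deadline)
    	for time.Now().Before(stop) {
    		resp, err := client.Get(url)
    		if err == nil {
    			resp.Body.Close()
    			if resp.StatusCode == http.StatusOK {
    				return nil
    			}
    		}
    		time.Sleep(2500 * time.Millisecond) // ~2.5s between retry cycles in the log
    	}
    	return fmt.Errorf("apiserver healthz never reported healthy: context deadline exceeded")
    }

    func main() {
    	if err := waitForHealthz("https://10.0.2.15:8443/healthz", 6*time.Minute); err != nil {
    		fmt.Println("GUEST_START:", err)
    	}
    }

In this run every probe fails the same way, so the loop exhausts the six-minute budget and start exits with status 80, which is what the test asserts on next.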
version_upgrade_test.go:132: upgrade from v1.26.0 to HEAD failed: out/minikube-darwin-arm64 start -p running-upgrade-409000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80
panic.go:626: *** TestRunningBinaryUpgrade FAILED at 2024-08-19 11:53:07.127157 -0700 PDT m=+1311.376874626
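
Between probes, the capture shows one gather pass per component: resolve containers named k8s_<component>, tail each one's last 400 lines, and cover the host side with journalctl and dmesg. A hypothetical Go sketch of that per-component step follows, mirroring the logs.go:276 / logs.go:123 lines above; it is a hand-rolled approximation, not minikube's implementation.

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // gather lists a component's containers and tails their logs, the way
    // each "Gathering logs for ..." step in the capture above does.
    func gather(component string) {
    	out, err := exec.Command("docker", "ps", "-a",
    		"--filter=name=k8s_"+component, "--format={{.ID}}").Output()
    	if err != nil {
    		fmt.Println("docker ps failed:", err)
    		return
    	}
    	ids := strings.Fields(string(out))
    	fmt.Printf("%d containers: %v\n", len(ids), ids)
    	for _, id := range ids {
    		logs, _ := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
    		fmt.Printf("==> %s [%s] <==\n%s", component, id, logs)
    	}
    }

    func main() {
    	for _, c := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler",
    		"kube-proxy", "kube-controller-manager", "kindnet", "storage-provisioner"} {
    		gather(c)
    	}
    }

Note that "kindnet" matches zero containers here, which is the warning repeated throughout the capture; the post-mortem below re-runs the same collection via minikube logs -n 25.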
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p running-upgrade-409000 -n running-upgrade-409000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p running-upgrade-409000 -n running-upgrade-409000: exit status 2 (15.605954667s)

-- stdout --
	Running

-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestRunningBinaryUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestRunningBinaryUpgrade]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-arm64 -p running-upgrade-409000 logs -n 25
helpers_test.go:252: TestRunningBinaryUpgrade logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                 Args                  |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| start   | -p force-systemd-flag-995000          | force-systemd-flag-995000 | jenkins | v1.33.1 | 19 Aug 24 11:43 PDT |                     |
	|         | --memory=2048 --force-systemd         |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=5                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| ssh     | force-systemd-env-214000              | force-systemd-env-214000  | jenkins | v1.33.1 | 19 Aug 24 11:43 PDT |                     |
	|         | ssh docker info --format              |                           |         |         |                     |                     |
	|         | {{.CgroupDriver}}                     |                           |         |         |                     |                     |
	| delete  | -p force-systemd-env-214000           | force-systemd-env-214000  | jenkins | v1.33.1 | 19 Aug 24 11:43 PDT | 19 Aug 24 11:43 PDT |
	| start   | -p docker-flags-446000                | docker-flags-446000       | jenkins | v1.33.1 | 19 Aug 24 11:43 PDT |                     |
	|         | --cache-images=false                  |                           |         |         |                     |                     |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --install-addons=false                |                           |         |         |                     |                     |
	|         | --wait=false                          |                           |         |         |                     |                     |
	|         | --docker-env=FOO=BAR                  |                           |         |         |                     |                     |
	|         | --docker-env=BAZ=BAT                  |                           |         |         |                     |                     |
	|         | --docker-opt=debug                    |                           |         |         |                     |                     |
	|         | --docker-opt=icc=true                 |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=5                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| ssh     | force-systemd-flag-995000             | force-systemd-flag-995000 | jenkins | v1.33.1 | 19 Aug 24 11:43 PDT |                     |
	|         | ssh docker info --format              |                           |         |         |                     |                     |
	|         | {{.CgroupDriver}}                     |                           |         |         |                     |                     |
	| delete  | -p force-systemd-flag-995000          | force-systemd-flag-995000 | jenkins | v1.33.1 | 19 Aug 24 11:43 PDT | 19 Aug 24 11:43 PDT |
	| start   | -p cert-expiration-386000             | cert-expiration-386000    | jenkins | v1.33.1 | 19 Aug 24 11:43 PDT |                     |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --cert-expiration=3m                  |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| ssh     | docker-flags-446000 ssh               | docker-flags-446000       | jenkins | v1.33.1 | 19 Aug 24 11:43 PDT |                     |
	|         | sudo systemctl show docker            |                           |         |         |                     |                     |
	|         | --property=Environment                |                           |         |         |                     |                     |
	|         | --no-pager                            |                           |         |         |                     |                     |
	| ssh     | docker-flags-446000 ssh               | docker-flags-446000       | jenkins | v1.33.1 | 19 Aug 24 11:43 PDT |                     |
	|         | sudo systemctl show docker            |                           |         |         |                     |                     |
	|         | --property=ExecStart                  |                           |         |         |                     |                     |
	|         | --no-pager                            |                           |         |         |                     |                     |
	| delete  | -p docker-flags-446000                | docker-flags-446000       | jenkins | v1.33.1 | 19 Aug 24 11:43 PDT | 19 Aug 24 11:43 PDT |
	| start   | -p cert-options-587000                | cert-options-587000       | jenkins | v1.33.1 | 19 Aug 24 11:43 PDT |                     |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --apiserver-ips=127.0.0.1             |                           |         |         |                     |                     |
	|         | --apiserver-ips=192.168.15.15         |                           |         |         |                     |                     |
	|         | --apiserver-names=localhost           |                           |         |         |                     |                     |
	|         | --apiserver-names=www.google.com      |                           |         |         |                     |                     |
	|         | --apiserver-port=8555                 |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| ssh     | cert-options-587000 ssh               | cert-options-587000       | jenkins | v1.33.1 | 19 Aug 24 11:43 PDT |                     |
	|         | openssl x509 -text -noout -in         |                           |         |         |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt |                           |         |         |                     |                     |
	| ssh     | -p cert-options-587000 -- sudo        | cert-options-587000       | jenkins | v1.33.1 | 19 Aug 24 11:43 PDT |                     |
	|         | cat /etc/kubernetes/admin.conf        |                           |         |         |                     |                     |
	| delete  | -p cert-options-587000                | cert-options-587000       | jenkins | v1.33.1 | 19 Aug 24 11:43 PDT | 19 Aug 24 11:43 PDT |
	| start   | -p running-upgrade-409000             | minikube                  | jenkins | v1.26.0 | 19 Aug 24 11:43 PDT | 19 Aug 24 11:44 PDT |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --vm-driver=qemu2                     |                           |         |         |                     |                     |
	| start   | -p running-upgrade-409000             | running-upgrade-409000    | jenkins | v1.33.1 | 19 Aug 24 11:44 PDT |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| start   | -p cert-expiration-386000             | cert-expiration-386000    | jenkins | v1.33.1 | 19 Aug 24 11:46 PDT |                     |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --cert-expiration=8760h               |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| delete  | -p cert-expiration-386000             | cert-expiration-386000    | jenkins | v1.33.1 | 19 Aug 24 11:46 PDT | 19 Aug 24 11:46 PDT |
	| start   | -p kubernetes-upgrade-246000          | kubernetes-upgrade-246000 | jenkins | v1.33.1 | 19 Aug 24 11:46 PDT |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0          |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| stop    | -p kubernetes-upgrade-246000          | kubernetes-upgrade-246000 | jenkins | v1.33.1 | 19 Aug 24 11:46 PDT | 19 Aug 24 11:46 PDT |
	| start   | -p kubernetes-upgrade-246000          | kubernetes-upgrade-246000 | jenkins | v1.33.1 | 19 Aug 24 11:46 PDT |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0          |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-246000          | kubernetes-upgrade-246000 | jenkins | v1.33.1 | 19 Aug 24 11:46 PDT | 19 Aug 24 11:46 PDT |
	| start   | -p stopped-upgrade-604000             | minikube                  | jenkins | v1.26.0 | 19 Aug 24 11:46 PDT | 19 Aug 24 11:47 PDT |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --vm-driver=qemu2                     |                           |         |         |                     |                     |
	| stop    | stopped-upgrade-604000 stop           | minikube                  | jenkins | v1.26.0 | 19 Aug 24 11:47 PDT | 19 Aug 24 11:47 PDT |
	| start   | -p stopped-upgrade-604000             | stopped-upgrade-604000    | jenkins | v1.33.1 | 19 Aug 24 11:47 PDT |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/19 11:47:38
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.22.5 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0819 11:47:38.062992   19545 out.go:345] Setting OutFile to fd 1 ...
	I0819 11:47:38.063131   19545 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 11:47:38.063135   19545 out.go:358] Setting ErrFile to fd 2...
	I0819 11:47:38.063138   19545 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 11:47:38.063284   19545 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19423-17178/.minikube/bin
	I0819 11:47:38.064378   19545 out.go:352] Setting JSON to false
	I0819 11:47:38.082571   19545 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":8225,"bootTime":1724085033,"procs":507,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0819 11:47:38.082641   19545 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0819 11:47:38.087776   19545 out.go:177] * [stopped-upgrade-604000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0819 11:47:38.095754   19545 out.go:177]   - MINIKUBE_LOCATION=19423
	I0819 11:47:38.095802   19545 notify.go:220] Checking for updates...
	I0819 11:47:38.102620   19545 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19423-17178/kubeconfig
	I0819 11:47:38.105755   19545 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0819 11:47:38.108637   19545 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0819 11:47:38.111690   19545 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19423-17178/.minikube
	I0819 11:47:38.114670   19545 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0819 11:47:38.117876   19545 config.go:182] Loaded profile config "stopped-upgrade-604000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0819 11:47:38.120672   19545 out.go:177] * Kubernetes 1.31.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.0
	I0819 11:47:38.123659   19545 driver.go:394] Setting default libvirt URI to qemu:///system
	I0819 11:47:38.127674   19545 out.go:177] * Using the qemu2 driver based on existing profile
	I0819 11:47:38.134696   19545 start.go:297] selected driver: qemu2
	I0819 11:47:38.134701   19545 start.go:901] validating driver "qemu2" against &{Name:stopped-upgrade-604000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:53361 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-604000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0819 11:47:38.134749   19545 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0819 11:47:38.137219   19545 cni.go:84] Creating CNI manager for ""
	I0819 11:47:38.137238   19545 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0819 11:47:38.137266   19545 start.go:340] cluster config:
	{Name:stopped-upgrade-604000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:53361 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-604000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0819 11:47:38.137318   19545 iso.go:125] acquiring lock: {Name:mk19d2f9dcacbbe3e95275f970a96b2d84f09461 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 11:47:38.145689   19545 out.go:177] * Starting "stopped-upgrade-604000" primary control-plane node in "stopped-upgrade-604000" cluster
	I0819 11:47:38.149714   19545 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0819 11:47:38.149733   19545 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19423-17178/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4
	I0819 11:47:38.149742   19545 cache.go:56] Caching tarball of preloaded images
	I0819 11:47:38.149807   19545 preload.go:172] Found /Users/jenkins/minikube-integration/19423-17178/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0819 11:47:38.149813   19545 cache.go:59] Finished verifying existence of preloaded tar for v1.24.1 on docker
	I0819 11:47:38.149876   19545 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19423-17178/.minikube/profiles/stopped-upgrade-604000/config.json ...
	I0819 11:47:38.150314   19545 start.go:360] acquireMachinesLock for stopped-upgrade-604000: {Name:mk51682daf9d132f21e1aba1b32ed96e7f05425f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 11:47:38.150344   19545 start.go:364] duration metric: took 24.417µs to acquireMachinesLock for "stopped-upgrade-604000"
	I0819 11:47:38.150354   19545 start.go:96] Skipping create...Using existing machine configuration
	I0819 11:47:38.150359   19545 fix.go:54] fixHost starting: 
	I0819 11:47:38.150486   19545 fix.go:112] recreateIfNeeded on stopped-upgrade-604000: state=Stopped err=<nil>
	W0819 11:47:38.150494   19545 fix.go:138] unexpected machine state, will restart: <nil>
	I0819 11:47:38.158675   19545 out.go:177] * Restarting existing qemu2 VM for "stopped-upgrade-604000" ...
	I0819 11:47:34.256134   19417 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:47:38.162644   19545 qemu.go:418] Using hvf for hardware acceleration
	I0819 11:47:38.162714   19545 main.go:141] libmachine: executing: qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/9.0.2/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/stopped-upgrade-604000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19423-17178/.minikube/machines/stopped-upgrade-604000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/stopped-upgrade-604000/qemu.pid -nic user,model=virtio,hostfwd=tcp::53326-:22,hostfwd=tcp::53327-:2376,hostname=stopped-upgrade-604000 -daemonize /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/stopped-upgrade-604000/disk.qcow2
	I0819 11:47:38.209512   19545 main.go:141] libmachine: STDOUT: 
	I0819 11:47:38.209563   19545 main.go:141] libmachine: STDERR: 
	I0819 11:47:38.209569   19545 main.go:141] libmachine: Waiting for VM to start (ssh -p 53326 docker@127.0.0.1)...
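
Worth noting in the qemu-system-aarch64 invocation above: user-mode networking with hostfwd=tcp::53326-:22 forwards host port 53326 to the guest's SSH port, which is why the wait step dials docker@127.0.0.1 on that port until sshd answers. A minimal, hypothetical sketch of such a readiness poll in Go (not minikube's actual code):

// portwait.go: poll a forwarded host port until the guest's sshd accepts
// TCP connections, or give up after a deadline. Illustrative only; port
// 53326 matches the hostfwd mapping in the log above.
package main

import (
	"fmt"
	"net"
	"time"
)

func waitForPort(addr string, deadline time.Duration) error {
	stop := time.Now().Add(deadline)
	for time.Now().Before(stop) {
		conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
		if err == nil {
			conn.Close()
			return nil // guest is reachable
		}
		time.Sleep(500 * time.Millisecond) // VM still booting; retry
	}
	return fmt.Errorf("timed out waiting for %s", addr)
}

func main() {
	if err := waitForPort("127.0.0.1:53326", 5*time.Minute); err != nil {
		fmt.Println(err)
	}
}
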
	I0819 11:47:39.258935   19417 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:47:39.259277   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 11:47:39.295898   19417 logs.go:276] 2 containers: [5e36e77adae3 c856d6e29342]
	I0819 11:47:39.296017   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 11:47:39.314841   19417 logs.go:276] 2 containers: [22d8a4cc2e5d c6e42f0936b0]
	I0819 11:47:39.314918   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 11:47:39.328253   19417 logs.go:276] 1 containers: [b4aa030d5e41]
	I0819 11:47:39.328328   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 11:47:39.340118   19417 logs.go:276] 2 containers: [1b1a6c62fc93 098a5dcc915e]
	I0819 11:47:39.340187   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 11:47:39.351499   19417 logs.go:276] 1 containers: [f1268c4fc5da]
	I0819 11:47:39.351560   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 11:47:39.362593   19417 logs.go:276] 2 containers: [2b6fc57c9dd4 2f08f9ae48fe]
	I0819 11:47:39.362649   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 11:47:39.377109   19417 logs.go:276] 0 containers: []
	W0819 11:47:39.377124   19417 logs.go:278] No container was found matching "kindnet"
	I0819 11:47:39.377177   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 11:47:39.388862   19417 logs.go:276] 2 containers: [a5053756fa3b c34ae12a902c]
	I0819 11:47:39.388878   19417 logs.go:123] Gathering logs for kube-apiserver [5e36e77adae3] ...
	I0819 11:47:39.388883   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e36e77adae3"
	I0819 11:47:39.403229   19417 logs.go:123] Gathering logs for etcd [c6e42f0936b0] ...
	I0819 11:47:39.403238   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c6e42f0936b0"
	I0819 11:47:39.417883   19417 logs.go:123] Gathering logs for kube-proxy [f1268c4fc5da] ...
	I0819 11:47:39.417893   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f1268c4fc5da"
	I0819 11:47:39.429850   19417 logs.go:123] Gathering logs for storage-provisioner [c34ae12a902c] ...
	I0819 11:47:39.429862   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c34ae12a902c"
	I0819 11:47:39.441384   19417 logs.go:123] Gathering logs for Docker ...
	I0819 11:47:39.441395   19417 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 11:47:39.465427   19417 logs.go:123] Gathering logs for kubelet ...
	I0819 11:47:39.465438   19417 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 11:47:39.500964   19417 logs.go:123] Gathering logs for dmesg ...
	I0819 11:47:39.500973   19417 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 11:47:39.505537   19417 logs.go:123] Gathering logs for describe nodes ...
	I0819 11:47:39.505547   19417 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 11:47:39.539437   19417 logs.go:123] Gathering logs for etcd [22d8a4cc2e5d] ...
	I0819 11:47:39.539453   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 22d8a4cc2e5d"
	I0819 11:47:39.564271   19417 logs.go:123] Gathering logs for coredns [b4aa030d5e41] ...
	I0819 11:47:39.564282   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4aa030d5e41"
	I0819 11:47:39.575406   19417 logs.go:123] Gathering logs for kube-scheduler [1b1a6c62fc93] ...
	I0819 11:47:39.575421   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b1a6c62fc93"
	I0819 11:47:39.586822   19417 logs.go:123] Gathering logs for container status ...
	I0819 11:47:39.586833   19417 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 11:47:39.599470   19417 logs.go:123] Gathering logs for kube-scheduler [098a5dcc915e] ...
	I0819 11:47:39.599482   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 098a5dcc915e"
	I0819 11:47:39.623957   19417 logs.go:123] Gathering logs for kube-controller-manager [2b6fc57c9dd4] ...
	I0819 11:47:39.623968   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2b6fc57c9dd4"
	I0819 11:47:39.642559   19417 logs.go:123] Gathering logs for kube-controller-manager [2f08f9ae48fe] ...
	I0819 11:47:39.642573   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f08f9ae48fe"
	I0819 11:47:39.653890   19417 logs.go:123] Gathering logs for kube-apiserver [c856d6e29342] ...
	I0819 11:47:39.653902   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c856d6e29342"
	I0819 11:47:39.668086   19417 logs.go:123] Gathering logs for storage-provisioner [a5053756fa3b] ...
	I0819 11:47:39.668097   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5053756fa3b"
	I0819 11:47:42.184972   19417 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:47:47.187553   19417 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:47:47.187779   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 11:47:47.206826   19417 logs.go:276] 2 containers: [5e36e77adae3 c856d6e29342]
	I0819 11:47:47.206906   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 11:47:47.220105   19417 logs.go:276] 2 containers: [22d8a4cc2e5d c6e42f0936b0]
	I0819 11:47:47.220187   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 11:47:47.240588   19417 logs.go:276] 1 containers: [b4aa030d5e41]
	I0819 11:47:47.240658   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 11:47:47.251340   19417 logs.go:276] 2 containers: [1b1a6c62fc93 098a5dcc915e]
	I0819 11:47:47.251405   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 11:47:47.263426   19417 logs.go:276] 1 containers: [f1268c4fc5da]
	I0819 11:47:47.263493   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 11:47:47.274490   19417 logs.go:276] 2 containers: [2b6fc57c9dd4 2f08f9ae48fe]
	I0819 11:47:47.274558   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 11:47:47.284619   19417 logs.go:276] 0 containers: []
	W0819 11:47:47.284630   19417 logs.go:278] No container was found matching "kindnet"
	I0819 11:47:47.284679   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 11:47:47.295705   19417 logs.go:276] 2 containers: [a5053756fa3b c34ae12a902c]
	I0819 11:47:47.295726   19417 logs.go:123] Gathering logs for kubelet ...
	I0819 11:47:47.295732   19417 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 11:47:47.332530   19417 logs.go:123] Gathering logs for etcd [22d8a4cc2e5d] ...
	I0819 11:47:47.332537   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 22d8a4cc2e5d"
	I0819 11:47:47.346762   19417 logs.go:123] Gathering logs for kube-proxy [f1268c4fc5da] ...
	I0819 11:47:47.346773   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f1268c4fc5da"
	I0819 11:47:47.358543   19417 logs.go:123] Gathering logs for dmesg ...
	I0819 11:47:47.358554   19417 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 11:47:47.363464   19417 logs.go:123] Gathering logs for describe nodes ...
	I0819 11:47:47.363470   19417 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 11:47:47.403328   19417 logs.go:123] Gathering logs for kube-controller-manager [2f08f9ae48fe] ...
	I0819 11:47:47.403339   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f08f9ae48fe"
	I0819 11:47:47.420527   19417 logs.go:123] Gathering logs for container status ...
	I0819 11:47:47.420538   19417 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 11:47:47.433127   19417 logs.go:123] Gathering logs for etcd [c6e42f0936b0] ...
	I0819 11:47:47.433139   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c6e42f0936b0"
	I0819 11:47:47.447413   19417 logs.go:123] Gathering logs for storage-provisioner [c34ae12a902c] ...
	I0819 11:47:47.447424   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c34ae12a902c"
	I0819 11:47:47.459059   19417 logs.go:123] Gathering logs for Docker ...
	I0819 11:47:47.459071   19417 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 11:47:47.484314   19417 logs.go:123] Gathering logs for kube-scheduler [098a5dcc915e] ...
	I0819 11:47:47.484322   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 098a5dcc915e"
	I0819 11:47:47.503093   19417 logs.go:123] Gathering logs for kube-controller-manager [2b6fc57c9dd4] ...
	I0819 11:47:47.503103   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2b6fc57c9dd4"
	I0819 11:47:47.528482   19417 logs.go:123] Gathering logs for storage-provisioner [a5053756fa3b] ...
	I0819 11:47:47.528493   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5053756fa3b"
	I0819 11:47:47.540069   19417 logs.go:123] Gathering logs for kube-apiserver [5e36e77adae3] ...
	I0819 11:47:47.540081   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e36e77adae3"
	I0819 11:47:47.558052   19417 logs.go:123] Gathering logs for kube-apiserver [c856d6e29342] ...
	I0819 11:47:47.558065   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c856d6e29342"
	I0819 11:47:47.575932   19417 logs.go:123] Gathering logs for coredns [b4aa030d5e41] ...
	I0819 11:47:47.575945   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4aa030d5e41"
	I0819 11:47:47.587139   19417 logs.go:123] Gathering logs for kube-scheduler [1b1a6c62fc93] ...
	I0819 11:47:47.587150   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b1a6c62fc93"
	I0819 11:47:50.099151   19417 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:47:55.101349   19417 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:47:55.101583   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 11:47:55.128382   19417 logs.go:276] 2 containers: [5e36e77adae3 c856d6e29342]
	I0819 11:47:55.128462   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 11:47:55.141741   19417 logs.go:276] 2 containers: [22d8a4cc2e5d c6e42f0936b0]
	I0819 11:47:55.141818   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 11:47:55.151958   19417 logs.go:276] 1 containers: [b4aa030d5e41]
	I0819 11:47:55.152025   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 11:47:55.162262   19417 logs.go:276] 2 containers: [1b1a6c62fc93 098a5dcc915e]
	I0819 11:47:55.162336   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 11:47:55.172479   19417 logs.go:276] 1 containers: [f1268c4fc5da]
	I0819 11:47:55.172544   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 11:47:55.184153   19417 logs.go:276] 2 containers: [2b6fc57c9dd4 2f08f9ae48fe]
	I0819 11:47:55.184223   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 11:47:55.194119   19417 logs.go:276] 0 containers: []
	W0819 11:47:55.194128   19417 logs.go:278] No container was found matching "kindnet"
	I0819 11:47:55.194177   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 11:47:55.204237   19417 logs.go:276] 2 containers: [a5053756fa3b c34ae12a902c]
	I0819 11:47:55.204258   19417 logs.go:123] Gathering logs for etcd [c6e42f0936b0] ...
	I0819 11:47:55.204263   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c6e42f0936b0"
	I0819 11:47:55.221670   19417 logs.go:123] Gathering logs for container status ...
	I0819 11:47:55.221683   19417 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 11:47:55.233944   19417 logs.go:123] Gathering logs for kubelet ...
	I0819 11:47:55.233957   19417 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 11:47:55.269149   19417 logs.go:123] Gathering logs for kube-apiserver [5e36e77adae3] ...
	I0819 11:47:55.269156   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e36e77adae3"
	I0819 11:47:55.283118   19417 logs.go:123] Gathering logs for storage-provisioner [a5053756fa3b] ...
	I0819 11:47:55.283130   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5053756fa3b"
	I0819 11:47:55.294283   19417 logs.go:123] Gathering logs for Docker ...
	I0819 11:47:55.294293   19417 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 11:47:55.317205   19417 logs.go:123] Gathering logs for dmesg ...
	I0819 11:47:55.317213   19417 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 11:47:55.321557   19417 logs.go:123] Gathering logs for coredns [b4aa030d5e41] ...
	I0819 11:47:55.321562   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4aa030d5e41"
	I0819 11:47:55.333973   19417 logs.go:123] Gathering logs for kube-scheduler [1b1a6c62fc93] ...
	I0819 11:47:55.333985   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b1a6c62fc93"
	I0819 11:47:55.345609   19417 logs.go:123] Gathering logs for kube-controller-manager [2b6fc57c9dd4] ...
	I0819 11:47:55.345619   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2b6fc57c9dd4"
	I0819 11:47:55.362616   19417 logs.go:123] Gathering logs for kube-controller-manager [2f08f9ae48fe] ...
	I0819 11:47:55.362625   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f08f9ae48fe"
	I0819 11:47:55.375293   19417 logs.go:123] Gathering logs for describe nodes ...
	I0819 11:47:55.375306   19417 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 11:47:55.423259   19417 logs.go:123] Gathering logs for kube-apiserver [c856d6e29342] ...
	I0819 11:47:55.423273   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c856d6e29342"
	I0819 11:47:55.436751   19417 logs.go:123] Gathering logs for etcd [22d8a4cc2e5d] ...
	I0819 11:47:55.436765   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 22d8a4cc2e5d"
	I0819 11:47:55.450630   19417 logs.go:123] Gathering logs for kube-scheduler [098a5dcc915e] ...
	I0819 11:47:55.450641   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 098a5dcc915e"
	I0819 11:47:55.465925   19417 logs.go:123] Gathering logs for kube-proxy [f1268c4fc5da] ...
	I0819 11:47:55.465938   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f1268c4fc5da"
	I0819 11:47:55.478162   19417 logs.go:123] Gathering logs for storage-provisioner [c34ae12a902c] ...
	I0819 11:47:55.478175   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c34ae12a902c"
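
The blocks above show the retry pattern at work: probe the apiserver's /healthz endpoint with a short client-side timeout, and when the probe fails, gather component logs before the next attempt. A minimal sketch of such a probe, assuming the self-signed minikube CA is not trusted (hence the skip-verify transport):

// healthz.go: probe a Kubernetes apiserver /healthz endpoint the way the
// loop above does, with a bounded client timeout. Sketch only; minikube's
// real implementation lives in api_server.go.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func checkHealthz(url string) error {
	client := &http.Client{
		Timeout: 5 * time.Second, // matches the ~5s gaps between attempts above
		Transport: &http.Transport{
			// the apiserver cert is signed by minikube's own CA
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get(url)
	if err != nil {
		return err // e.g. "context deadline exceeded", as in the log
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("healthz returned %s", resp.Status)
	}
	return nil
}

func main() {
	fmt.Println(checkHealthz("https://10.0.2.15:8443/healthz"))
}
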
	I0819 11:47:57.992521   19417 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:47:59.141607   19545 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19423-17178/.minikube/profiles/stopped-upgrade-604000/config.json ...
	I0819 11:47:59.142471   19545 machine.go:93] provisionDockerMachine start ...
	I0819 11:47:59.142646   19545 main.go:141] libmachine: Using SSH client type: native
	I0819 11:47:59.143262   19545 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1045345a0] 0x104536e00 <nil>  [] 0s} localhost 53326 <nil> <nil>}
	I0819 11:47:59.143277   19545 main.go:141] libmachine: About to run SSH command:
	hostname
	I0819 11:47:59.219092   19545 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0819 11:47:59.219127   19545 buildroot.go:166] provisioning hostname "stopped-upgrade-604000"
	I0819 11:47:59.219254   19545 main.go:141] libmachine: Using SSH client type: native
	I0819 11:47:59.219496   19545 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1045345a0] 0x104536e00 <nil>  [] 0s} localhost 53326 <nil> <nil>}
	I0819 11:47:59.219509   19545 main.go:141] libmachine: About to run SSH command:
	sudo hostname stopped-upgrade-604000 && echo "stopped-upgrade-604000" | sudo tee /etc/hostname
	I0819 11:47:59.285807   19545 main.go:141] libmachine: SSH cmd err, output: <nil>: stopped-upgrade-604000
	
	I0819 11:47:59.285861   19545 main.go:141] libmachine: Using SSH client type: native
	I0819 11:47:59.285987   19545 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1045345a0] 0x104536e00 <nil>  [] 0s} localhost 53326 <nil> <nil>}
	I0819 11:47:59.285999   19545 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sstopped-upgrade-604000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 stopped-upgrade-604000/g' /etc/hosts;
				else 
					echo '127.0.1.1 stopped-upgrade-604000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0819 11:47:59.341515   19545 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0819 11:47:59.341526   19545 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19423-17178/.minikube CaCertPath:/Users/jenkins/minikube-integration/19423-17178/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19423-17178/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19423-17178/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19423-17178/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19423-17178/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19423-17178/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19423-17178/.minikube}
	I0819 11:47:59.341537   19545 buildroot.go:174] setting up certificates
	I0819 11:47:59.341541   19545 provision.go:84] configureAuth start
	I0819 11:47:59.341550   19545 provision.go:143] copyHostCerts
	I0819 11:47:59.341612   19545 exec_runner.go:144] found /Users/jenkins/minikube-integration/19423-17178/.minikube/ca.pem, removing ...
	I0819 11:47:59.341618   19545 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19423-17178/.minikube/ca.pem
	I0819 11:47:59.341717   19545 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19423-17178/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19423-17178/.minikube/ca.pem (1082 bytes)
	I0819 11:47:59.341903   19545 exec_runner.go:144] found /Users/jenkins/minikube-integration/19423-17178/.minikube/cert.pem, removing ...
	I0819 11:47:59.341907   19545 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19423-17178/.minikube/cert.pem
	I0819 11:47:59.341953   19545 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19423-17178/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19423-17178/.minikube/cert.pem (1123 bytes)
	I0819 11:47:59.342056   19545 exec_runner.go:144] found /Users/jenkins/minikube-integration/19423-17178/.minikube/key.pem, removing ...
	I0819 11:47:59.342059   19545 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19423-17178/.minikube/key.pem
	I0819 11:47:59.342099   19545 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19423-17178/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19423-17178/.minikube/key.pem (1679 bytes)
	I0819 11:47:59.342188   19545 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19423-17178/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19423-17178/.minikube/certs/ca-key.pem org=jenkins.stopped-upgrade-604000 san=[127.0.0.1 localhost minikube stopped-upgrade-604000]
	I0819 11:47:59.387432   19545 provision.go:177] copyRemoteCerts
	I0819 11:47:59.387472   19545 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0819 11:47:59.387481   19545 sshutil.go:53] new ssh client: &{IP:localhost Port:53326 SSHKeyPath:/Users/jenkins/minikube-integration/19423-17178/.minikube/machines/stopped-upgrade-604000/id_rsa Username:docker}
	I0819 11:47:59.418246   19545 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-17178/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0819 11:47:59.424690   19545 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0819 11:47:59.431180   19545 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0819 11:47:59.438295   19545 provision.go:87] duration metric: took 96.744084ms to configureAuth
	I0819 11:47:59.438304   19545 buildroot.go:189] setting minikube options for container-runtime
	I0819 11:47:59.438418   19545 config.go:182] Loaded profile config "stopped-upgrade-604000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0819 11:47:59.438456   19545 main.go:141] libmachine: Using SSH client type: native
	I0819 11:47:59.438541   19545 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1045345a0] 0x104536e00 <nil>  [] 0s} localhost 53326 <nil> <nil>}
	I0819 11:47:59.438545   19545 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0819 11:47:59.491709   19545 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0819 11:47:59.491717   19545 buildroot.go:70] root file system type: tmpfs
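
The tmpfs result matters here: on this Buildroot guest the root filesystem lives in RAM, so files outside persistent mounts do not survive a reboot, and the docker unit below has to be rendered again on every start (which is why the diff further down reports that /lib/systemd/system/docker.service does not exist yet). A hypothetical Go sketch of the same probe:

// fstype.go: detect the root filesystem type the way the SSH command
// above does ("df --output=fstype / | tail -n 1"). Sketch only; requires
// GNU df, as shipped in the Buildroot guest.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func rootFsType() (string, error) {
	out, err := exec.Command("df", "--output=fstype", "/").Output()
	if err != nil {
		return "", err
	}
	fields := strings.Fields(string(out)) // ["Type", "<fstype>"]
	return fields[len(fields)-1], nil
}

func main() {
	t, err := rootFsType()
	fmt.Println(t, err) // prints "tmpfs" on this buildroot guest
}
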
	I0819 11:47:59.491764   19545 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0819 11:47:59.491812   19545 main.go:141] libmachine: Using SSH client type: native
	I0819 11:47:59.491929   19545 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1045345a0] 0x104536e00 <nil>  [] 0s} localhost 53326 <nil> <nil>}
	I0819 11:47:59.491962   19545 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0819 11:47:59.551032   19545 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0819 11:47:59.551080   19545 main.go:141] libmachine: Using SSH client type: native
	I0819 11:47:59.551194   19545 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1045345a0] 0x104536e00 <nil>  [] 0s} localhost 53326 <nil> <nil>}
	I0819 11:47:59.551206   19545 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0819 11:47:59.915884   19545 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
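
The diff ... || { mv ...; systemctl ...; } command above is an idempotent install: the rendered unit replaces the one on disk, and docker is re-enabled and restarted, only when the file is missing or actually differs (here it was missing, so diff failed and the mv branch ran). A rough Go equivalent of that compare-and-swap idiom, illustrative only:

// unitswap.go: install a freshly rendered unit file only if it differs
// from what is already on disk, mirroring the "diff || mv" shell idiom
// in the log above.
package main

import (
	"bytes"
	"fmt"
	"os"
)

// swapIfChanged moves newPath over path when contents differ; a missing
// destination counts as "changed", as in the log.
func swapIfChanged(path, newPath string) (bool, error) {
	newData, err := os.ReadFile(newPath)
	if err != nil {
		return false, err
	}
	oldData, err := os.ReadFile(path)
	if err == nil && bytes.Equal(oldData, newData) {
		return false, os.Remove(newPath) // identical: keep the existing unit
	}
	if err != nil && !os.IsNotExist(err) {
		return false, err // a real read error, not just a missing unit file
	}
	return true, os.Rename(newPath, path) // missing or different: install it
}

func main() {
	changed, err := swapIfChanged(
		"/lib/systemd/system/docker.service",
		"/lib/systemd/system/docker.service.new")
	fmt.Println(changed, err) // caller would daemon-reload + restart if changed
}
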
	
	I0819 11:47:59.915898   19545 machine.go:96] duration metric: took 773.418916ms to provisionDockerMachine
	I0819 11:47:59.915905   19545 start.go:293] postStartSetup for "stopped-upgrade-604000" (driver="qemu2")
	I0819 11:47:59.915911   19545 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0819 11:47:59.915981   19545 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0819 11:47:59.915993   19545 sshutil.go:53] new ssh client: &{IP:localhost Port:53326 SSHKeyPath:/Users/jenkins/minikube-integration/19423-17178/.minikube/machines/stopped-upgrade-604000/id_rsa Username:docker}
	I0819 11:47:59.947472   19545 ssh_runner.go:195] Run: cat /etc/os-release
	I0819 11:47:59.948887   19545 info.go:137] Remote host: Buildroot 2021.02.12
	I0819 11:47:59.948897   19545 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19423-17178/.minikube/addons for local assets ...
	I0819 11:47:59.948980   19545 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19423-17178/.minikube/files for local assets ...
	I0819 11:47:59.949072   19545 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19423-17178/.minikube/files/etc/ssl/certs/176542.pem -> 176542.pem in /etc/ssl/certs
	I0819 11:47:59.949164   19545 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0819 11:47:59.951752   19545 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-17178/.minikube/files/etc/ssl/certs/176542.pem --> /etc/ssl/certs/176542.pem (1708 bytes)
	I0819 11:47:59.958919   19545 start.go:296] duration metric: took 43.009459ms for postStartSetup
	I0819 11:47:59.958933   19545 fix.go:56] duration metric: took 21.808678458s for fixHost
	I0819 11:47:59.958967   19545 main.go:141] libmachine: Using SSH client type: native
	I0819 11:47:59.959073   19545 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1045345a0] 0x104536e00 <nil>  [] 0s} localhost 53326 <nil> <nil>}
	I0819 11:47:59.959077   19545 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0819 11:48:00.011252   19545 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724093280.007042630
	
	I0819 11:48:00.011261   19545 fix.go:216] guest clock: 1724093280.007042630
	I0819 11:48:00.011265   19545 fix.go:229] Guest: 2024-08-19 11:48:00.00704263 -0700 PDT Remote: 2024-08-19 11:47:59.958935 -0700 PDT m=+21.922470459 (delta=48.10763ms)
	I0819 11:48:00.011276   19545 fix.go:200] guest clock delta is within tolerance: 48.10763ms
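
The clock check above runs date +%s.%N in the guest, treats the output as a fractional Unix timestamp, and compares it with the host's wall clock; a delta inside tolerance (48ms here) means no time resync is needed. A sketch of that comparison, where the 2s tolerance is an assumed value for illustration, not minikube's actual constant:

// clockdelta.go: compute host/guest clock skew from a "date +%s.%N"
// reading, as in the fix.go lines above.
package main

import (
	"fmt"
	"strconv"
	"time"
)

func withinTolerance(guestStamp string, tolerance time.Duration) (time.Duration, bool, error) {
	secs, err := strconv.ParseFloat(guestStamp, 64)
	if err != nil {
		return 0, false, err
	}
	guest := time.Unix(0, int64(secs*float64(time.Second)))
	delta := time.Since(guest)
	if delta < 0 {
		delta = -delta // skew in either direction counts
	}
	return delta, delta <= tolerance, nil
}

func main() {
	// the value echoed over SSH in the log above
	d, ok, err := withinTolerance("1724093280.007042630", 2*time.Second)
	fmt.Println(d, ok, err)
}
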
	I0819 11:48:00.011279   19545 start.go:83] releasing machines lock for "stopped-upgrade-604000", held for 21.86103475s
	I0819 11:48:00.011346   19545 ssh_runner.go:195] Run: cat /version.json
	I0819 11:48:00.011350   19545 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0819 11:48:00.011358   19545 sshutil.go:53] new ssh client: &{IP:localhost Port:53326 SSHKeyPath:/Users/jenkins/minikube-integration/19423-17178/.minikube/machines/stopped-upgrade-604000/id_rsa Username:docker}
	I0819 11:48:00.011375   19545 sshutil.go:53] new ssh client: &{IP:localhost Port:53326 SSHKeyPath:/Users/jenkins/minikube-integration/19423-17178/.minikube/machines/stopped-upgrade-604000/id_rsa Username:docker}
	W0819 11:48:00.012058   19545 sshutil.go:64] dial failure (will retry): dial tcp [::1]:53326: connect: connection refused
	I0819 11:48:00.012074   19545 retry.go:31] will retry after 309.091232ms: dial tcp [::1]:53326: connect: connection refused
	W0819 11:48:00.357567   19545 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0819 11:48:00.357658   19545 ssh_runner.go:195] Run: systemctl --version
	I0819 11:48:00.360480   19545 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0819 11:48:00.363143   19545 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0819 11:48:00.363201   19545 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0819 11:48:00.366987   19545 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0819 11:48:00.372559   19545 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0819 11:48:00.372570   19545 start.go:495] detecting cgroup driver to use...
	I0819 11:48:00.372648   19545 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0819 11:48:00.380366   19545 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.7"|' /etc/containerd/config.toml"
	I0819 11:48:00.383643   19545 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0819 11:48:00.386965   19545 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0819 11:48:00.386989   19545 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0819 11:48:00.390116   19545 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0819 11:48:00.392968   19545 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0819 11:48:00.395671   19545 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0819 11:48:00.398897   19545 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0819 11:48:00.402208   19545 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0819 11:48:00.405282   19545 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0819 11:48:00.407983   19545 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0819 11:48:00.411233   19545 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0819 11:48:00.414195   19545 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0819 11:48:00.416935   19545 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 11:48:00.496189   19545 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0819 11:48:00.506938   19545 start.go:495] detecting cgroup driver to use...
	I0819 11:48:00.507000   19545 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0819 11:48:00.512318   19545 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0819 11:48:00.516956   19545 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0819 11:48:00.525482   19545 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0819 11:48:00.530181   19545 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0819 11:48:00.534637   19545 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0819 11:48:00.594497   19545 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0819 11:48:00.598996   19545 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0819 11:48:00.604453   19545 ssh_runner.go:195] Run: which cri-dockerd
	I0819 11:48:00.605734   19545 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0819 11:48:00.608280   19545 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0819 11:48:00.613276   19545 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0819 11:48:00.693522   19545 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0819 11:48:00.770034   19545 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0819 11:48:00.770109   19545 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0819 11:48:00.775723   19545 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 11:48:00.853763   19545 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0819 11:48:02.005741   19545 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.151967333s)
	I0819 11:48:02.005799   19545 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0819 11:48:02.010635   19545 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0819 11:48:02.015151   19545 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0819 11:48:02.099678   19545 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0819 11:48:02.177760   19545 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 11:48:02.255104   19545 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0819 11:48:02.261254   19545 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0819 11:48:02.265437   19545 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 11:48:02.341549   19545 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0819 11:48:02.379962   19545 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0819 11:48:02.380045   19545 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0819 11:48:02.383672   19545 start.go:563] Will wait 60s for crictl version
	I0819 11:48:02.383729   19545 ssh_runner.go:195] Run: which crictl
	I0819 11:48:02.385336   19545 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0819 11:48:02.400804   19545 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.16
	RuntimeApiVersion:  1.41.0
	I0819 11:48:02.400867   19545 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0819 11:48:02.417626   19545 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0819 11:48:02.437780   19545 out.go:235] * Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	I0819 11:48:02.437844   19545 ssh_runner.go:195] Run: grep 10.0.2.2	host.minikube.internal$ /etc/hosts
	I0819 11:48:02.439098   19545 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "10.0.2.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0819 11:48:02.443122   19545 kubeadm.go:883] updating cluster {Name:stopped-upgrade-604000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:53361 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-604000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s} ...
	I0819 11:48:02.443165   19545 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0819 11:48:02.443207   19545 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0819 11:48:02.453809   19545 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0819 11:48:02.453817   19545 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0819 11:48:02.453870   19545 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0819 11:48:02.456782   19545 ssh_runner.go:195] Run: which lz4
	I0819 11:48:02.457948   19545 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0819 11:48:02.459168   19545 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0819 11:48:02.459181   19545 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-17178/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (359514331 bytes)
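
The preload decision above works by listing the images already present in the guest's docker daemon and looking for the expected, version-pinned apiserver reference; because this older guest cached its images under k8s.gcr.io rather than registry.k8s.io, the check fails and the ~360 MB preload tarball is copied in instead. A small sketch of that membership test (hasImage is a hypothetical helper):

// preloadcheck.go: decide whether a preload is needed by scanning
// "docker images --format {{.Repository}}:{{.Tag}}" output for an
// expected image reference, mirroring the docker.go lines above.
package main

import (
	"fmt"
	"strings"
)

func hasImage(imagesOutput, want string) bool {
	for _, line := range strings.Split(imagesOutput, "\n") {
		if strings.TrimSpace(line) == want {
			return true
		}
	}
	return false
}

func main() {
	got := "k8s.gcr.io/kube-apiserver:v1.24.1\nk8s.gcr.io/kube-proxy:v1.24.1"
	// false: the images were cached under the old k8s.gcr.io registry name
	fmt.Println(hasImage(got, "registry.k8s.io/kube-apiserver:v1.24.1"))
}
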
	I0819 11:48:02.994348   19417 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:48:02.994455   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 11:48:03.011419   19417 logs.go:276] 2 containers: [5e36e77adae3 c856d6e29342]
	I0819 11:48:03.011488   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 11:48:03.024189   19417 logs.go:276] 2 containers: [22d8a4cc2e5d c6e42f0936b0]
	I0819 11:48:03.024275   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 11:48:03.035893   19417 logs.go:276] 1 containers: [b4aa030d5e41]
	I0819 11:48:03.035968   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 11:48:03.048718   19417 logs.go:276] 2 containers: [1b1a6c62fc93 098a5dcc915e]
	I0819 11:48:03.048793   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 11:48:03.059872   19417 logs.go:276] 1 containers: [f1268c4fc5da]
	I0819 11:48:03.059944   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 11:48:03.072017   19417 logs.go:276] 2 containers: [2b6fc57c9dd4 2f08f9ae48fe]
	I0819 11:48:03.072088   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 11:48:03.084216   19417 logs.go:276] 0 containers: []
	W0819 11:48:03.084229   19417 logs.go:278] No container was found matching "kindnet"
	I0819 11:48:03.084293   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 11:48:03.101179   19417 logs.go:276] 2 containers: [a5053756fa3b c34ae12a902c]
	I0819 11:48:03.101201   19417 logs.go:123] Gathering logs for kubelet ...
	I0819 11:48:03.101207   19417 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 11:48:03.139733   19417 logs.go:123] Gathering logs for kube-controller-manager [2f08f9ae48fe] ...
	I0819 11:48:03.139747   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f08f9ae48fe"
	I0819 11:48:03.152505   19417 logs.go:123] Gathering logs for etcd [c6e42f0936b0] ...
	I0819 11:48:03.152516   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c6e42f0936b0"
	I0819 11:48:03.168486   19417 logs.go:123] Gathering logs for kube-scheduler [1b1a6c62fc93] ...
	I0819 11:48:03.168500   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b1a6c62fc93"
	I0819 11:48:03.182332   19417 logs.go:123] Gathering logs for kube-proxy [f1268c4fc5da] ...
	I0819 11:48:03.182345   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f1268c4fc5da"
	I0819 11:48:03.196484   19417 logs.go:123] Gathering logs for dmesg ...
	I0819 11:48:03.196496   19417 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 11:48:03.201571   19417 logs.go:123] Gathering logs for describe nodes ...
	I0819 11:48:03.201581   19417 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 11:48:03.243486   19417 logs.go:123] Gathering logs for etcd [22d8a4cc2e5d] ...
	I0819 11:48:03.243497   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 22d8a4cc2e5d"
	I0819 11:48:03.259051   19417 logs.go:123] Gathering logs for coredns [b4aa030d5e41] ...
	I0819 11:48:03.259066   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4aa030d5e41"
	I0819 11:48:03.272030   19417 logs.go:123] Gathering logs for kube-scheduler [098a5dcc915e] ...
	I0819 11:48:03.272045   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 098a5dcc915e"
	I0819 11:48:03.294688   19417 logs.go:123] Gathering logs for kube-controller-manager [2b6fc57c9dd4] ...
	I0819 11:48:03.294701   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2b6fc57c9dd4"
	I0819 11:48:03.315705   19417 logs.go:123] Gathering logs for storage-provisioner [c34ae12a902c] ...
	I0819 11:48:03.315717   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c34ae12a902c"
	I0819 11:48:03.332141   19417 logs.go:123] Gathering logs for Docker ...
	I0819 11:48:03.332152   19417 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 11:48:03.359184   19417 logs.go:123] Gathering logs for container status ...
	I0819 11:48:03.359203   19417 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 11:48:03.372621   19417 logs.go:123] Gathering logs for kube-apiserver [5e36e77adae3] ...
	I0819 11:48:03.372635   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e36e77adae3"
	I0819 11:48:03.391183   19417 logs.go:123] Gathering logs for kube-apiserver [c856d6e29342] ...
	I0819 11:48:03.391198   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c856d6e29342"
	I0819 11:48:03.405574   19417 logs.go:123] Gathering logs for storage-provisioner [a5053756fa3b] ...
	I0819 11:48:03.405587   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5053756fa3b"
	I0819 11:48:03.419834   19545 docker.go:649] duration metric: took 961.927042ms to copy over tarball
	I0819 11:48:03.419895   19545 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0819 11:48:04.580733   19545 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.160824542s)
	I0819 11:48:04.580748   19545 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0819 11:48:04.596182   19545 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0819 11:48:04.599517   19545 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2523 bytes)
	I0819 11:48:04.604627   19545 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 11:48:04.673355   19545 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0819 11:48:06.190127   19545 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.516762042s)
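	Note: extracting the tarball directly into /var and restarting dockerd is what publishes the preloaded layers; the xattr flags preserve capability bits on the binaries, and the restart makes Docker re-read its image store (the second docker images listing below confirms it). A hedged sketch of the same two commands via os/exec, recording the duration metric the log prints; it needs root and is meant only for a throwaway VM:

    package main

    import (
        "log"
        "os/exec"
        "time"
    )

    // run executes a command and logs how long it took, like ssh_runner's
    // "Completed: ... (1.516762042s)" lines above.
    func run(name string, args ...string) {
        start := time.Now()
        if out, err := exec.Command(name, args...).CombinedOutput(); err != nil {
            log.Fatalf("%s %v: %v\n%s", name, args, err, out)
        }
        log.Printf("%s took %s", name, time.Since(start))
    }

    func main() {
        // sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
        run("sudo", "tar", "--xattrs", "--xattrs-include", "security.capability",
            "-I", "lz4", "-C", "/var", "-xf", "/preloaded.tar.lz4")
        run("sudo", "systemctl", "restart", "docker")
    }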
	I0819 11:48:06.190206   19545 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0819 11:48:06.207854   19545 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	<none>:<none>
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0819 11:48:06.207864   19545 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0819 11:48:06.207869   19545 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.1 registry.k8s.io/kube-controller-manager:v1.24.1 registry.k8s.io/kube-scheduler:v1.24.1 registry.k8s.io/kube-proxy:v1.24.1 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0819 11:48:06.211713   19545 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0819 11:48:06.213598   19545 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.24.1
	I0819 11:48:06.215492   19545 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0819 11:48:06.215623   19545 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0819 11:48:06.217385   19545 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.1
	I0819 11:48:06.217466   19545 image.go:135] retrieving image: registry.k8s.io/pause:3.7
	I0819 11:48:06.218922   19545 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.24.1
	I0819 11:48:06.218970   19545 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0819 11:48:06.219968   19545 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0819 11:48:06.220012   19545 image.go:178] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0819 11:48:06.221179   19545 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0819 11:48:06.221210   19545 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.1
	I0819 11:48:06.222312   19545 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0819 11:48:06.222355   19545 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.24.1
	I0819 11:48:06.223200   19545 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0819 11:48:06.223845   19545 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.1
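	Note: the burst of "daemon lookup ... No such image" lines is expected, not an error. For each required image, image.go first asks the local Docker daemon and only falls back to the on-disk cache when the daemon lacks the tag. A minimal sketch of that probe, approximated here with docker image inspect (minikube itself goes through go-containerregistry, so treat this as an illustration rather than its real code path):

    package main

    import (
        "fmt"
        "os/exec"
    )

    // missingImages probes the local Docker daemon the way the "daemon lookup"
    // lines above do: `docker image inspect` exits non-zero when the tag is absent.
    func missingImages(images []string) []string {
        var missing []string
        for _, img := range images {
            if err := exec.Command("docker", "image", "inspect", img).Run(); err != nil {
                missing = append(missing, img) // must be pulled or loaded from cache
            }
        }
        return missing
    }

    func main() {
        imgs := []string{
            "registry.k8s.io/pause:3.7",
            "registry.k8s.io/etcd:3.5.3-0",
        }
        fmt.Println("missing:", missingImages(imgs))
    }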
	I0819 11:48:06.670944   19545 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I0819 11:48:06.671444   19545 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.1
	I0819 11:48:06.676091   19545 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	W0819 11:48:06.684378   19545 image.go:283] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0819 11:48:06.684538   19545 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0819 11:48:06.693131   19545 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "a9a710bb96df080e6b9c720eb85dc5b832ff84abf77263548d74fedec6466a5a" in container runtime
	I0819 11:48:06.693165   19545 docker.go:337] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0819 11:48:06.693216   19545 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.5.3-0
	I0819 11:48:06.697082   19545 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.1" needs transfer: "registry.k8s.io/kube-proxy:v1.24.1" does not exist at hash "fcbd620bbac080b658602c597709b377cb2b3fec134a097a27f94cba9b2ed2fa" in container runtime
	I0819 11:48:06.697098   19545 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.24.1
	I0819 11:48:06.697139   19545 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.24.1
	I0819 11:48:06.701481   19545 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.1
	I0819 11:48:06.705315   19545 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550" in container runtime
	I0819 11:48:06.705339   19545 docker.go:337] Removing image: registry.k8s.io/pause:3.7
	I0819 11:48:06.705388   19545 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.7
	I0819 11:48:06.713147   19545 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.1
	I0819 11:48:06.722529   19545 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "6af7f860a8197bfa3fdb7dec2061aa33870253e87a1e91c492d55b8a4fd38d14" in container runtime
	I0819 11:48:06.722554   19545 docker.go:337] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0819 11:48:06.722611   19545 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0819 11:48:06.724685   19545 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.1
	I0819 11:48:06.725980   19545 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19423-17178/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1
	I0819 11:48:06.726002   19545 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19423-17178/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I0819 11:48:06.726103   19545 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0
	I0819 11:48:06.737264   19545 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19423-17178/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0819 11:48:06.737396   19545 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I0819 11:48:06.737409   19545 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.1" does not exist at hash "7c5896a75862a8bc252122185a929cec1393db2c525f6440137d4fbf46bbf6f9" in container runtime
	I0819 11:48:06.737425   19545 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.24.1
	I0819 11:48:06.737465   19545 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.24.1
	I0819 11:48:06.746613   19545 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.1" does not exist at hash "000c19baf6bba51ff7ae5449f4c7a16d9190cef0263f58070dbf62cea9c4982f" in container runtime
	I0819 11:48:06.746634   19545 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.24.1
	I0819 11:48:06.746691   19545 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.24.1
	I0819 11:48:06.751121   19545 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19423-17178/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0819 11:48:06.751244   19545 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I0819 11:48:06.753145   19545 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.1" does not exist at hash "f61bbe9259d7caa580deb6c8e4bfd1780c7b5887efe3aa3adc7cc74f68a27c1b" in container runtime
	I0819 11:48:06.753151   19545 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.5.3-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.5.3-0': No such file or directory
	I0819 11:48:06.753166   19545 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0819 11:48:06.753173   19545 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-17178/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 --> /var/lib/minikube/images/etcd_3.5.3-0 (81117184 bytes)
	I0819 11:48:06.753181   19545 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.7: stat -c "%s %y" /var/lib/minikube/images/pause_3.7: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.7': No such file or directory
	I0819 11:48:06.753193   19545 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-17178/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 --> /var/lib/minikube/images/pause_3.7 (268288 bytes)
	I0819 11:48:06.753205   19545 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.24.1
	I0819 11:48:06.772340   19545 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19423-17178/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1
	I0819 11:48:06.774673   19545 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19423-17178/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1
	I0819 11:48:06.774700   19545 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.8.6: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.8.6': No such file or directory
	I0819 11:48:06.774715   19545 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-17178/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 --> /var/lib/minikube/images/coredns_v1.8.6 (12318720 bytes)
	I0819 11:48:06.787266   19545 docker.go:304] Loading image: /var/lib/minikube/images/pause_3.7
	I0819 11:48:06.787282   19545 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.7 | docker load"
	I0819 11:48:06.789611   19545 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19423-17178/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1
	I0819 11:48:06.860968   19545 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19423-17178/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 from cache
	W0819 11:48:06.863217   19545 image.go:283] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0819 11:48:06.863314   19545 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0819 11:48:06.869527   19545 docker.go:304] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0819 11:48:06.869538   19545 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.8.6 | docker load"
	I0819 11:48:06.900719   19545 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I0819 11:48:06.900744   19545 docker.go:337] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0819 11:48:06.900812   19545 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0819 11:48:06.963600   19545 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19423-17178/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I0819 11:48:06.966581   19545 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19423-17178/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0819 11:48:06.966702   19545 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0819 11:48:06.980011   19545 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I0819 11:48:06.980043   19545 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-17178/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I0819 11:48:07.046480   19545 docker.go:304] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0819 11:48:07.046498   19545 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load"
	I0819 11:48:07.394228   19545 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19423-17178/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0819 11:48:07.394252   19545 docker.go:304] Loading image: /var/lib/minikube/images/etcd_3.5.3-0
	I0819 11:48:07.394260   19545 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/etcd_3.5.3-0 | docker load"
	I0819 11:48:07.534750   19545 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19423-17178/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 from cache
	I0819 11:48:07.534790   19545 cache_images.go:92] duration metric: took 1.326920583s to LoadCachedImages
	W0819 11:48:07.534839   19545 out.go:270] X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/19423-17178/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1: no such file or directory
	I0819 11:48:07.534845   19545 kubeadm.go:934] updating node { 10.0.2.15 8443 v1.24.1 docker true true} ...
	I0819 11:48:07.534900   19545 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=stopped-upgrade-604000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.0.2.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-604000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0819 11:48:07.534964   19545 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0819 11:48:07.548514   19545 cni.go:84] Creating CNI manager for ""
	I0819 11:48:07.548528   19545 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0819 11:48:07.548536   19545 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0819 11:48:07.548544   19545 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.0.2.15 APIServerPort:8443 KubernetesVersion:v1.24.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:stopped-upgrade-604000 NodeName:stopped-upgrade-604000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.0.2.15"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.0.2.15 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0819 11:48:07.548606   19545 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.0.2.15
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "stopped-upgrade-604000"
	  kubeletExtraArgs:
	    node-ip: 10.0.2.15
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.0.2.15"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
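	Note: the YAML above is rendered from the kubeadm options struct logged at kubeadm.go:181 via a Go template. A toy reconstruction of that step for just the InitConfiguration header (the struct fields and template text here are assumptions for illustration, not minikube's actual template):

    package main

    import (
        "os"
        "text/template"
    )

    type opts struct {
        AdvertiseAddress string
        BindPort         int
        NodeName         string
        CRISocket        string
    }

    // A tiny slice of the InitConfiguration above, filled from the options
    // struct the way minikube renders its full kubeadm config template.
    const tmpl = `apiVersion: kubeadm.k8s.io/v1beta3
    kind: InitConfiguration
    localAPIEndpoint:
      advertiseAddress: {{.AdvertiseAddress}}
      bindPort: {{.BindPort}}
    nodeRegistration:
      criSocket: {{.CRISocket}}
      name: "{{.NodeName}}"
    `

    func main() {
        t := template.Must(template.New("kubeadm").Parse(tmpl))
        if err := t.Execute(os.Stdout, opts{
            AdvertiseAddress: "10.0.2.15",
            BindPort:         8443,
            NodeName:         "stopped-upgrade-604000",
            CRISocket:        "unix:///var/run/cri-dockerd.sock",
        }); err != nil {
            panic(err)
        }
    }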
	I0819 11:48:07.548662   19545 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.1
	I0819 11:48:07.551611   19545 binaries.go:44] Found k8s binaries, skipping transfer
	I0819 11:48:07.551640   19545 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0819 11:48:07.554748   19545 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I0819 11:48:07.559909   19545 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0819 11:48:07.564981   19545 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2096 bytes)
	I0819 11:48:07.570045   19545 ssh_runner.go:195] Run: grep 10.0.2.15	control-plane.minikube.internal$ /etc/hosts
	I0819 11:48:07.571382   19545 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "10.0.2.15	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
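	Note: the one-liner above makes the /etc/hosts update idempotent: grep -v drops any line already ending in the control-plane name, the fresh "IP<TAB>host" entry is appended, and the result is copied back over /etc/hosts, so repeated starts never accumulate duplicates. The same logic as a pure-Go string transform (operating on an in-memory copy rather than the real file):

    package main

    import (
        "fmt"
        "strings"
    )

    // upsertHost mirrors the grep -v / echo / cp one-liner above: drop any line
    // already ending in the host name, then append the fresh "IP\thost" entry.
    func upsertHost(hosts, ip, name string) string {
        var kept []string
        for _, line := range strings.Split(strings.TrimRight(hosts, "\n"), "\n") {
            if !strings.HasSuffix(line, "\t"+name) {
                kept = append(kept, line)
            }
        }
        kept = append(kept, ip+"\t"+name)
        return strings.Join(kept, "\n") + "\n"
    }

    func main() {
        in := "127.0.0.1\tlocalhost\n10.0.2.2\tcontrol-plane.minikube.internal\n"
        fmt.Print(upsertHost(in, "10.0.2.15", "control-plane.minikube.internal"))
    }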
	I0819 11:48:07.575272   19545 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 11:48:07.652911   19545 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0819 11:48:07.662383   19545 certs.go:68] Setting up /Users/jenkins/minikube-integration/19423-17178/.minikube/profiles/stopped-upgrade-604000 for IP: 10.0.2.15
	I0819 11:48:07.662393   19545 certs.go:194] generating shared ca certs ...
	I0819 11:48:07.662402   19545 certs.go:226] acquiring lock for ca certs: {Name:mk011f5d2dbb88087ec73da4d5406de1c263092b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 11:48:07.662565   19545 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19423-17178/.minikube/ca.key
	I0819 11:48:07.662609   19545 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19423-17178/.minikube/proxy-client-ca.key
	I0819 11:48:07.662614   19545 certs.go:256] generating profile certs ...
	I0819 11:48:07.662677   19545 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19423-17178/.minikube/profiles/stopped-upgrade-604000/client.key
	I0819 11:48:07.662697   19545 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19423-17178/.minikube/profiles/stopped-upgrade-604000/apiserver.key.287f64a6
	I0819 11:48:07.662705   19545 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19423-17178/.minikube/profiles/stopped-upgrade-604000/apiserver.crt.287f64a6 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.0.2.15]
	I0819 11:48:07.743846   19545 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19423-17178/.minikube/profiles/stopped-upgrade-604000/apiserver.crt.287f64a6 ...
	I0819 11:48:07.743862   19545 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19423-17178/.minikube/profiles/stopped-upgrade-604000/apiserver.crt.287f64a6: {Name:mkce586ba565d84314129b208c6d671e64385521 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 11:48:07.744186   19545 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19423-17178/.minikube/profiles/stopped-upgrade-604000/apiserver.key.287f64a6 ...
	I0819 11:48:07.744195   19545 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19423-17178/.minikube/profiles/stopped-upgrade-604000/apiserver.key.287f64a6: {Name:mkde12f695304baaf9217221c44d62f8633d153d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 11:48:07.744333   19545 certs.go:381] copying /Users/jenkins/minikube-integration/19423-17178/.minikube/profiles/stopped-upgrade-604000/apiserver.crt.287f64a6 -> /Users/jenkins/minikube-integration/19423-17178/.minikube/profiles/stopped-upgrade-604000/apiserver.crt
	I0819 11:48:07.746444   19545 certs.go:385] copying /Users/jenkins/minikube-integration/19423-17178/.minikube/profiles/stopped-upgrade-604000/apiserver.key.287f64a6 -> /Users/jenkins/minikube-integration/19423-17178/.minikube/profiles/stopped-upgrade-604000/apiserver.key
	I0819 11:48:07.746603   19545 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19423-17178/.minikube/profiles/stopped-upgrade-604000/proxy-client.key
	I0819 11:48:07.746733   19545 certs.go:484] found cert: /Users/jenkins/minikube-integration/19423-17178/.minikube/certs/17654.pem (1338 bytes)
	W0819 11:48:07.746755   19545 certs.go:480] ignoring /Users/jenkins/minikube-integration/19423-17178/.minikube/certs/17654_empty.pem, impossibly tiny 0 bytes
	I0819 11:48:07.746760   19545 certs.go:484] found cert: /Users/jenkins/minikube-integration/19423-17178/.minikube/certs/ca-key.pem (1675 bytes)
	I0819 11:48:07.746798   19545 certs.go:484] found cert: /Users/jenkins/minikube-integration/19423-17178/.minikube/certs/ca.pem (1082 bytes)
	I0819 11:48:07.746817   19545 certs.go:484] found cert: /Users/jenkins/minikube-integration/19423-17178/.minikube/certs/cert.pem (1123 bytes)
	I0819 11:48:07.746835   19545 certs.go:484] found cert: /Users/jenkins/minikube-integration/19423-17178/.minikube/certs/key.pem (1679 bytes)
	I0819 11:48:07.746873   19545 certs.go:484] found cert: /Users/jenkins/minikube-integration/19423-17178/.minikube/files/etc/ssl/certs/176542.pem (1708 bytes)
	I0819 11:48:07.747219   19545 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-17178/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0819 11:48:07.754279   19545 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-17178/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0819 11:48:07.761161   19545 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-17178/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0819 11:48:07.767798   19545 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-17178/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0819 11:48:07.774561   19545 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-17178/.minikube/profiles/stopped-upgrade-604000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0819 11:48:07.781585   19545 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-17178/.minikube/profiles/stopped-upgrade-604000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0819 11:48:07.788399   19545 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-17178/.minikube/profiles/stopped-upgrade-604000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0819 11:48:07.795352   19545 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-17178/.minikube/profiles/stopped-upgrade-604000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0819 11:48:07.802817   19545 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-17178/.minikube/files/etc/ssl/certs/176542.pem --> /usr/share/ca-certificates/176542.pem (1708 bytes)
	I0819 11:48:07.809955   19545 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-17178/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0819 11:48:07.816444   19545 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-17178/.minikube/certs/17654.pem --> /usr/share/ca-certificates/17654.pem (1338 bytes)
	I0819 11:48:07.823341   19545 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0819 11:48:07.828679   19545 ssh_runner.go:195] Run: openssl version
	I0819 11:48:07.830545   19545 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/176542.pem && ln -fs /usr/share/ca-certificates/176542.pem /etc/ssl/certs/176542.pem"
	I0819 11:48:07.833689   19545 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/176542.pem
	I0819 11:48:07.835150   19545 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 19 18:32 /usr/share/ca-certificates/176542.pem
	I0819 11:48:07.835173   19545 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/176542.pem
	I0819 11:48:07.837077   19545 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/176542.pem /etc/ssl/certs/3ec20f2e.0"
	I0819 11:48:07.840052   19545 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0819 11:48:07.843408   19545 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0819 11:48:07.844889   19545 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 19 18:44 /usr/share/ca-certificates/minikubeCA.pem
	I0819 11:48:07.844907   19545 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0819 11:48:07.846599   19545 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0819 11:48:07.849916   19545 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/17654.pem && ln -fs /usr/share/ca-certificates/17654.pem /etc/ssl/certs/17654.pem"
	I0819 11:48:07.852907   19545 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/17654.pem
	I0819 11:48:07.854362   19545 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 19 18:32 /usr/share/ca-certificates/17654.pem
	I0819 11:48:07.854388   19545 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/17654.pem
	I0819 11:48:07.856379   19545 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/17654.pem /etc/ssl/certs/51391683.0"
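	Note: each `openssl x509 -hash -noout` call above prints the certificate's subject-name hash (b5213941 for minikubeCA, for example), and OpenSSL resolves trust by looking up "<hash>.0" inside /etc/ssl/certs, which is why every hash is immediately followed by a symlink under that name. A small sketch of the hash-then-link step (linkByHash is an illustrative helper, not minikube code):

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "strings"
    )

    // linkByHash reproduces the hash-and-symlink step above: OpenSSL locates CA
    // certs in certsDir by "<subject-hash>.0", so the PEM must be linked
    // under that computed name for verification to find it.
    func linkByHash(pem, certsDir string) error {
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
        if err != nil {
            return err
        }
        hash := strings.TrimSpace(string(out)) // e.g. "b5213941"
        return os.Symlink(pem, fmt.Sprintf("%s/%s.0", certsDir, hash))
    }

    func main() {
        if err := linkByHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
            fmt.Fprintln(os.Stderr, err)
        }
    }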
	I0819 11:48:07.859770   19545 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0819 11:48:07.861382   19545 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0819 11:48:07.863515   19545 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0819 11:48:07.865588   19545 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0819 11:48:07.867651   19545 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0819 11:48:07.869564   19545 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0819 11:48:07.871334   19545 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
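	Note: -checkend 86400 makes openssl exit non-zero when the certificate expires within the next 24 hours; a zero exit on all six control-plane certs is what lets minikube skip regenerating them. The equivalent test in pure Go with crypto/x509, as a sketch:

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    // expiresWithin reports whether the first certificate in the PEM file
    // expires inside the given window, matching `openssl x509 -checkend`.
    func expiresWithin(path string, window time.Duration) (bool, error) {
        data, err := os.ReadFile(path)
        if err != nil {
            return false, err
        }
        block, _ := pem.Decode(data)
        if block == nil {
            return false, fmt.Errorf("%s: no PEM block", path)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return false, err
        }
        return time.Now().Add(window).After(cert.NotAfter), nil
    }

    func main() {
        soon, err := expiresWithin("/var/lib/minikube/certs/apiserver.crt", 24*time.Hour)
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            return
        }
        fmt.Println("expires within 24h:", soon)
    }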
	I0819 11:48:07.873316   19545 kubeadm.go:392] StartCluster: {Name:stopped-upgrade-604000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:53361 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-604000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0819 11:48:07.873384   19545 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0819 11:48:07.883296   19545 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0819 11:48:07.886585   19545 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0819 11:48:07.886593   19545 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0819 11:48:07.886616   19545 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0819 11:48:07.889301   19545 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0819 11:48:07.889596   19545 kubeconfig.go:47] verify endpoint returned: get endpoint: "stopped-upgrade-604000" does not appear in /Users/jenkins/minikube-integration/19423-17178/kubeconfig
	I0819 11:48:07.889688   19545 kubeconfig.go:62] /Users/jenkins/minikube-integration/19423-17178/kubeconfig needs updating (will repair): [kubeconfig missing "stopped-upgrade-604000" cluster setting kubeconfig missing "stopped-upgrade-604000" context setting]
	I0819 11:48:07.889877   19545 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19423-17178/kubeconfig: {Name:mkcd8a4d29cc5f324e197a69fe511a87d17c54d6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 11:48:07.890305   19545 kapi.go:59] client config for stopped-upgrade-604000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19423-17178/.minikube/profiles/stopped-upgrade-604000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19423-17178/.minikube/profiles/stopped-upgrade-604000/client.key", CAFile:"/Users/jenkins/minikube-integration/19423-17178/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x105aed990), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0819 11:48:07.890629   19545 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0819 11:48:07.893159   19545 kubeadm.go:640] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -11,7 +11,7 @@
	       - signing
	       - authentication
	 nodeRegistration:
	-  criSocket: /var/run/cri-dockerd.sock
	+  criSocket: unix:///var/run/cri-dockerd.sock
	   name: "stopped-upgrade-604000"
	   kubeletExtraArgs:
	     node-ip: 10.0.2.15
	@@ -49,7 +49,9 @@
	 authentication:
	   x509:
	     clientCAFile: /var/lib/minikube/certs/ca.crt
	-cgroupDriver: systemd
	+cgroupDriver: cgroupfs
	+hairpinMode: hairpin-veth
	+runtimeRequestTimeout: 15m
	 clusterDomain: "cluster.local"
	 # disable disk resource management by default
	 imageGCHighThresholdPercent: 100
	
	-- /stdout --
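	Note: the drift check is nothing more than `diff -u` over the old and newly rendered kubeadm.yaml; exit status 1 signals a change, and the diff body shows exactly what the v1.24 upgrade altered (the criSocket gained its unix:// scheme and the kubelet cgroupDriver moved from systemd to cgroupfs). A sketch that wraps the same exit-code convention:

    package main

    import (
        "errors"
        "fmt"
        "os/exec"
    )

    // configDrifted wraps `diff -u`: exit 0 = identical, exit 1 = drift,
    // anything else is a real error. The unified diff itself is in out.
    func configDrifted(oldPath, newPath string) (bool, string, error) {
        out, err := exec.Command("diff", "-u", oldPath, newPath).CombinedOutput()
        var ee *exec.ExitError
        if errors.As(err, &ee) && ee.ExitCode() == 1 {
            return true, string(out), nil
        }
        return false, string(out), err
    }

    func main() {
        drifted, diff, err := configDrifted("/var/tmp/minikube/kubeadm.yaml", "/var/tmp/minikube/kubeadm.yaml.new")
        if err != nil {
            fmt.Println("diff failed:", err)
            return
        }
        fmt.Println("drifted:", drifted)
        fmt.Print(diff)
    }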
	I0819 11:48:07.893168   19545 kubeadm.go:1160] stopping kube-system containers ...
	I0819 11:48:07.893205   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0819 11:48:07.903756   19545 docker.go:483] Stopping containers: [04973e14da79 07703ddc91e4 7a7ed811dead e935629bad41 e5fb176acee3 e9101e64955c 16596966724a bb9919797493]
	I0819 11:48:07.903818   19545 ssh_runner.go:195] Run: docker stop 04973e14da79 07703ddc91e4 7a7ed811dead e935629bad41 e5fb176acee3 e9101e64955c 16596966724a bb9919797493
	I0819 11:48:07.914827   19545 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0819 11:48:07.920653   19545 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0819 11:48:07.923663   19545 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0819 11:48:07.923669   19545 kubeadm.go:157] found existing configuration files:
	
	I0819 11:48:07.923691   19545 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:53361 /etc/kubernetes/admin.conf
	I0819 11:48:07.926769   19545 kubeadm.go:163] "https://control-plane.minikube.internal:53361" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:53361 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0819 11:48:07.926792   19545 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0819 11:48:07.929728   19545 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:53361 /etc/kubernetes/kubelet.conf
	I0819 11:48:07.932150   19545 kubeadm.go:163] "https://control-plane.minikube.internal:53361" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:53361 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0819 11:48:07.932173   19545 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0819 11:48:07.935225   19545 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:53361 /etc/kubernetes/controller-manager.conf
	I0819 11:48:07.938132   19545 kubeadm.go:163] "https://control-plane.minikube.internal:53361" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:53361 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0819 11:48:07.938154   19545 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0819 11:48:07.940618   19545 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:53361 /etc/kubernetes/scheduler.conf
	I0819 11:48:07.943489   19545 kubeadm.go:163] "https://control-plane.minikube.internal:53361" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:53361 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0819 11:48:07.943513   19545 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0819 11:48:07.946511   19545 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
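	Note: the four grep/rm pairs above are one loop unrolled: each kubeconfig under /etc/kubernetes is kept only if it already references the expected control-plane endpoint, and is otherwise removed so the `kubeadm init phase` calls below can regenerate it; here all four files are missing, so the rm calls are no-ops. Condensed into a sketch (pruneStaleConfs is an illustrative name, not minikube's):

    package main

    import (
        "bytes"
        "fmt"
        "os"
    )

    // pruneStaleConfs removes every conf that does not mention the expected
    // endpoint, mirroring the grep-then-rm pairs in the log above.
    func pruneStaleConfs(endpoint string, confs []string) {
        for _, c := range confs {
            data, err := os.ReadFile(c)
            if err != nil || !bytes.Contains(data, []byte(endpoint)) {
                os.Remove(c) // missing or stale: let `kubeadm init phase kubeconfig` recreate it
                fmt.Println("removed:", c)
            }
        }
    }

    func main() {
        pruneStaleConfs("https://control-plane.minikube.internal:53361", []string{
            "/etc/kubernetes/admin.conf",
            "/etc/kubernetes/kubelet.conf",
            "/etc/kubernetes/controller-manager.conf",
            "/etc/kubernetes/scheduler.conf",
        })
    }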
	I0819 11:48:07.949257   19545 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0819 11:48:07.969901   19545 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0819 11:48:05.920244   19417 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:48:08.438752   19545 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0819 11:48:08.572862   19545 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0819 11:48:08.602887   19545 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0819 11:48:08.631752   19545 api_server.go:52] waiting for apiserver process to appear ...
	I0819 11:48:08.631845   19545 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 11:48:09.133158   19545 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 11:48:09.633911   19545 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 11:48:09.639216   19545 api_server.go:72] duration metric: took 1.007472458s to wait for apiserver process to appear ...
	I0819 11:48:09.639229   19545 api_server.go:88] waiting for apiserver healthz status ...
	I0819 11:48:09.639238   19545 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
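	Note: from here the two interleaved processes (PIDs 19545 and 19417) settle into the same wait loop: confirm a kube-apiserver process with pgrep, then poll https://10.0.2.15:8443/healthz until it answers, gathering container logs between rounds; every "context deadline exceeded" line below is one failed round of that poll. A minimal version of the healthz poller (InsecureSkipVerify is tolerable here only because it probes a throwaway test cluster's self-signed endpoint):

    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    // waitHealthz polls the apiserver healthz endpoint until it returns 200
    // or the deadline passes, like api_server.go's wait loop in the log.
    func waitHealthz(url string, deadline time.Duration) error {
        client := &http.Client{
            Timeout:   5 * time.Second, // matches the timeout cadence seen above
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        end := time.Now().Add(deadline)
        for time.Now().Before(end) {
            resp, err := client.Get(url)
            if err == nil {
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    return nil
                }
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("apiserver never became healthy at %s", url)
    }

    func main() {
        fmt.Println(waitHealthz("https://10.0.2.15:8443/healthz", 4*time.Minute))
    }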
	I0819 11:48:10.922481   19417 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:48:10.922663   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 11:48:10.939605   19417 logs.go:276] 2 containers: [5e36e77adae3 c856d6e29342]
	I0819 11:48:10.939693   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 11:48:10.954408   19417 logs.go:276] 2 containers: [22d8a4cc2e5d c6e42f0936b0]
	I0819 11:48:10.954482   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 11:48:10.965859   19417 logs.go:276] 1 containers: [b4aa030d5e41]
	I0819 11:48:10.965926   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 11:48:10.977902   19417 logs.go:276] 2 containers: [1b1a6c62fc93 098a5dcc915e]
	I0819 11:48:10.977979   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 11:48:10.990351   19417 logs.go:276] 1 containers: [f1268c4fc5da]
	I0819 11:48:10.990424   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 11:48:11.001650   19417 logs.go:276] 2 containers: [2b6fc57c9dd4 2f08f9ae48fe]
	I0819 11:48:11.001727   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 11:48:11.012796   19417 logs.go:276] 0 containers: []
	W0819 11:48:11.012809   19417 logs.go:278] No container was found matching "kindnet"
	I0819 11:48:11.012875   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 11:48:11.025092   19417 logs.go:276] 2 containers: [a5053756fa3b c34ae12a902c]
	I0819 11:48:11.025111   19417 logs.go:123] Gathering logs for kubelet ...
	I0819 11:48:11.025117   19417 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 11:48:11.063321   19417 logs.go:123] Gathering logs for storage-provisioner [a5053756fa3b] ...
	I0819 11:48:11.063338   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5053756fa3b"
	I0819 11:48:11.080015   19417 logs.go:123] Gathering logs for storage-provisioner [c34ae12a902c] ...
	I0819 11:48:11.080030   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c34ae12a902c"
	I0819 11:48:11.093166   19417 logs.go:123] Gathering logs for Docker ...
	I0819 11:48:11.093180   19417 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 11:48:11.117880   19417 logs.go:123] Gathering logs for container status ...
	I0819 11:48:11.117899   19417 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 11:48:11.131373   19417 logs.go:123] Gathering logs for dmesg ...
	I0819 11:48:11.131387   19417 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 11:48:11.136516   19417 logs.go:123] Gathering logs for etcd [22d8a4cc2e5d] ...
	I0819 11:48:11.136527   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 22d8a4cc2e5d"
	I0819 11:48:11.151090   19417 logs.go:123] Gathering logs for kube-scheduler [098a5dcc915e] ...
	I0819 11:48:11.151102   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 098a5dcc915e"
	I0819 11:48:11.167569   19417 logs.go:123] Gathering logs for kube-controller-manager [2b6fc57c9dd4] ...
	I0819 11:48:11.167587   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2b6fc57c9dd4"
	I0819 11:48:11.195764   19417 logs.go:123] Gathering logs for kube-apiserver [5e36e77adae3] ...
	I0819 11:48:11.195779   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e36e77adae3"
	I0819 11:48:11.210357   19417 logs.go:123] Gathering logs for coredns [b4aa030d5e41] ...
	I0819 11:48:11.210370   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4aa030d5e41"
	I0819 11:48:11.224129   19417 logs.go:123] Gathering logs for kube-scheduler [1b1a6c62fc93] ...
	I0819 11:48:11.224141   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b1a6c62fc93"
	I0819 11:48:11.247152   19417 logs.go:123] Gathering logs for kube-proxy [f1268c4fc5da] ...
	I0819 11:48:11.247167   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f1268c4fc5da"
	I0819 11:48:11.259054   19417 logs.go:123] Gathering logs for describe nodes ...
	I0819 11:48:11.259066   19417 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 11:48:11.295466   19417 logs.go:123] Gathering logs for kube-apiserver [c856d6e29342] ...
	I0819 11:48:11.295479   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c856d6e29342"
	I0819 11:48:11.308730   19417 logs.go:123] Gathering logs for etcd [c6e42f0936b0] ...
	I0819 11:48:11.308742   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c6e42f0936b0"
	I0819 11:48:11.323887   19417 logs.go:123] Gathering logs for kube-controller-manager [2f08f9ae48fe] ...
	I0819 11:48:11.323904   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f08f9ae48fe"
	I0819 11:48:14.639783   19545 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:48:14.639842   19545 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:48:13.838598   19417 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:48:19.641288   19545 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:48:19.641334   19545 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:48:18.840159   19417 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:48:18.840349   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 11:48:18.852139   19417 logs.go:276] 2 containers: [5e36e77adae3 c856d6e29342]
	I0819 11:48:18.852210   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 11:48:18.862649   19417 logs.go:276] 2 containers: [22d8a4cc2e5d c6e42f0936b0]
	I0819 11:48:18.862715   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 11:48:18.872777   19417 logs.go:276] 1 containers: [b4aa030d5e41]
	I0819 11:48:18.872840   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 11:48:18.883000   19417 logs.go:276] 2 containers: [1b1a6c62fc93 098a5dcc915e]
	I0819 11:48:18.883069   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 11:48:18.893310   19417 logs.go:276] 1 containers: [f1268c4fc5da]
	I0819 11:48:18.893378   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 11:48:18.903956   19417 logs.go:276] 2 containers: [2b6fc57c9dd4 2f08f9ae48fe]
	I0819 11:48:18.904024   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 11:48:18.914614   19417 logs.go:276] 0 containers: []
	W0819 11:48:18.914625   19417 logs.go:278] No container was found matching "kindnet"
	I0819 11:48:18.914677   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 11:48:18.924672   19417 logs.go:276] 2 containers: [a5053756fa3b c34ae12a902c]
	I0819 11:48:18.924689   19417 logs.go:123] Gathering logs for describe nodes ...
	I0819 11:48:18.924695   19417 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 11:48:18.961393   19417 logs.go:123] Gathering logs for kube-controller-manager [2f08f9ae48fe] ...
	I0819 11:48:18.961407   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f08f9ae48fe"
	I0819 11:48:18.974674   19417 logs.go:123] Gathering logs for storage-provisioner [a5053756fa3b] ...
	I0819 11:48:18.974686   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5053756fa3b"
	I0819 11:48:18.989485   19417 logs.go:123] Gathering logs for Docker ...
	I0819 11:48:18.989497   19417 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 11:48:19.013241   19417 logs.go:123] Gathering logs for dmesg ...
	I0819 11:48:19.013250   19417 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 11:48:19.017832   19417 logs.go:123] Gathering logs for kube-apiserver [5e36e77adae3] ...
	I0819 11:48:19.017840   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e36e77adae3"
	I0819 11:48:19.032205   19417 logs.go:123] Gathering logs for kube-apiserver [c856d6e29342] ...
	I0819 11:48:19.032217   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c856d6e29342"
	I0819 11:48:19.045242   19417 logs.go:123] Gathering logs for etcd [22d8a4cc2e5d] ...
	I0819 11:48:19.045255   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 22d8a4cc2e5d"
	I0819 11:48:19.059177   19417 logs.go:123] Gathering logs for kube-controller-manager [2b6fc57c9dd4] ...
	I0819 11:48:19.059191   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2b6fc57c9dd4"
	I0819 11:48:19.075910   19417 logs.go:123] Gathering logs for etcd [c6e42f0936b0] ...
	I0819 11:48:19.075921   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c6e42f0936b0"
	I0819 11:48:19.094219   19417 logs.go:123] Gathering logs for kube-scheduler [098a5dcc915e] ...
	I0819 11:48:19.094228   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 098a5dcc915e"
	I0819 11:48:19.115399   19417 logs.go:123] Gathering logs for kube-proxy [f1268c4fc5da] ...
	I0819 11:48:19.115409   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f1268c4fc5da"
	I0819 11:48:19.127223   19417 logs.go:123] Gathering logs for storage-provisioner [c34ae12a902c] ...
	I0819 11:48:19.127238   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c34ae12a902c"
	I0819 11:48:19.138519   19417 logs.go:123] Gathering logs for container status ...
	I0819 11:48:19.138530   19417 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 11:48:19.150068   19417 logs.go:123] Gathering logs for kubelet ...
	I0819 11:48:19.150083   19417 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 11:48:19.186177   19417 logs.go:123] Gathering logs for coredns [b4aa030d5e41] ...
	I0819 11:48:19.186188   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4aa030d5e41"
	I0819 11:48:19.206486   19417 logs.go:123] Gathering logs for kube-scheduler [1b1a6c62fc93] ...
	I0819 11:48:19.206500   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b1a6c62fc93"
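	The api_server.go:253/269 pairs that bracket each of these gathering cycles are a health poll: GET https://10.0.2.15:8443/healthz with a short per-request timeout, retried until the apiserver responds or an outer deadline expires. A minimal sketch of the pattern (not minikube's code; InsecureSkipVerify stands in for its real CA handling):

	    package main

	    import (
	        "crypto/tls"
	        "fmt"
	        "net/http"
	        "time"
	    )

	    func waitForHealthz(url string, deadline time.Duration) error {
	        client := &http.Client{
	            // Produces the "Client.Timeout exceeded while awaiting headers"
	            // errors seen above when the apiserver never answers.
	            Timeout: 5 * time.Second,
	            Transport: &http.Transport{
	                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
	            },
	        }
	        stop := time.Now().Add(deadline)
	        for time.Now().Before(stop) {
	            resp, err := client.Get(url)
	            if err != nil {
	                fmt.Println("stopped:", err) // mirrors api_server.go:269
	                time.Sleep(500 * time.Millisecond)
	                continue
	            }
	            resp.Body.Close()
	            if resp.StatusCode == http.StatusOK {
	                return nil
	            }
	        }
	        return fmt.Errorf("apiserver not healthy within %v", deadline)
	    }

	    func main() {
	        if err := waitForHealthz("https://10.0.2.15:8443/healthz", 4*time.Minute); err != nil {
	            fmt.Println(err)
	        }
	    }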
	I0819 11:48:21.720465   19417 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:48:24.641757   19545 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:48:24.641809   19545 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:48:26.722729   19417 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:48:26.722835   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 11:48:26.735158   19417 logs.go:276] 2 containers: [5e36e77adae3 c856d6e29342]
	I0819 11:48:26.735234   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 11:48:26.746761   19417 logs.go:276] 2 containers: [22d8a4cc2e5d c6e42f0936b0]
	I0819 11:48:26.746835   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 11:48:26.757910   19417 logs.go:276] 1 containers: [b4aa030d5e41]
	I0819 11:48:26.757998   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 11:48:26.769843   19417 logs.go:276] 2 containers: [1b1a6c62fc93 098a5dcc915e]
	I0819 11:48:26.769912   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 11:48:26.780734   19417 logs.go:276] 1 containers: [f1268c4fc5da]
	I0819 11:48:26.780801   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 11:48:26.793954   19417 logs.go:276] 2 containers: [2b6fc57c9dd4 2f08f9ae48fe]
	I0819 11:48:26.794024   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 11:48:26.804184   19417 logs.go:276] 0 containers: []
	W0819 11:48:26.804198   19417 logs.go:278] No container was found matching "kindnet"
	I0819 11:48:26.804254   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 11:48:26.814772   19417 logs.go:276] 2 containers: [a5053756fa3b c34ae12a902c]
	I0819 11:48:26.814792   19417 logs.go:123] Gathering logs for kubelet ...
	I0819 11:48:26.814799   19417 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 11:48:26.854433   19417 logs.go:123] Gathering logs for kube-apiserver [5e36e77adae3] ...
	I0819 11:48:26.854449   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e36e77adae3"
	I0819 11:48:26.869197   19417 logs.go:123] Gathering logs for kube-apiserver [c856d6e29342] ...
	I0819 11:48:26.869210   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c856d6e29342"
	I0819 11:48:26.882487   19417 logs.go:123] Gathering logs for kube-scheduler [1b1a6c62fc93] ...
	I0819 11:48:26.882499   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b1a6c62fc93"
	I0819 11:48:26.894427   19417 logs.go:123] Gathering logs for storage-provisioner [a5053756fa3b] ...
	I0819 11:48:26.894438   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5053756fa3b"
	I0819 11:48:26.905909   19417 logs.go:123] Gathering logs for dmesg ...
	I0819 11:48:26.905922   19417 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 11:48:26.910431   19417 logs.go:123] Gathering logs for etcd [c6e42f0936b0] ...
	I0819 11:48:26.910440   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c6e42f0936b0"
	I0819 11:48:26.930590   19417 logs.go:123] Gathering logs for coredns [b4aa030d5e41] ...
	I0819 11:48:26.930600   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4aa030d5e41"
	I0819 11:48:26.941984   19417 logs.go:123] Gathering logs for kube-scheduler [098a5dcc915e] ...
	I0819 11:48:26.941995   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 098a5dcc915e"
	I0819 11:48:26.958105   19417 logs.go:123] Gathering logs for kube-controller-manager [2f08f9ae48fe] ...
	I0819 11:48:26.958118   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f08f9ae48fe"
	I0819 11:48:26.969781   19417 logs.go:123] Gathering logs for Docker ...
	I0819 11:48:26.969793   19417 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 11:48:26.994457   19417 logs.go:123] Gathering logs for describe nodes ...
	I0819 11:48:26.994468   19417 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 11:48:27.029431   19417 logs.go:123] Gathering logs for etcd [22d8a4cc2e5d] ...
	I0819 11:48:27.029442   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 22d8a4cc2e5d"
	I0819 11:48:27.043689   19417 logs.go:123] Gathering logs for kube-controller-manager [2b6fc57c9dd4] ...
	I0819 11:48:27.043702   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2b6fc57c9dd4"
	I0819 11:48:27.061604   19417 logs.go:123] Gathering logs for container status ...
	I0819 11:48:27.061615   19417 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 11:48:27.075152   19417 logs.go:123] Gathering logs for kube-proxy [f1268c4fc5da] ...
	I0819 11:48:27.075164   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f1268c4fc5da"
	I0819 11:48:27.091084   19417 logs.go:123] Gathering logs for storage-provisioner [c34ae12a902c] ...
	I0819 11:48:27.091094   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c34ae12a902c"
	I0819 11:48:29.642354   19545 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:48:29.642390   19545 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:48:29.604414   19417 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:48:34.642898   19545 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:48:34.642933   19545 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:48:34.606771   19417 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:48:34.607142   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 11:48:34.643144   19417 logs.go:276] 2 containers: [5e36e77adae3 c856d6e29342]
	I0819 11:48:34.643250   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 11:48:34.663621   19417 logs.go:276] 2 containers: [22d8a4cc2e5d c6e42f0936b0]
	I0819 11:48:34.663723   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 11:48:34.678566   19417 logs.go:276] 1 containers: [b4aa030d5e41]
	I0819 11:48:34.678644   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 11:48:34.692708   19417 logs.go:276] 2 containers: [1b1a6c62fc93 098a5dcc915e]
	I0819 11:48:34.692783   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 11:48:34.703578   19417 logs.go:276] 1 containers: [f1268c4fc5da]
	I0819 11:48:34.703648   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 11:48:34.713885   19417 logs.go:276] 2 containers: [2b6fc57c9dd4 2f08f9ae48fe]
	I0819 11:48:34.713948   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 11:48:34.729046   19417 logs.go:276] 0 containers: []
	W0819 11:48:34.729058   19417 logs.go:278] No container was found matching "kindnet"
	I0819 11:48:34.729115   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 11:48:34.740029   19417 logs.go:276] 2 containers: [a5053756fa3b c34ae12a902c]
	I0819 11:48:34.740049   19417 logs.go:123] Gathering logs for storage-provisioner [c34ae12a902c] ...
	I0819 11:48:34.740055   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c34ae12a902c"
	I0819 11:48:34.751016   19417 logs.go:123] Gathering logs for kubelet ...
	I0819 11:48:34.751027   19417 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 11:48:34.786978   19417 logs.go:123] Gathering logs for kube-apiserver [5e36e77adae3] ...
	I0819 11:48:34.786991   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e36e77adae3"
	I0819 11:48:34.804634   19417 logs.go:123] Gathering logs for kube-scheduler [1b1a6c62fc93] ...
	I0819 11:48:34.804648   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b1a6c62fc93"
	I0819 11:48:34.816394   19417 logs.go:123] Gathering logs for Docker ...
	I0819 11:48:34.816408   19417 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 11:48:34.839161   19417 logs.go:123] Gathering logs for etcd [22d8a4cc2e5d] ...
	I0819 11:48:34.839171   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 22d8a4cc2e5d"
	I0819 11:48:34.857582   19417 logs.go:123] Gathering logs for etcd [c6e42f0936b0] ...
	I0819 11:48:34.857595   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c6e42f0936b0"
	I0819 11:48:34.872410   19417 logs.go:123] Gathering logs for kube-scheduler [098a5dcc915e] ...
	I0819 11:48:34.872426   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 098a5dcc915e"
	I0819 11:48:34.888208   19417 logs.go:123] Gathering logs for kube-controller-manager [2b6fc57c9dd4] ...
	I0819 11:48:34.888221   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2b6fc57c9dd4"
	I0819 11:48:34.905842   19417 logs.go:123] Gathering logs for storage-provisioner [a5053756fa3b] ...
	I0819 11:48:34.905853   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5053756fa3b"
	I0819 11:48:34.917036   19417 logs.go:123] Gathering logs for container status ...
	I0819 11:48:34.917047   19417 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 11:48:34.928848   19417 logs.go:123] Gathering logs for dmesg ...
	I0819 11:48:34.928860   19417 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 11:48:34.933693   19417 logs.go:123] Gathering logs for coredns [b4aa030d5e41] ...
	I0819 11:48:34.933699   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4aa030d5e41"
	I0819 11:48:34.945039   19417 logs.go:123] Gathering logs for kube-proxy [f1268c4fc5da] ...
	I0819 11:48:34.945050   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f1268c4fc5da"
	I0819 11:48:34.957191   19417 logs.go:123] Gathering logs for kube-controller-manager [2f08f9ae48fe] ...
	I0819 11:48:34.957202   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f08f9ae48fe"
	I0819 11:48:34.968938   19417 logs.go:123] Gathering logs for describe nodes ...
	I0819 11:48:34.968950   19417 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 11:48:35.003758   19417 logs.go:123] Gathering logs for kube-apiserver [c856d6e29342] ...
	I0819 11:48:35.003768   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c856d6e29342"
	I0819 11:48:37.522244   19417 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:48:39.643658   19545 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:48:39.643698   19545 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:48:42.524663   19417 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:48:42.524804   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 11:48:42.540808   19417 logs.go:276] 2 containers: [5e36e77adae3 c856d6e29342]
	I0819 11:48:42.540886   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 11:48:42.551743   19417 logs.go:276] 2 containers: [22d8a4cc2e5d c6e42f0936b0]
	I0819 11:48:42.551815   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 11:48:42.566584   19417 logs.go:276] 1 containers: [b4aa030d5e41]
	I0819 11:48:42.566650   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 11:48:42.577203   19417 logs.go:276] 2 containers: [1b1a6c62fc93 098a5dcc915e]
	I0819 11:48:42.577262   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 11:48:42.587204   19417 logs.go:276] 1 containers: [f1268c4fc5da]
	I0819 11:48:42.587264   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 11:48:42.597823   19417 logs.go:276] 2 containers: [2b6fc57c9dd4 2f08f9ae48fe]
	I0819 11:48:42.597884   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 11:48:42.608306   19417 logs.go:276] 0 containers: []
	W0819 11:48:42.608317   19417 logs.go:278] No container was found matching "kindnet"
	I0819 11:48:42.608371   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 11:48:42.619616   19417 logs.go:276] 2 containers: [a5053756fa3b c34ae12a902c]
	I0819 11:48:42.619634   19417 logs.go:123] Gathering logs for coredns [b4aa030d5e41] ...
	I0819 11:48:42.619639   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4aa030d5e41"
	I0819 11:48:42.630796   19417 logs.go:123] Gathering logs for kube-apiserver [c856d6e29342] ...
	I0819 11:48:42.630807   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c856d6e29342"
	I0819 11:48:42.644080   19417 logs.go:123] Gathering logs for kube-controller-manager [2b6fc57c9dd4] ...
	I0819 11:48:42.644093   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2b6fc57c9dd4"
	I0819 11:48:42.662285   19417 logs.go:123] Gathering logs for kube-controller-manager [2f08f9ae48fe] ...
	I0819 11:48:42.662297   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f08f9ae48fe"
	I0819 11:48:42.674249   19417 logs.go:123] Gathering logs for Docker ...
	I0819 11:48:42.674261   19417 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 11:48:42.698002   19417 logs.go:123] Gathering logs for container status ...
	I0819 11:48:42.698012   19417 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 11:48:42.709800   19417 logs.go:123] Gathering logs for kubelet ...
	I0819 11:48:42.709813   19417 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 11:48:42.747699   19417 logs.go:123] Gathering logs for dmesg ...
	I0819 11:48:42.747710   19417 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 11:48:42.753065   19417 logs.go:123] Gathering logs for describe nodes ...
	I0819 11:48:42.753080   19417 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 11:48:42.789892   19417 logs.go:123] Gathering logs for etcd [22d8a4cc2e5d] ...
	I0819 11:48:42.789903   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 22d8a4cc2e5d"
	I0819 11:48:42.805221   19417 logs.go:123] Gathering logs for etcd [c6e42f0936b0] ...
	I0819 11:48:42.805244   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c6e42f0936b0"
	I0819 11:48:42.821619   19417 logs.go:123] Gathering logs for kube-scheduler [098a5dcc915e] ...
	I0819 11:48:42.821640   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 098a5dcc915e"
	I0819 11:48:42.838844   19417 logs.go:123] Gathering logs for storage-provisioner [a5053756fa3b] ...
	I0819 11:48:42.838856   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5053756fa3b"
	I0819 11:48:42.850458   19417 logs.go:123] Gathering logs for storage-provisioner [c34ae12a902c] ...
	I0819 11:48:42.850468   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c34ae12a902c"
	I0819 11:48:42.861844   19417 logs.go:123] Gathering logs for kube-apiserver [5e36e77adae3] ...
	I0819 11:48:42.861856   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e36e77adae3"
	I0819 11:48:42.876120   19417 logs.go:123] Gathering logs for kube-scheduler [1b1a6c62fc93] ...
	I0819 11:48:42.876131   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b1a6c62fc93"
	I0819 11:48:42.887847   19417 logs.go:123] Gathering logs for kube-proxy [f1268c4fc5da] ...
	I0819 11:48:42.887858   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f1268c4fc5da"
	I0819 11:48:44.644955   19545 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:48:44.645007   19545 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:48:45.401530   19417 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:48:49.646661   19545 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:48:49.646728   19545 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:48:50.403931   19417 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:48:50.404187   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 11:48:50.433790   19417 logs.go:276] 2 containers: [5e36e77adae3 c856d6e29342]
	I0819 11:48:50.433925   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 11:48:50.452346   19417 logs.go:276] 2 containers: [22d8a4cc2e5d c6e42f0936b0]
	I0819 11:48:50.452424   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 11:48:50.468093   19417 logs.go:276] 1 containers: [b4aa030d5e41]
	I0819 11:48:50.468165   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 11:48:50.479468   19417 logs.go:276] 2 containers: [1b1a6c62fc93 098a5dcc915e]
	I0819 11:48:50.479534   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 11:48:50.493216   19417 logs.go:276] 1 containers: [f1268c4fc5da]
	I0819 11:48:50.493280   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 11:48:50.507043   19417 logs.go:276] 2 containers: [2b6fc57c9dd4 2f08f9ae48fe]
	I0819 11:48:50.507113   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 11:48:50.517860   19417 logs.go:276] 0 containers: []
	W0819 11:48:50.517871   19417 logs.go:278] No container was found matching "kindnet"
	I0819 11:48:50.517921   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 11:48:50.528310   19417 logs.go:276] 2 containers: [a5053756fa3b c34ae12a902c]
	I0819 11:48:50.528328   19417 logs.go:123] Gathering logs for dmesg ...
	I0819 11:48:50.528333   19417 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 11:48:50.532736   19417 logs.go:123] Gathering logs for kube-apiserver [c856d6e29342] ...
	I0819 11:48:50.532743   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c856d6e29342"
	I0819 11:48:50.545191   19417 logs.go:123] Gathering logs for etcd [22d8a4cc2e5d] ...
	I0819 11:48:50.545203   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 22d8a4cc2e5d"
	I0819 11:48:50.559222   19417 logs.go:123] Gathering logs for coredns [b4aa030d5e41] ...
	I0819 11:48:50.559235   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4aa030d5e41"
	I0819 11:48:50.570176   19417 logs.go:123] Gathering logs for kube-controller-manager [2b6fc57c9dd4] ...
	I0819 11:48:50.570186   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2b6fc57c9dd4"
	I0819 11:48:50.592013   19417 logs.go:123] Gathering logs for Docker ...
	I0819 11:48:50.592021   19417 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 11:48:50.614055   19417 logs.go:123] Gathering logs for kubelet ...
	I0819 11:48:50.614064   19417 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 11:48:50.649464   19417 logs.go:123] Gathering logs for describe nodes ...
	I0819 11:48:50.649474   19417 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 11:48:50.684683   19417 logs.go:123] Gathering logs for kube-apiserver [5e36e77adae3] ...
	I0819 11:48:50.684695   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e36e77adae3"
	I0819 11:48:50.699215   19417 logs.go:123] Gathering logs for container status ...
	I0819 11:48:50.699227   19417 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 11:48:50.710902   19417 logs.go:123] Gathering logs for kube-scheduler [1b1a6c62fc93] ...
	I0819 11:48:50.710914   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b1a6c62fc93"
	I0819 11:48:50.722637   19417 logs.go:123] Gathering logs for kube-scheduler [098a5dcc915e] ...
	I0819 11:48:50.722651   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 098a5dcc915e"
	I0819 11:48:50.738224   19417 logs.go:123] Gathering logs for kube-proxy [f1268c4fc5da] ...
	I0819 11:48:50.738237   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f1268c4fc5da"
	I0819 11:48:50.750221   19417 logs.go:123] Gathering logs for storage-provisioner [c34ae12a902c] ...
	I0819 11:48:50.750234   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c34ae12a902c"
	I0819 11:48:50.761317   19417 logs.go:123] Gathering logs for etcd [c6e42f0936b0] ...
	I0819 11:48:50.761330   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c6e42f0936b0"
	I0819 11:48:50.776307   19417 logs.go:123] Gathering logs for kube-controller-manager [2f08f9ae48fe] ...
	I0819 11:48:50.776318   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f08f9ae48fe"
	I0819 11:48:50.788393   19417 logs.go:123] Gathering logs for storage-provisioner [a5053756fa3b] ...
	I0819 11:48:50.788407   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5053756fa3b"
	I0819 11:48:53.302075   19417 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:48:54.647475   19545 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:48:54.647558   19545 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:48:58.304420   19417 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:48:58.304459   19417 kubeadm.go:597] duration metric: took 4m3.937976042s to restartPrimaryControlPlane
	W0819 11:48:58.304490   19417 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0819 11:48:58.304505   19417 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
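	Here the run changes strategy: after 4m3.9s of failed healthz polls, restartPrimaryControlPlane gives up and the cluster is torn down with `kubeadm reset --force` before a fresh `kubeadm init`. The control flow, sketched with a stand-in run callback in place of ssh_runner (illustrative shape only, not minikube's source):

	    package main

	    import (
	        "errors"
	        "fmt"
	    )

	    // Try to revive the existing control plane; on failure, reset and re-init.
	    func recoverControlPlane(run func(cmd string) error) error {
	        if err := run("restart-primary-control-plane"); err == nil {
	            return nil
	        }
	        fmt.Println("! Unable to restart control-plane node(s), will reset cluster")
	        if err := run("kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"); err != nil {
	            return err
	        }
	        return run("kubeadm init --config /var/tmp/minikube/kubeadm.yaml")
	    }

	    func main() {
	        // Demo: pretend the restart times out, as it did in this log.
	        err := recoverControlPlane(func(cmd string) error {
	            fmt.Println("Run:", cmd)
	            if cmd == "restart-primary-control-plane" {
	                return errors.New("timed out waiting for healthz")
	            }
	            return nil
	        })
	        fmt.Println("result:", err)
	    }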
	I0819 11:48:59.648908   19545 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:48:59.648933   19545 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:48:59.291796   19417 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 11:48:59.296890   19417 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0819 11:48:59.299876   19417 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0819 11:48:59.302886   19417 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0819 11:48:59.302893   19417 kubeadm.go:157] found existing configuration files:
	
	I0819 11:48:59.302919   19417 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:53137 /etc/kubernetes/admin.conf
	I0819 11:48:59.305429   19417 kubeadm.go:163] "https://control-plane.minikube.internal:53137" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:53137 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0819 11:48:59.305451   19417 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0819 11:48:59.308806   19417 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:53137 /etc/kubernetes/kubelet.conf
	I0819 11:48:59.311898   19417 kubeadm.go:163] "https://control-plane.minikube.internal:53137" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:53137 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0819 11:48:59.311920   19417 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0819 11:48:59.314519   19417 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:53137 /etc/kubernetes/controller-manager.conf
	I0819 11:48:59.317352   19417 kubeadm.go:163] "https://control-plane.minikube.internal:53137" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:53137 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0819 11:48:59.317373   19417 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0819 11:48:59.320674   19417 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:53137 /etc/kubernetes/scheduler.conf
	I0819 11:48:59.323854   19417 kubeadm.go:163] "https://control-plane.minikube.internal:53137" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:53137 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0819 11:48:59.323879   19417 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
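	The grep/rm pairs above are a stale-config sweep: each kubeconfig under /etc/kubernetes is kept only if it already references the expected control-plane endpoint, and removed otherwise so `kubeadm init` can regenerate it. In this run every file is already absent, so each grep exits with status 2 and the rm is a no-op. The loop under that reading, sketched in Go (run as root; the real run goes through sudo over SSH):

	    package main

	    import (
	        "fmt"
	        "os"
	        "os/exec"
	    )

	    func main() {
	        endpoint := "https://control-plane.minikube.internal:53137"
	        for _, conf := range []string{
	            "/etc/kubernetes/admin.conf",
	            "/etc/kubernetes/kubelet.conf",
	            "/etc/kubernetes/controller-manager.conf",
	            "/etc/kubernetes/scheduler.conf",
	        } {
	            // grep exits non-zero when the endpoint, or the file itself, is missing.
	            if err := exec.Command("grep", endpoint, conf).Run(); err != nil {
	                fmt.Printf("%q may not be in %s - will remove\n", endpoint, conf)
	                os.Remove(conf) // equivalent of the logged `sudo rm -f`
	            }
	        }
	    }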
	I0819 11:48:59.326371   19417 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0819 11:48:59.392602   19417 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0819 11:49:06.020451   19417 kubeadm.go:310] [init] Using Kubernetes version: v1.24.1
	I0819 11:49:06.020522   19417 kubeadm.go:310] [preflight] Running pre-flight checks
	I0819 11:49:06.020558   19417 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0819 11:49:06.020599   19417 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0819 11:49:06.020686   19417 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0819 11:49:06.020772   19417 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0819 11:49:06.025012   19417 out.go:235]   - Generating certificates and keys ...
	I0819 11:49:06.025048   19417 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0819 11:49:06.025081   19417 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0819 11:49:06.025119   19417 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0819 11:49:06.025157   19417 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0819 11:49:06.025196   19417 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0819 11:49:06.025226   19417 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0819 11:49:06.025261   19417 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0819 11:49:06.025307   19417 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0819 11:49:06.025346   19417 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0819 11:49:06.025385   19417 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0819 11:49:06.025409   19417 kubeadm.go:310] [certs] Using the existing "sa" key
	I0819 11:49:06.025438   19417 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0819 11:49:06.025460   19417 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0819 11:49:06.025490   19417 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0819 11:49:06.025526   19417 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0819 11:49:06.025557   19417 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0819 11:49:06.025618   19417 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0819 11:49:06.025671   19417 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0819 11:49:06.025694   19417 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0819 11:49:06.025726   19417 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0819 11:49:06.031871   19417 out.go:235]   - Booting up control plane ...
	I0819 11:49:06.031914   19417 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0819 11:49:06.031962   19417 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0819 11:49:06.031999   19417 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0819 11:49:06.032046   19417 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0819 11:49:06.032134   19417 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0819 11:49:06.032175   19417 kubeadm.go:310] [apiclient] All control plane components are healthy after 4.502520 seconds
	I0819 11:49:06.032232   19417 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0819 11:49:06.032304   19417 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0819 11:49:06.032341   19417 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0819 11:49:06.032440   19417 kubeadm.go:310] [mark-control-plane] Marking the node running-upgrade-409000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0819 11:49:06.032476   19417 kubeadm.go:310] [bootstrap-token] Using token: 25421g.u6qtiwyx3kaxk0p9
	I0819 11:49:06.034851   19417 out.go:235]   - Configuring RBAC rules ...
	I0819 11:49:06.034897   19417 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0819 11:49:06.034939   19417 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0819 11:49:06.035020   19417 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0819 11:49:06.035099   19417 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0819 11:49:06.035166   19417 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0819 11:49:06.035218   19417 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0819 11:49:06.035276   19417 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0819 11:49:06.035302   19417 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0819 11:49:06.035324   19417 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0819 11:49:06.035328   19417 kubeadm.go:310] 
	I0819 11:49:06.035360   19417 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0819 11:49:06.035366   19417 kubeadm.go:310] 
	I0819 11:49:06.035404   19417 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0819 11:49:06.035408   19417 kubeadm.go:310] 
	I0819 11:49:06.035422   19417 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0819 11:49:06.035456   19417 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0819 11:49:06.035482   19417 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0819 11:49:06.035486   19417 kubeadm.go:310] 
	I0819 11:49:06.035512   19417 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0819 11:49:06.035516   19417 kubeadm.go:310] 
	I0819 11:49:06.035544   19417 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0819 11:49:06.035547   19417 kubeadm.go:310] 
	I0819 11:49:06.035575   19417 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0819 11:49:06.035609   19417 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0819 11:49:06.035643   19417 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0819 11:49:06.035645   19417 kubeadm.go:310] 
	I0819 11:49:06.035684   19417 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0819 11:49:06.035719   19417 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0819 11:49:06.035721   19417 kubeadm.go:310] 
	I0819 11:49:06.035759   19417 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 25421g.u6qtiwyx3kaxk0p9 \
	I0819 11:49:06.035809   19417 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:de6b00bb5195e85ea9c628edaf5f990495686095194c39efa1f1f29b580598ae \
	I0819 11:49:06.035819   19417 kubeadm.go:310] 	--control-plane 
	I0819 11:49:06.035821   19417 kubeadm.go:310] 
	I0819 11:49:06.035861   19417 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0819 11:49:06.035863   19417 kubeadm.go:310] 
	I0819 11:49:06.035904   19417 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 25421g.u6qtiwyx3kaxk0p9 \
	I0819 11:49:06.035955   19417 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:de6b00bb5195e85ea9c628edaf5f990495686095194c39efa1f1f29b580598ae 
	I0819 11:49:06.035960   19417 cni.go:84] Creating CNI manager for ""
	I0819 11:49:06.035966   19417 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0819 11:49:06.048864   19417 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0819 11:49:06.052982   19417 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0819 11:49:06.056190   19417 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
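	The 496-byte /etc/cni/net.d/1-k8s.conflist copied here is not reproduced in the log. The sketch below writes an illustrative bridge conflist of the same general shape (standard CNI bridge and portmap plugins; the exact fields and subnet minikube ships may differ):

	    package main

	    import (
	        "log"
	        "os"
	    )

	    // Illustrative only: a typical bridge CNI chain, not the exact bytes
	    // minikube transferred above.
	    const bridgeConflist = `{
	      "cniVersion": "0.3.1",
	      "name": "bridge",
	      "plugins": [
	        {
	          "type": "bridge",
	          "bridge": "bridge",
	          "isGateway": true,
	          "ipMasq": true,
	          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
	        },
	        { "type": "portmap", "capabilities": { "portMappings": true } }
	      ]
	    }`

	    func main() {
	        // Requires root, like the `sudo mkdir -p /etc/cni/net.d` above.
	        if err := os.MkdirAll("/etc/cni/net.d", 0o755); err != nil {
	            log.Fatal(err)
	        }
	        if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(bridgeConflist), 0o644); err != nil {
	            log.Fatal(err)
	        }
	    }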
	I0819 11:49:06.060916   19417 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0819 11:49:06.060958   19417 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 11:49:06.060980   19417 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes running-upgrade-409000 minikube.k8s.io/updated_at=2024_08_19T11_49_06_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=61bea45f08282fbfcbc51a63f2fbf5fa5e7e26a8 minikube.k8s.io/name=running-upgrade-409000 minikube.k8s.io/primary=true
	I0819 11:49:06.103867   19417 kubeadm.go:1113] duration metric: took 42.938333ms to wait for elevateKubeSystemPrivileges
	I0819 11:49:06.103884   19417 ops.go:34] apiserver oom_adj: -16
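	The oom_adj probe a few lines up (`cat /proc/$(pgrep kube-apiserver)/oom_adj`) is what yields the -16 recorded here: the apiserver is tuned so the kernel's OOM killer prefers almost any other victim. The check, reproduced as a small Go program (a sketch, not minikube's code):

	    package main

	    import (
	        "fmt"
	        "os"
	        "os/exec"
	        "strings"
	    )

	    func main() {
	        out, err := exec.Command("pgrep", "kube-apiserver").Output()
	        if err != nil {
	            fmt.Println("no kube-apiserver process:", err)
	            os.Exit(1)
	        }
	        pid := strings.Fields(string(out))[0] // first match is enough for a sketch
	        adj, err := os.ReadFile("/proc/" + pid + "/oom_adj")
	        if err != nil {
	            fmt.Println(err)
	            os.Exit(1)
	        }
	        fmt.Println("apiserver oom_adj:", strings.TrimSpace(string(adj))) // expect -16
	    }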
	I0819 11:49:06.103970   19417 kubeadm.go:394] duration metric: took 4m11.751582167s to StartCluster
	I0819 11:49:06.103983   19417 settings.go:142] acquiring lock: {Name:mkd10d56bae48d75d53289d9920be83758fb5ae6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 11:49:06.104152   19417 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19423-17178/kubeconfig
	I0819 11:49:06.104563   19417 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19423-17178/kubeconfig: {Name:mkcd8a4d29cc5f324e197a69fe511a87d17c54d6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 11:49:06.104777   19417 start.go:235] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0819 11:49:06.104830   19417 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0819 11:49:06.104890   19417 addons.go:69] Setting storage-provisioner=true in profile "running-upgrade-409000"
	I0819 11:49:06.104900   19417 config.go:182] Loaded profile config "running-upgrade-409000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0819 11:49:06.104903   19417 addons.go:234] Setting addon storage-provisioner=true in "running-upgrade-409000"
	W0819 11:49:06.104907   19417 addons.go:243] addon storage-provisioner should already be in state true
	I0819 11:49:06.104918   19417 host.go:66] Checking if "running-upgrade-409000" exists ...
	I0819 11:49:06.104914   19417 addons.go:69] Setting default-storageclass=true in profile "running-upgrade-409000"
	I0819 11:49:06.104952   19417 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "running-upgrade-409000"
	I0819 11:49:06.105907   19417 kapi.go:59] client config for running-upgrade-409000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19423-17178/.minikube/profiles/running-upgrade-409000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19423-17178/.minikube/profiles/running-upgrade-409000/client.key", CAFile:"/Users/jenkins/minikube-integration/19423-17178/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x101cd1990), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0819 11:49:06.106038   19417 addons.go:234] Setting addon default-storageclass=true in "running-upgrade-409000"
	W0819 11:49:06.106058   19417 addons.go:243] addon default-storageclass should already be in state true
	I0819 11:49:06.106066   19417 host.go:66] Checking if "running-upgrade-409000" exists ...
	I0819 11:49:06.107909   19417 out.go:177] * Verifying Kubernetes components...
	I0819 11:49:06.108290   19417 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0819 11:49:06.112083   19417 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0819 11:49:06.112089   19417 sshutil.go:53] new ssh client: &{IP:localhost Port:53105 SSHKeyPath:/Users/jenkins/minikube-integration/19423-17178/.minikube/machines/running-upgrade-409000/id_rsa Username:docker}
	I0819 11:49:06.114923   19417 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0819 11:49:04.649447   19545 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:49:04.649482   19545 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:49:06.118914   19417 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 11:49:06.122894   19417 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0819 11:49:06.122901   19417 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0819 11:49:06.122907   19417 sshutil.go:53] new ssh client: &{IP:localhost Port:53105 SSHKeyPath:/Users/jenkins/minikube-integration/19423-17178/.minikube/machines/running-upgrade-409000/id_rsa Username:docker}
	I0819 11:49:06.190159   19417 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0819 11:49:06.195741   19417 api_server.go:52] waiting for apiserver process to appear ...
	I0819 11:49:06.195786   19417 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 11:49:06.199640   19417 api_server.go:72] duration metric: took 94.849917ms to wait for apiserver process to appear ...
	I0819 11:49:06.199648   19417 api_server.go:88] waiting for apiserver healthz status ...
	I0819 11:49:06.199654   19417 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:49:06.226096   19417 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0819 11:49:06.256007   19417 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0819 11:49:06.551595   19417 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0819 11:49:06.551607   19417 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0819 11:49:09.651761   19545 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:49:09.652203   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 11:49:09.685001   19545 logs.go:276] 2 containers: [af251a7e4dc2 04973e14da79]
	I0819 11:49:09.685137   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 11:49:09.704680   19545 logs.go:276] 2 containers: [605f9b171d46 07703ddc91e4]
	I0819 11:49:09.704778   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 11:49:09.718731   19545 logs.go:276] 1 containers: [088a7d76a3fd]
	I0819 11:49:09.718829   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 11:49:09.731460   19545 logs.go:276] 2 containers: [bdb8b0d63638 7a7ed811dead]
	I0819 11:49:09.731534   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 11:49:09.742526   19545 logs.go:276] 1 containers: [03dd95625e48]
	I0819 11:49:09.742597   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 11:49:09.754124   19545 logs.go:276] 2 containers: [d9195d59990e e935629bad41]
	I0819 11:49:09.754203   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 11:49:09.764968   19545 logs.go:276] 0 containers: []
	W0819 11:49:09.764984   19545 logs.go:278] No container was found matching "kindnet"
	I0819 11:49:09.765039   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 11:49:09.776441   19545 logs.go:276] 2 containers: [fd0be420d5f9 02af235bd51f]
	I0819 11:49:09.776462   19545 logs.go:123] Gathering logs for etcd [07703ddc91e4] ...
	I0819 11:49:09.776468   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 07703ddc91e4"
	I0819 11:49:09.792476   19545 logs.go:123] Gathering logs for kube-scheduler [7a7ed811dead] ...
	I0819 11:49:09.792485   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a7ed811dead"
	I0819 11:49:09.807756   19545 logs.go:123] Gathering logs for kube-proxy [03dd95625e48] ...
	I0819 11:49:09.807768   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03dd95625e48"
	I0819 11:49:09.819835   19545 logs.go:123] Gathering logs for coredns [088a7d76a3fd] ...
	I0819 11:49:09.819847   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 088a7d76a3fd"
	I0819 11:49:09.831004   19545 logs.go:123] Gathering logs for kube-controller-manager [e935629bad41] ...
	I0819 11:49:09.831022   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e935629bad41"
	I0819 11:49:09.846370   19545 logs.go:123] Gathering logs for Docker ...
	I0819 11:49:09.846383   19545 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 11:49:09.872374   19545 logs.go:123] Gathering logs for kubelet ...
	I0819 11:49:09.872383   19545 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 11:49:09.909495   19545 logs.go:123] Gathering logs for describe nodes ...
	I0819 11:49:09.909507   19545 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 11:49:10.016164   19545 logs.go:123] Gathering logs for kube-apiserver [04973e14da79] ...
	I0819 11:49:10.016179   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 04973e14da79"
	I0819 11:49:10.046241   19545 logs.go:123] Gathering logs for container status ...
	I0819 11:49:10.046251   19545 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 11:49:10.060208   19545 logs.go:123] Gathering logs for dmesg ...
	I0819 11:49:10.060222   19545 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 11:49:10.064474   19545 logs.go:123] Gathering logs for kube-scheduler [bdb8b0d63638] ...
	I0819 11:49:10.064483   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bdb8b0d63638"
	I0819 11:49:10.076080   19545 logs.go:123] Gathering logs for kube-controller-manager [d9195d59990e] ...
	I0819 11:49:10.076092   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9195d59990e"
	I0819 11:49:10.094365   19545 logs.go:123] Gathering logs for storage-provisioner [02af235bd51f] ...
	I0819 11:49:10.094379   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 02af235bd51f"
	I0819 11:49:10.106499   19545 logs.go:123] Gathering logs for kube-apiserver [af251a7e4dc2] ...
	I0819 11:49:10.106511   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af251a7e4dc2"
	I0819 11:49:10.120419   19545 logs.go:123] Gathering logs for etcd [605f9b171d46] ...
	I0819 11:49:10.120432   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 605f9b171d46"
	I0819 11:49:10.134552   19545 logs.go:123] Gathering logs for storage-provisioner [fd0be420d5f9] ...
	I0819 11:49:10.134563   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd0be420d5f9"
	I0819 11:49:12.647593   19545 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:49:11.201411   19417 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:49:11.201463   19417 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:49:17.649450   19545 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:49:17.649531   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 11:49:17.660632   19545 logs.go:276] 2 containers: [af251a7e4dc2 04973e14da79]
	I0819 11:49:17.660704   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 11:49:17.671847   19545 logs.go:276] 2 containers: [605f9b171d46 07703ddc91e4]
	I0819 11:49:17.671926   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 11:49:17.682541   19545 logs.go:276] 1 containers: [088a7d76a3fd]
	I0819 11:49:17.682612   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 11:49:17.693650   19545 logs.go:276] 2 containers: [bdb8b0d63638 7a7ed811dead]
	I0819 11:49:17.693717   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 11:49:17.704207   19545 logs.go:276] 1 containers: [03dd95625e48]
	I0819 11:49:17.704276   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 11:49:17.714708   19545 logs.go:276] 2 containers: [d9195d59990e e935629bad41]
	I0819 11:49:17.714778   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 11:49:17.729684   19545 logs.go:276] 0 containers: []
	W0819 11:49:17.729695   19545 logs.go:278] No container was found matching "kindnet"
	I0819 11:49:17.729753   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 11:49:17.740971   19545 logs.go:276] 2 containers: [fd0be420d5f9 02af235bd51f]
	I0819 11:49:17.740990   19545 logs.go:123] Gathering logs for kube-apiserver [af251a7e4dc2] ...
	I0819 11:49:17.740995   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af251a7e4dc2"
	I0819 11:49:17.755518   19545 logs.go:123] Gathering logs for etcd [605f9b171d46] ...
	I0819 11:49:17.755531   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 605f9b171d46"
	I0819 11:49:17.771109   19545 logs.go:123] Gathering logs for coredns [088a7d76a3fd] ...
	I0819 11:49:17.771119   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 088a7d76a3fd"
	I0819 11:49:17.783085   19545 logs.go:123] Gathering logs for kube-scheduler [bdb8b0d63638] ...
	I0819 11:49:17.783096   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bdb8b0d63638"
	I0819 11:49:17.801763   19545 logs.go:123] Gathering logs for kube-proxy [03dd95625e48] ...
	I0819 11:49:17.801774   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03dd95625e48"
	I0819 11:49:17.813969   19545 logs.go:123] Gathering logs for kube-controller-manager [d9195d59990e] ...
	I0819 11:49:17.813979   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9195d59990e"
	I0819 11:49:17.831394   19545 logs.go:123] Gathering logs for storage-provisioner [02af235bd51f] ...
	I0819 11:49:17.831406   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 02af235bd51f"
	I0819 11:49:17.848914   19545 logs.go:123] Gathering logs for dmesg ...
	I0819 11:49:17.848927   19545 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 11:49:17.853561   19545 logs.go:123] Gathering logs for kube-apiserver [04973e14da79] ...
	I0819 11:49:17.853567   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 04973e14da79"
	I0819 11:49:17.877751   19545 logs.go:123] Gathering logs for etcd [07703ddc91e4] ...
	I0819 11:49:17.877770   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 07703ddc91e4"
	I0819 11:49:17.892655   19545 logs.go:123] Gathering logs for kube-scheduler [7a7ed811dead] ...
	I0819 11:49:17.892667   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a7ed811dead"
	I0819 11:49:17.907730   19545 logs.go:123] Gathering logs for storage-provisioner [fd0be420d5f9] ...
	I0819 11:49:17.907740   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd0be420d5f9"
	I0819 11:49:17.919730   19545 logs.go:123] Gathering logs for Docker ...
	I0819 11:49:17.919742   19545 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 11:49:17.945243   19545 logs.go:123] Gathering logs for container status ...
	I0819 11:49:17.945253   19545 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 11:49:17.958414   19545 logs.go:123] Gathering logs for kubelet ...
	I0819 11:49:17.958426   19545 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 11:49:17.998859   19545 logs.go:123] Gathering logs for describe nodes ...
	I0819 11:49:17.998873   19545 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 11:49:18.037593   19545 logs.go:123] Gathering logs for kube-controller-manager [e935629bad41] ...
	I0819 11:49:18.037606   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e935629bad41"
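[Annotation] Each time the healthz poll times out, the test falls into the same diagnostics pass seen above: enumerate containers per component with `docker ps -a --filter=name=k8s_<component> --format={{.ID}}`, then tail the last 400 lines of each with `docker logs --tail 400 <id>` (plus dmesg, journalctl, and `kubectl describe nodes`). A rough local approximation of that pass — minikube actually runs these commands inside the VM over SSH (ssh_runner.go), so this sketch is illustrative only:

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerIDs lists all container IDs (running or exited) whose name
// matches the k8s_<component> prefix, as logs.go:276 reports.
func containerIDs(component string) ([]string, error) {
	out, err := exec.Command("docker", "ps", "-a",
		"--filter", "name=k8s_"+component, "--format", "{{.ID}}").Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	for _, c := range []string{"kube-apiserver", "etcd", "coredns",
		"kube-scheduler", "kube-proxy", "kube-controller-manager",
		"kindnet", "storage-provisioner"} {
		ids, err := containerIDs(c)
		if err != nil || len(ids) == 0 {
			// Matches the warning seen for kindnet in the log above.
			fmt.Printf("No container was found matching %q\n", c)
			continue
		}
		for _, id := range ids {
			// Tail the last 400 lines of each container, as in the log.
			logs, _ := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
			fmt.Printf("=== %s [%s] ===\n%s", c, id, logs)
		}
	}
}
```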
	I0819 11:49:16.201759   19417 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:49:16.201778   19417 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:49:20.558348   19545 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:49:21.202502   19417 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:49:21.202521   19417 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:49:25.502307   19545 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:49:25.502567   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 11:49:25.529192   19545 logs.go:276] 2 containers: [af251a7e4dc2 04973e14da79]
	I0819 11:49:25.529309   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 11:49:25.545605   19545 logs.go:276] 2 containers: [605f9b171d46 07703ddc91e4]
	I0819 11:49:25.545688   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 11:49:25.558625   19545 logs.go:276] 1 containers: [088a7d76a3fd]
	I0819 11:49:25.558699   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 11:49:25.570555   19545 logs.go:276] 2 containers: [bdb8b0d63638 7a7ed811dead]
	I0819 11:49:25.570628   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 11:49:25.581183   19545 logs.go:276] 1 containers: [03dd95625e48]
	I0819 11:49:25.581255   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 11:49:25.592349   19545 logs.go:276] 2 containers: [d9195d59990e e935629bad41]
	I0819 11:49:25.592424   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 11:49:25.602844   19545 logs.go:276] 0 containers: []
	W0819 11:49:25.602858   19545 logs.go:278] No container was found matching "kindnet"
	I0819 11:49:25.602912   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 11:49:25.613531   19545 logs.go:276] 2 containers: [fd0be420d5f9 02af235bd51f]
	I0819 11:49:25.613552   19545 logs.go:123] Gathering logs for container status ...
	I0819 11:49:25.613558   19545 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 11:49:25.625699   19545 logs.go:123] Gathering logs for dmesg ...
	I0819 11:49:25.625710   19545 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 11:49:25.629900   19545 logs.go:123] Gathering logs for storage-provisioner [02af235bd51f] ...
	I0819 11:49:25.629907   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 02af235bd51f"
	I0819 11:49:25.641314   19545 logs.go:123] Gathering logs for kube-apiserver [af251a7e4dc2] ...
	I0819 11:49:25.641325   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af251a7e4dc2"
	I0819 11:49:25.655496   19545 logs.go:123] Gathering logs for etcd [07703ddc91e4] ...
	I0819 11:49:25.655507   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 07703ddc91e4"
	I0819 11:49:25.670268   19545 logs.go:123] Gathering logs for kube-proxy [03dd95625e48] ...
	I0819 11:49:25.670280   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03dd95625e48"
	I0819 11:49:25.682846   19545 logs.go:123] Gathering logs for kubelet ...
	I0819 11:49:25.682857   19545 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 11:49:25.722454   19545 logs.go:123] Gathering logs for describe nodes ...
	I0819 11:49:25.722468   19545 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 11:49:25.757058   19545 logs.go:123] Gathering logs for storage-provisioner [fd0be420d5f9] ...
	I0819 11:49:25.757068   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd0be420d5f9"
	I0819 11:49:25.769043   19545 logs.go:123] Gathering logs for Docker ...
	I0819 11:49:25.769054   19545 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 11:49:25.792909   19545 logs.go:123] Gathering logs for coredns [088a7d76a3fd] ...
	I0819 11:49:25.792919   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 088a7d76a3fd"
	I0819 11:49:25.804593   19545 logs.go:123] Gathering logs for kube-controller-manager [d9195d59990e] ...
	I0819 11:49:25.804604   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9195d59990e"
	I0819 11:49:25.822621   19545 logs.go:123] Gathering logs for kube-scheduler [bdb8b0d63638] ...
	I0819 11:49:25.822635   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bdb8b0d63638"
	I0819 11:49:25.833993   19545 logs.go:123] Gathering logs for kube-scheduler [7a7ed811dead] ...
	I0819 11:49:25.834004   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a7ed811dead"
	I0819 11:49:25.849875   19545 logs.go:123] Gathering logs for kube-controller-manager [e935629bad41] ...
	I0819 11:49:25.849888   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e935629bad41"
	I0819 11:49:25.864054   19545 logs.go:123] Gathering logs for kube-apiserver [04973e14da79] ...
	I0819 11:49:25.864067   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 04973e14da79"
	I0819 11:49:25.889325   19545 logs.go:123] Gathering logs for etcd [605f9b171d46] ...
	I0819 11:49:25.889344   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 605f9b171d46"
	I0819 11:49:26.145039   19417 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:49:26.145086   19417 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:49:28.406079   19545 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:49:31.145851   19417 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:49:31.145871   19417 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:49:36.146899   19417 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:49:36.146939   19417 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W0819 11:49:36.494582   19417 out.go:270] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I0819 11:49:36.498480   19417 out.go:177] * Enabled addons: storage-provisioner
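[Annotation] The warning above shows why default-storageclass failed while storage-provisioner was still reported enabled: the addon's callback has to list StorageClasses through the apiserver on 10.0.2.15:8443, which never became healthy, so the request dies with "dial tcp 10.0.2.15:8443: i/o timeout". A minimal client-go sketch of the failing call (assumes a recent client-go; the kubeconfig path is taken from the log's kubectl invocations, not from this code path):

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// With the apiserver down, this List fails the same way as the log:
	// Get ".../apis/storage.k8s.io/v1/storageclasses": dial tcp ...: i/o timeout
	scs, err := cs.StorageV1().StorageClasses().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		fmt.Println("Error listing StorageClasses:", err)
		return
	}
	for _, sc := range scs.Items {
		fmt.Println(sc.Name)
	}
}
```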
	I0819 11:49:33.408288   19545 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:49:33.408410   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 11:49:33.419490   19545 logs.go:276] 2 containers: [af251a7e4dc2 04973e14da79]
	I0819 11:49:33.419565   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 11:49:33.430351   19545 logs.go:276] 2 containers: [605f9b171d46 07703ddc91e4]
	I0819 11:49:33.430417   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 11:49:33.440406   19545 logs.go:276] 1 containers: [088a7d76a3fd]
	I0819 11:49:33.440473   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 11:49:33.451152   19545 logs.go:276] 2 containers: [bdb8b0d63638 7a7ed811dead]
	I0819 11:49:33.451219   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 11:49:33.461756   19545 logs.go:276] 1 containers: [03dd95625e48]
	I0819 11:49:33.461826   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 11:49:33.476343   19545 logs.go:276] 2 containers: [d9195d59990e e935629bad41]
	I0819 11:49:33.476415   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 11:49:33.488645   19545 logs.go:276] 0 containers: []
	W0819 11:49:33.488661   19545 logs.go:278] No container was found matching "kindnet"
	I0819 11:49:33.488720   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 11:49:33.499162   19545 logs.go:276] 2 containers: [fd0be420d5f9 02af235bd51f]
	I0819 11:49:33.499179   19545 logs.go:123] Gathering logs for etcd [605f9b171d46] ...
	I0819 11:49:33.499186   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 605f9b171d46"
	I0819 11:49:33.513267   19545 logs.go:123] Gathering logs for coredns [088a7d76a3fd] ...
	I0819 11:49:33.513277   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 088a7d76a3fd"
	I0819 11:49:33.524112   19545 logs.go:123] Gathering logs for storage-provisioner [fd0be420d5f9] ...
	I0819 11:49:33.524123   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd0be420d5f9"
	I0819 11:49:33.535558   19545 logs.go:123] Gathering logs for dmesg ...
	I0819 11:49:33.535568   19545 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 11:49:33.540095   19545 logs.go:123] Gathering logs for describe nodes ...
	I0819 11:49:33.540103   19545 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 11:49:33.576521   19545 logs.go:123] Gathering logs for kube-scheduler [bdb8b0d63638] ...
	I0819 11:49:33.576536   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bdb8b0d63638"
	I0819 11:49:33.588643   19545 logs.go:123] Gathering logs for kube-proxy [03dd95625e48] ...
	I0819 11:49:33.588657   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03dd95625e48"
	I0819 11:49:33.600406   19545 logs.go:123] Gathering logs for kube-controller-manager [d9195d59990e] ...
	I0819 11:49:33.600416   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9195d59990e"
	I0819 11:49:33.618036   19545 logs.go:123] Gathering logs for container status ...
	I0819 11:49:33.618046   19545 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 11:49:33.631191   19545 logs.go:123] Gathering logs for kubelet ...
	I0819 11:49:33.631203   19545 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 11:49:33.668497   19545 logs.go:123] Gathering logs for kube-apiserver [af251a7e4dc2] ...
	I0819 11:49:33.668509   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af251a7e4dc2"
	I0819 11:49:33.683443   19545 logs.go:123] Gathering logs for kube-apiserver [04973e14da79] ...
	I0819 11:49:33.683468   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 04973e14da79"
	I0819 11:49:33.708300   19545 logs.go:123] Gathering logs for kube-controller-manager [e935629bad41] ...
	I0819 11:49:33.708317   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e935629bad41"
	I0819 11:49:33.723524   19545 logs.go:123] Gathering logs for storage-provisioner [02af235bd51f] ...
	I0819 11:49:33.723534   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 02af235bd51f"
	I0819 11:49:33.735023   19545 logs.go:123] Gathering logs for etcd [07703ddc91e4] ...
	I0819 11:49:33.735033   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 07703ddc91e4"
	I0819 11:49:33.749648   19545 logs.go:123] Gathering logs for kube-scheduler [7a7ed811dead] ...
	I0819 11:49:33.749659   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a7ed811dead"
	I0819 11:49:33.764359   19545 logs.go:123] Gathering logs for Docker ...
	I0819 11:49:33.764367   19545 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 11:49:36.288507   19545 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:49:36.505483   19417 addons.go:510] duration metric: took 30.458901041s for enable addons: enabled=[storage-provisioner]
	I0819 11:49:41.290659   19545 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:49:41.290911   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 11:49:41.305794   19545 logs.go:276] 2 containers: [af251a7e4dc2 04973e14da79]
	I0819 11:49:41.305870   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 11:49:41.321163   19545 logs.go:276] 2 containers: [605f9b171d46 07703ddc91e4]
	I0819 11:49:41.321225   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 11:49:41.331353   19545 logs.go:276] 1 containers: [088a7d76a3fd]
	I0819 11:49:41.331422   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 11:49:41.341886   19545 logs.go:276] 2 containers: [bdb8b0d63638 7a7ed811dead]
	I0819 11:49:41.341955   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 11:49:41.351957   19545 logs.go:276] 1 containers: [03dd95625e48]
	I0819 11:49:41.352024   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 11:49:41.362176   19545 logs.go:276] 2 containers: [d9195d59990e e935629bad41]
	I0819 11:49:41.362243   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 11:49:41.381150   19545 logs.go:276] 0 containers: []
	W0819 11:49:41.381163   19545 logs.go:278] No container was found matching "kindnet"
	I0819 11:49:41.381220   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 11:49:41.393402   19545 logs.go:276] 2 containers: [fd0be420d5f9 02af235bd51f]
	I0819 11:49:41.393421   19545 logs.go:123] Gathering logs for kube-apiserver [af251a7e4dc2] ...
	I0819 11:49:41.393426   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af251a7e4dc2"
	I0819 11:49:41.407362   19545 logs.go:123] Gathering logs for kube-apiserver [04973e14da79] ...
	I0819 11:49:41.407374   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 04973e14da79"
	I0819 11:49:41.434731   19545 logs.go:123] Gathering logs for storage-provisioner [fd0be420d5f9] ...
	I0819 11:49:41.434744   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd0be420d5f9"
	I0819 11:49:41.446360   19545 logs.go:123] Gathering logs for Docker ...
	I0819 11:49:41.446378   19545 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 11:49:41.471609   19545 logs.go:123] Gathering logs for container status ...
	I0819 11:49:41.471618   19545 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 11:49:41.484691   19545 logs.go:123] Gathering logs for kubelet ...
	I0819 11:49:41.484704   19545 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 11:49:41.523307   19545 logs.go:123] Gathering logs for kube-proxy [03dd95625e48] ...
	I0819 11:49:41.523316   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03dd95625e48"
	I0819 11:49:41.539354   19545 logs.go:123] Gathering logs for storage-provisioner [02af235bd51f] ...
	I0819 11:49:41.539366   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 02af235bd51f"
	I0819 11:49:41.551116   19545 logs.go:123] Gathering logs for etcd [07703ddc91e4] ...
	I0819 11:49:41.551130   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 07703ddc91e4"
	I0819 11:49:41.566079   19545 logs.go:123] Gathering logs for kube-scheduler [bdb8b0d63638] ...
	I0819 11:49:41.566089   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bdb8b0d63638"
	I0819 11:49:41.577202   19545 logs.go:123] Gathering logs for kube-scheduler [7a7ed811dead] ...
	I0819 11:49:41.577213   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a7ed811dead"
	I0819 11:49:41.592527   19545 logs.go:123] Gathering logs for kube-controller-manager [d9195d59990e] ...
	I0819 11:49:41.592538   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9195d59990e"
	I0819 11:49:41.609842   19545 logs.go:123] Gathering logs for kube-controller-manager [e935629bad41] ...
	I0819 11:49:41.609856   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e935629bad41"
	I0819 11:49:41.624620   19545 logs.go:123] Gathering logs for dmesg ...
	I0819 11:49:41.624630   19545 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 11:49:41.628879   19545 logs.go:123] Gathering logs for etcd [605f9b171d46] ...
	I0819 11:49:41.628886   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 605f9b171d46"
	I0819 11:49:41.642903   19545 logs.go:123] Gathering logs for coredns [088a7d76a3fd] ...
	I0819 11:49:41.642913   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 088a7d76a3fd"
	I0819 11:49:41.653996   19545 logs.go:123] Gathering logs for describe nodes ...
	I0819 11:49:41.654008   19545 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 11:49:41.148014   19417 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:49:41.148069   19417 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:49:44.196826   19545 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:49:46.149638   19417 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:49:46.149734   19417 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:49:49.197686   19545 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:49:49.198052   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 11:49:49.230475   19545 logs.go:276] 2 containers: [af251a7e4dc2 04973e14da79]
	I0819 11:49:49.230598   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 11:49:49.252413   19545 logs.go:276] 2 containers: [605f9b171d46 07703ddc91e4]
	I0819 11:49:49.252490   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 11:49:49.266009   19545 logs.go:276] 1 containers: [088a7d76a3fd]
	I0819 11:49:49.266090   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 11:49:49.277877   19545 logs.go:276] 2 containers: [bdb8b0d63638 7a7ed811dead]
	I0819 11:49:49.277948   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 11:49:49.290538   19545 logs.go:276] 1 containers: [03dd95625e48]
	I0819 11:49:49.290606   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 11:49:49.301292   19545 logs.go:276] 2 containers: [d9195d59990e e935629bad41]
	I0819 11:49:49.301358   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 11:49:49.311620   19545 logs.go:276] 0 containers: []
	W0819 11:49:49.311630   19545 logs.go:278] No container was found matching "kindnet"
	I0819 11:49:49.311680   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 11:49:49.326326   19545 logs.go:276] 2 containers: [fd0be420d5f9 02af235bd51f]
	I0819 11:49:49.326346   19545 logs.go:123] Gathering logs for describe nodes ...
	I0819 11:49:49.326352   19545 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 11:49:49.363242   19545 logs.go:123] Gathering logs for kube-apiserver [04973e14da79] ...
	I0819 11:49:49.363256   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 04973e14da79"
	I0819 11:49:49.388070   19545 logs.go:123] Gathering logs for kube-scheduler [7a7ed811dead] ...
	I0819 11:49:49.388084   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a7ed811dead"
	I0819 11:49:49.404241   19545 logs.go:123] Gathering logs for kube-controller-manager [e935629bad41] ...
	I0819 11:49:49.404251   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e935629bad41"
	I0819 11:49:49.418795   19545 logs.go:123] Gathering logs for container status ...
	I0819 11:49:49.418806   19545 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 11:49:49.431178   19545 logs.go:123] Gathering logs for etcd [605f9b171d46] ...
	I0819 11:49:49.431189   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 605f9b171d46"
	I0819 11:49:49.445883   19545 logs.go:123] Gathering logs for dmesg ...
	I0819 11:49:49.445896   19545 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 11:49:49.450251   19545 logs.go:123] Gathering logs for kube-apiserver [af251a7e4dc2] ...
	I0819 11:49:49.450262   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af251a7e4dc2"
	I0819 11:49:49.464557   19545 logs.go:123] Gathering logs for etcd [07703ddc91e4] ...
	I0819 11:49:49.464569   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 07703ddc91e4"
	I0819 11:49:49.478962   19545 logs.go:123] Gathering logs for coredns [088a7d76a3fd] ...
	I0819 11:49:49.478972   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 088a7d76a3fd"
	I0819 11:49:49.490838   19545 logs.go:123] Gathering logs for kube-scheduler [bdb8b0d63638] ...
	I0819 11:49:49.490849   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bdb8b0d63638"
	I0819 11:49:49.503565   19545 logs.go:123] Gathering logs for kube-proxy [03dd95625e48] ...
	I0819 11:49:49.503575   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03dd95625e48"
	I0819 11:49:49.515218   19545 logs.go:123] Gathering logs for kube-controller-manager [d9195d59990e] ...
	I0819 11:49:49.515229   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9195d59990e"
	I0819 11:49:49.538923   19545 logs.go:123] Gathering logs for storage-provisioner [fd0be420d5f9] ...
	I0819 11:49:49.538937   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd0be420d5f9"
	I0819 11:49:49.550683   19545 logs.go:123] Gathering logs for storage-provisioner [02af235bd51f] ...
	I0819 11:49:49.550693   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 02af235bd51f"
	I0819 11:49:49.562295   19545 logs.go:123] Gathering logs for kubelet ...
	I0819 11:49:49.562307   19545 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 11:49:49.600427   19545 logs.go:123] Gathering logs for Docker ...
	I0819 11:49:49.600440   19545 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 11:49:52.126880   19545 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:49:51.151681   19417 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:49:51.151734   19417 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:49:57.129337   19545 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:49:57.129680   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 11:49:57.165775   19545 logs.go:276] 2 containers: [af251a7e4dc2 04973e14da79]
	I0819 11:49:57.165901   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 11:49:57.184911   19545 logs.go:276] 2 containers: [605f9b171d46 07703ddc91e4]
	I0819 11:49:57.185007   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 11:49:57.201071   19545 logs.go:276] 1 containers: [088a7d76a3fd]
	I0819 11:49:57.201146   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 11:49:57.213783   19545 logs.go:276] 2 containers: [bdb8b0d63638 7a7ed811dead]
	I0819 11:49:57.213849   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 11:49:57.225267   19545 logs.go:276] 1 containers: [03dd95625e48]
	I0819 11:49:57.225334   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 11:49:57.236589   19545 logs.go:276] 2 containers: [d9195d59990e e935629bad41]
	I0819 11:49:57.236659   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 11:49:57.252615   19545 logs.go:276] 0 containers: []
	W0819 11:49:57.252631   19545 logs.go:278] No container was found matching "kindnet"
	I0819 11:49:57.252689   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 11:49:57.271552   19545 logs.go:276] 2 containers: [fd0be420d5f9 02af235bd51f]
	I0819 11:49:57.271571   19545 logs.go:123] Gathering logs for coredns [088a7d76a3fd] ...
	I0819 11:49:57.271578   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 088a7d76a3fd"
	I0819 11:49:57.284519   19545 logs.go:123] Gathering logs for etcd [07703ddc91e4] ...
	I0819 11:49:57.284531   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 07703ddc91e4"
	I0819 11:49:57.299228   19545 logs.go:123] Gathering logs for kube-apiserver [04973e14da79] ...
	I0819 11:49:57.299239   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 04973e14da79"
	I0819 11:49:57.323890   19545 logs.go:123] Gathering logs for kube-apiserver [af251a7e4dc2] ...
	I0819 11:49:57.323901   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af251a7e4dc2"
	I0819 11:49:57.338083   19545 logs.go:123] Gathering logs for kube-proxy [03dd95625e48] ...
	I0819 11:49:57.338093   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03dd95625e48"
	I0819 11:49:57.354020   19545 logs.go:123] Gathering logs for storage-provisioner [02af235bd51f] ...
	I0819 11:49:57.354032   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 02af235bd51f"
	I0819 11:49:57.365482   19545 logs.go:123] Gathering logs for describe nodes ...
	I0819 11:49:57.365493   19545 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 11:49:57.400867   19545 logs.go:123] Gathering logs for dmesg ...
	I0819 11:49:57.400880   19545 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 11:49:57.405080   19545 logs.go:123] Gathering logs for etcd [605f9b171d46] ...
	I0819 11:49:57.405089   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 605f9b171d46"
	I0819 11:49:57.419198   19545 logs.go:123] Gathering logs for kube-scheduler [bdb8b0d63638] ...
	I0819 11:49:57.419210   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bdb8b0d63638"
	I0819 11:49:57.431397   19545 logs.go:123] Gathering logs for kube-scheduler [7a7ed811dead] ...
	I0819 11:49:57.431408   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a7ed811dead"
	I0819 11:49:57.447172   19545 logs.go:123] Gathering logs for kube-controller-manager [d9195d59990e] ...
	I0819 11:49:57.447184   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9195d59990e"
	I0819 11:49:57.467015   19545 logs.go:123] Gathering logs for kube-controller-manager [e935629bad41] ...
	I0819 11:49:57.467026   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e935629bad41"
	I0819 11:49:57.481414   19545 logs.go:123] Gathering logs for storage-provisioner [fd0be420d5f9] ...
	I0819 11:49:57.481424   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd0be420d5f9"
	I0819 11:49:57.493287   19545 logs.go:123] Gathering logs for kubelet ...
	I0819 11:49:57.493298   19545 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 11:49:57.529801   19545 logs.go:123] Gathering logs for container status ...
	I0819 11:49:57.529813   19545 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 11:49:57.542088   19545 logs.go:123] Gathering logs for Docker ...
	I0819 11:49:57.542100   19545 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 11:49:56.153949   19417 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:49:56.153998   19417 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:50:00.067795   19545 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:50:01.156242   19417 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:50:01.156285   19417 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:50:05.070204   19545 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:50:05.070753   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 11:50:05.106629   19545 logs.go:276] 2 containers: [af251a7e4dc2 04973e14da79]
	I0819 11:50:05.106769   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 11:50:05.128110   19545 logs.go:276] 2 containers: [605f9b171d46 07703ddc91e4]
	I0819 11:50:05.128206   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 11:50:05.142966   19545 logs.go:276] 1 containers: [088a7d76a3fd]
	I0819 11:50:05.143042   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 11:50:05.155610   19545 logs.go:276] 2 containers: [bdb8b0d63638 7a7ed811dead]
	I0819 11:50:05.155682   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 11:50:05.166637   19545 logs.go:276] 1 containers: [03dd95625e48]
	I0819 11:50:05.166703   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 11:50:05.179388   19545 logs.go:276] 2 containers: [d9195d59990e e935629bad41]
	I0819 11:50:05.179455   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 11:50:05.189952   19545 logs.go:276] 0 containers: []
	W0819 11:50:05.189970   19545 logs.go:278] No container was found matching "kindnet"
	I0819 11:50:05.190032   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 11:50:05.200535   19545 logs.go:276] 2 containers: [fd0be420d5f9 02af235bd51f]
	I0819 11:50:05.200553   19545 logs.go:123] Gathering logs for kubelet ...
	I0819 11:50:05.200559   19545 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 11:50:05.237966   19545 logs.go:123] Gathering logs for describe nodes ...
	I0819 11:50:05.237978   19545 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 11:50:05.272764   19545 logs.go:123] Gathering logs for etcd [605f9b171d46] ...
	I0819 11:50:05.272776   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 605f9b171d46"
	I0819 11:50:05.296154   19545 logs.go:123] Gathering logs for etcd [07703ddc91e4] ...
	I0819 11:50:05.296167   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 07703ddc91e4"
	I0819 11:50:05.313304   19545 logs.go:123] Gathering logs for coredns [088a7d76a3fd] ...
	I0819 11:50:05.313315   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 088a7d76a3fd"
	I0819 11:50:05.326953   19545 logs.go:123] Gathering logs for kube-scheduler [bdb8b0d63638] ...
	I0819 11:50:05.326965   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bdb8b0d63638"
	I0819 11:50:05.339271   19545 logs.go:123] Gathering logs for kube-controller-manager [e935629bad41] ...
	I0819 11:50:05.339285   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e935629bad41"
	I0819 11:50:05.353561   19545 logs.go:123] Gathering logs for Docker ...
	I0819 11:50:05.353571   19545 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 11:50:05.378654   19545 logs.go:123] Gathering logs for dmesg ...
	I0819 11:50:05.378664   19545 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 11:50:05.383361   19545 logs.go:123] Gathering logs for kube-apiserver [af251a7e4dc2] ...
	I0819 11:50:05.383369   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af251a7e4dc2"
	I0819 11:50:05.398390   19545 logs.go:123] Gathering logs for kube-apiserver [04973e14da79] ...
	I0819 11:50:05.398400   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 04973e14da79"
	I0819 11:50:05.423452   19545 logs.go:123] Gathering logs for container status ...
	I0819 11:50:05.423464   19545 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 11:50:05.435079   19545 logs.go:123] Gathering logs for kube-proxy [03dd95625e48] ...
	I0819 11:50:05.435090   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03dd95625e48"
	I0819 11:50:05.448483   19545 logs.go:123] Gathering logs for storage-provisioner [fd0be420d5f9] ...
	I0819 11:50:05.448496   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd0be420d5f9"
	I0819 11:50:05.460866   19545 logs.go:123] Gathering logs for kube-scheduler [7a7ed811dead] ...
	I0819 11:50:05.460880   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a7ed811dead"
	I0819 11:50:05.482202   19545 logs.go:123] Gathering logs for kube-controller-manager [d9195d59990e] ...
	I0819 11:50:05.482213   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9195d59990e"
	I0819 11:50:05.500169   19545 logs.go:123] Gathering logs for storage-provisioner [02af235bd51f] ...
	I0819 11:50:05.500180   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 02af235bd51f"
	I0819 11:50:06.158431   19417 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:50:06.158532   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 11:50:06.171066   19417 logs.go:276] 1 containers: [45f3dc60aedc]
	I0819 11:50:06.171141   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 11:50:06.181897   19417 logs.go:276] 1 containers: [4122edd3dc51]
	I0819 11:50:06.181957   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 11:50:06.192431   19417 logs.go:276] 2 containers: [db6dffb8b3ea 3d2d44fde489]
	I0819 11:50:06.192523   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 11:50:06.206840   19417 logs.go:276] 1 containers: [2cfe471024b0]
	I0819 11:50:06.206917   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 11:50:06.217931   19417 logs.go:276] 1 containers: [26ea981cbd0c]
	I0819 11:50:06.218001   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 11:50:06.231135   19417 logs.go:276] 1 containers: [b3325243ca65]
	I0819 11:50:06.231206   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 11:50:06.253362   19417 logs.go:276] 0 containers: []
	W0819 11:50:06.253375   19417 logs.go:278] No container was found matching "kindnet"
	I0819 11:50:06.253434   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 11:50:06.264298   19417 logs.go:276] 1 containers: [74c6a87ae3c8]
	I0819 11:50:06.264315   19417 logs.go:123] Gathering logs for kubelet ...
	I0819 11:50:06.264320   19417 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 11:50:06.302210   19417 logs.go:123] Gathering logs for dmesg ...
	I0819 11:50:06.302222   19417 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 11:50:06.307233   19417 logs.go:123] Gathering logs for describe nodes ...
	I0819 11:50:06.307239   19417 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 11:50:06.342943   19417 logs.go:123] Gathering logs for kube-apiserver [45f3dc60aedc] ...
	I0819 11:50:06.342956   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45f3dc60aedc"
	I0819 11:50:06.358007   19417 logs.go:123] Gathering logs for coredns [db6dffb8b3ea] ...
	I0819 11:50:06.358022   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db6dffb8b3ea"
	I0819 11:50:06.371600   19417 logs.go:123] Gathering logs for kube-scheduler [2cfe471024b0] ...
	I0819 11:50:06.371616   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2cfe471024b0"
	I0819 11:50:06.387755   19417 logs.go:123] Gathering logs for kube-proxy [26ea981cbd0c] ...
	I0819 11:50:06.387769   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 26ea981cbd0c"
	I0819 11:50:06.402086   19417 logs.go:123] Gathering logs for storage-provisioner [74c6a87ae3c8] ...
	I0819 11:50:06.402098   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74c6a87ae3c8"
	I0819 11:50:06.413256   19417 logs.go:123] Gathering logs for Docker ...
	I0819 11:50:06.413271   19417 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 11:50:06.436582   19417 logs.go:123] Gathering logs for container status ...
	I0819 11:50:06.436589   19417 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 11:50:06.449255   19417 logs.go:123] Gathering logs for etcd [4122edd3dc51] ...
	I0819 11:50:06.449271   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4122edd3dc51"
	I0819 11:50:06.464301   19417 logs.go:123] Gathering logs for coredns [3d2d44fde489] ...
	I0819 11:50:06.464317   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3d2d44fde489"
	I0819 11:50:06.481343   19417 logs.go:123] Gathering logs for kube-controller-manager [b3325243ca65] ...
	I0819 11:50:06.481354   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3325243ca65"
	I0819 11:50:08.013095   19545 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:50:09.001149   19417 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:50:13.015528   19545 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:50:13.015687   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 11:50:13.030173   19545 logs.go:276] 2 containers: [af251a7e4dc2 04973e14da79]
	I0819 11:50:13.030259   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 11:50:13.041712   19545 logs.go:276] 2 containers: [605f9b171d46 07703ddc91e4]
	I0819 11:50:13.041790   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 11:50:13.058435   19545 logs.go:276] 1 containers: [088a7d76a3fd]
	I0819 11:50:13.058504   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 11:50:13.072830   19545 logs.go:276] 2 containers: [bdb8b0d63638 7a7ed811dead]
	I0819 11:50:13.072898   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 11:50:13.083631   19545 logs.go:276] 1 containers: [03dd95625e48]
	I0819 11:50:13.083692   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 11:50:13.095341   19545 logs.go:276] 2 containers: [d9195d59990e e935629bad41]
	I0819 11:50:13.095408   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 11:50:13.105616   19545 logs.go:276] 0 containers: []
	W0819 11:50:13.105626   19545 logs.go:278] No container was found matching "kindnet"
	I0819 11:50:13.105678   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 11:50:13.120672   19545 logs.go:276] 2 containers: [fd0be420d5f9 02af235bd51f]
	I0819 11:50:13.120691   19545 logs.go:123] Gathering logs for kube-proxy [03dd95625e48] ...
	I0819 11:50:13.120697   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03dd95625e48"
	I0819 11:50:13.152922   19545 logs.go:123] Gathering logs for kube-controller-manager [e935629bad41] ...
	I0819 11:50:13.152935   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e935629bad41"
	I0819 11:50:13.166829   19545 logs.go:123] Gathering logs for etcd [605f9b171d46] ...
	I0819 11:50:13.166839   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 605f9b171d46"
	I0819 11:50:13.180803   19545 logs.go:123] Gathering logs for coredns [088a7d76a3fd] ...
	I0819 11:50:13.180816   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 088a7d76a3fd"
	I0819 11:50:13.191760   19545 logs.go:123] Gathering logs for kube-apiserver [04973e14da79] ...
	I0819 11:50:13.191770   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 04973e14da79"
	I0819 11:50:13.217351   19545 logs.go:123] Gathering logs for kube-scheduler [bdb8b0d63638] ...
	I0819 11:50:13.217362   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bdb8b0d63638"
	I0819 11:50:13.228821   19545 logs.go:123] Gathering logs for Docker ...
	I0819 11:50:13.228831   19545 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 11:50:13.253510   19545 logs.go:123] Gathering logs for container status ...
	I0819 11:50:13.253522   19545 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 11:50:13.265678   19545 logs.go:123] Gathering logs for dmesg ...
	I0819 11:50:13.265692   19545 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 11:50:13.269944   19545 logs.go:123] Gathering logs for describe nodes ...
	I0819 11:50:13.269951   19545 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 11:50:13.307594   19545 logs.go:123] Gathering logs for etcd [07703ddc91e4] ...
	I0819 11:50:13.307607   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 07703ddc91e4"
	I0819 11:50:13.322672   19545 logs.go:123] Gathering logs for kube-controller-manager [d9195d59990e] ...
	I0819 11:50:13.322681   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9195d59990e"
	I0819 11:50:13.339661   19545 logs.go:123] Gathering logs for storage-provisioner [02af235bd51f] ...
	I0819 11:50:13.339672   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 02af235bd51f"
	I0819 11:50:13.351173   19545 logs.go:123] Gathering logs for kubelet ...
	I0819 11:50:13.351185   19545 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 11:50:13.390464   19545 logs.go:123] Gathering logs for kube-apiserver [af251a7e4dc2] ...
	I0819 11:50:13.390479   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af251a7e4dc2"
	I0819 11:50:13.409607   19545 logs.go:123] Gathering logs for kube-scheduler [7a7ed811dead] ...
	I0819 11:50:13.409620   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a7ed811dead"
	I0819 11:50:13.424144   19545 logs.go:123] Gathering logs for storage-provisioner [fd0be420d5f9] ...
	I0819 11:50:13.424155   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd0be420d5f9"
	I0819 11:50:15.937644   19545 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:50:14.003455   19417 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:50:14.003694   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 11:50:14.030307   19417 logs.go:276] 1 containers: [45f3dc60aedc]
	I0819 11:50:14.030410   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 11:50:14.047199   19417 logs.go:276] 1 containers: [4122edd3dc51]
	I0819 11:50:14.047291   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 11:50:14.060521   19417 logs.go:276] 2 containers: [db6dffb8b3ea 3d2d44fde489]
	I0819 11:50:14.060599   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 11:50:14.071899   19417 logs.go:276] 1 containers: [2cfe471024b0]
	I0819 11:50:14.071971   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 11:50:14.082236   19417 logs.go:276] 1 containers: [26ea981cbd0c]
	I0819 11:50:14.082305   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 11:50:14.093742   19417 logs.go:276] 1 containers: [b3325243ca65]
	I0819 11:50:14.093805   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 11:50:14.104088   19417 logs.go:276] 0 containers: []
	W0819 11:50:14.104099   19417 logs.go:278] No container was found matching "kindnet"
	I0819 11:50:14.104150   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 11:50:14.115037   19417 logs.go:276] 1 containers: [74c6a87ae3c8]
	I0819 11:50:14.115053   19417 logs.go:123] Gathering logs for storage-provisioner [74c6a87ae3c8] ...
	I0819 11:50:14.115058   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74c6a87ae3c8"
	I0819 11:50:14.126894   19417 logs.go:123] Gathering logs for kubelet ...
	I0819 11:50:14.126908   19417 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 11:50:14.164461   19417 logs.go:123] Gathering logs for dmesg ...
	I0819 11:50:14.164472   19417 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 11:50:14.169120   19417 logs.go:123] Gathering logs for etcd [4122edd3dc51] ...
	I0819 11:50:14.169128   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4122edd3dc51"
	I0819 11:50:14.182555   19417 logs.go:123] Gathering logs for coredns [db6dffb8b3ea] ...
	I0819 11:50:14.182566   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db6dffb8b3ea"
	I0819 11:50:14.194387   19417 logs.go:123] Gathering logs for coredns [3d2d44fde489] ...
	I0819 11:50:14.194398   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3d2d44fde489"
	I0819 11:50:14.206976   19417 logs.go:123] Gathering logs for kube-scheduler [2cfe471024b0] ...
	I0819 11:50:14.206989   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2cfe471024b0"
	I0819 11:50:14.221816   19417 logs.go:123] Gathering logs for kube-proxy [26ea981cbd0c] ...
	I0819 11:50:14.221831   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 26ea981cbd0c"
	I0819 11:50:14.233741   19417 logs.go:123] Gathering logs for Docker ...
	I0819 11:50:14.233752   19417 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 11:50:14.259435   19417 logs.go:123] Gathering logs for container status ...
	I0819 11:50:14.259443   19417 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 11:50:14.270417   19417 logs.go:123] Gathering logs for describe nodes ...
	I0819 11:50:14.270430   19417 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 11:50:14.309734   19417 logs.go:123] Gathering logs for kube-apiserver [45f3dc60aedc] ...
	I0819 11:50:14.309752   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45f3dc60aedc"
	I0819 11:50:14.324384   19417 logs.go:123] Gathering logs for kube-controller-manager [b3325243ca65] ...
	I0819 11:50:14.324395   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3325243ca65"
	I0819 11:50:16.843671   19417 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:50:20.940144   19545 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:50:20.940387   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 11:50:20.960853   19545 logs.go:276] 2 containers: [af251a7e4dc2 04973e14da79]
	I0819 11:50:20.960950   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 11:50:20.975719   19545 logs.go:276] 2 containers: [605f9b171d46 07703ddc91e4]
	I0819 11:50:20.975798   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 11:50:20.989198   19545 logs.go:276] 1 containers: [088a7d76a3fd]
	I0819 11:50:20.989290   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 11:50:20.999987   19545 logs.go:276] 2 containers: [bdb8b0d63638 7a7ed811dead]
	I0819 11:50:21.000059   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 11:50:21.010595   19545 logs.go:276] 1 containers: [03dd95625e48]
	I0819 11:50:21.010670   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 11:50:21.021208   19545 logs.go:276] 2 containers: [d9195d59990e e935629bad41]
	I0819 11:50:21.021281   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 11:50:21.031490   19545 logs.go:276] 0 containers: []
	W0819 11:50:21.031501   19545 logs.go:278] No container was found matching "kindnet"
	I0819 11:50:21.031553   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 11:50:21.041818   19545 logs.go:276] 2 containers: [fd0be420d5f9 02af235bd51f]
	I0819 11:50:21.041834   19545 logs.go:123] Gathering logs for kube-scheduler [7a7ed811dead] ...
	I0819 11:50:21.041842   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a7ed811dead"
	I0819 11:50:21.061098   19545 logs.go:123] Gathering logs for kube-apiserver [04973e14da79] ...
	I0819 11:50:21.061108   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 04973e14da79"
	I0819 11:50:21.086203   19545 logs.go:123] Gathering logs for coredns [088a7d76a3fd] ...
	I0819 11:50:21.086215   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 088a7d76a3fd"
	I0819 11:50:21.097658   19545 logs.go:123] Gathering logs for storage-provisioner [fd0be420d5f9] ...
	I0819 11:50:21.097669   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd0be420d5f9"
	I0819 11:50:21.109302   19545 logs.go:123] Gathering logs for Docker ...
	I0819 11:50:21.109312   19545 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 11:50:21.134378   19545 logs.go:123] Gathering logs for etcd [605f9b171d46] ...
	I0819 11:50:21.134388   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 605f9b171d46"
	I0819 11:50:21.148681   19545 logs.go:123] Gathering logs for kube-controller-manager [d9195d59990e] ...
	I0819 11:50:21.148694   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9195d59990e"
	I0819 11:50:21.165948   19545 logs.go:123] Gathering logs for etcd [07703ddc91e4] ...
	I0819 11:50:21.165960   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 07703ddc91e4"
	I0819 11:50:21.180056   19545 logs.go:123] Gathering logs for kube-scheduler [bdb8b0d63638] ...
	I0819 11:50:21.180066   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bdb8b0d63638"
	I0819 11:50:21.194903   19545 logs.go:123] Gathering logs for kube-proxy [03dd95625e48] ...
	I0819 11:50:21.194912   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03dd95625e48"
	I0819 11:50:21.206299   19545 logs.go:123] Gathering logs for kube-controller-manager [e935629bad41] ...
	I0819 11:50:21.206310   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e935629bad41"
	I0819 11:50:21.224380   19545 logs.go:123] Gathering logs for storage-provisioner [02af235bd51f] ...
	I0819 11:50:21.224391   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 02af235bd51f"
	I0819 11:50:21.235132   19545 logs.go:123] Gathering logs for container status ...
	I0819 11:50:21.235144   19545 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 11:50:21.247176   19545 logs.go:123] Gathering logs for kubelet ...
	I0819 11:50:21.247186   19545 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 11:50:21.286735   19545 logs.go:123] Gathering logs for dmesg ...
	I0819 11:50:21.286746   19545 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 11:50:21.291466   19545 logs.go:123] Gathering logs for describe nodes ...
	I0819 11:50:21.291474   19545 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 11:50:21.327117   19545 logs.go:123] Gathering logs for kube-apiserver [af251a7e4dc2] ...
	I0819 11:50:21.327128   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af251a7e4dc2"
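
Note: the repeating cycle above is driven by the apiserver health probe at api_server.go:253 — minikube polls https://10.0.2.15:8443/healthz, and when the request exceeds the client timeout it logs the "stopped: ... Client.Timeout exceeded while awaiting headers" line at api_server.go:269 and falls back to another round of log gathering. A minimal Go sketch of such a probe, not minikube's actual implementation — the 5-second timeout and the skipped certificate check are assumptions for illustration:

    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{
            // illustrative timeout; exceeding it while waiting for response
            // headers yields the "Client.Timeout exceeded while awaiting
            // headers" error seen in the log above
            Timeout: 5 * time.Second,
            Transport: &http.Transport{
                // assumption: the apiserver presents a certificate this
                // probe host does not trust, so verification is skipped
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
            },
        }
        resp, err := client.Get("https://10.0.2.15:8443/healthz")
        if err != nil {
            fmt.Println("stopped:", err)
            return
        }
        defer resp.Body.Close()
        fmt.Println("healthz:", resp.Status)
    }
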
	I0819 11:50:21.846068   19417 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:50:21.846202   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 11:50:21.858480   19417 logs.go:276] 1 containers: [45f3dc60aedc]
	I0819 11:50:21.858546   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 11:50:21.869796   19417 logs.go:276] 1 containers: [4122edd3dc51]
	I0819 11:50:21.869855   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 11:50:21.881513   19417 logs.go:276] 2 containers: [db6dffb8b3ea 3d2d44fde489]
	I0819 11:50:21.881588   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 11:50:21.892475   19417 logs.go:276] 1 containers: [2cfe471024b0]
	I0819 11:50:21.892541   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 11:50:21.903164   19417 logs.go:276] 1 containers: [26ea981cbd0c]
	I0819 11:50:21.903231   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 11:50:21.913917   19417 logs.go:276] 1 containers: [b3325243ca65]
	I0819 11:50:21.913977   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 11:50:21.924472   19417 logs.go:276] 0 containers: []
	W0819 11:50:21.924483   19417 logs.go:278] No container was found matching "kindnet"
	I0819 11:50:21.924533   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 11:50:21.940257   19417 logs.go:276] 1 containers: [74c6a87ae3c8]
	I0819 11:50:21.940273   19417 logs.go:123] Gathering logs for storage-provisioner [74c6a87ae3c8] ...
	I0819 11:50:21.940279   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74c6a87ae3c8"
	I0819 11:50:21.953756   19417 logs.go:123] Gathering logs for Docker ...
	I0819 11:50:21.953766   19417 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 11:50:21.978417   19417 logs.go:123] Gathering logs for kubelet ...
	I0819 11:50:21.978430   19417 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 11:50:22.015171   19417 logs.go:123] Gathering logs for kube-apiserver [45f3dc60aedc] ...
	I0819 11:50:22.015182   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45f3dc60aedc"
	I0819 11:50:22.030205   19417 logs.go:123] Gathering logs for etcd [4122edd3dc51] ...
	I0819 11:50:22.030215   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4122edd3dc51"
	I0819 11:50:22.045013   19417 logs.go:123] Gathering logs for coredns [db6dffb8b3ea] ...
	I0819 11:50:22.045026   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db6dffb8b3ea"
	I0819 11:50:22.058762   19417 logs.go:123] Gathering logs for kube-scheduler [2cfe471024b0] ...
	I0819 11:50:22.058774   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2cfe471024b0"
	I0819 11:50:22.075247   19417 logs.go:123] Gathering logs for container status ...
	I0819 11:50:22.075263   19417 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 11:50:22.087392   19417 logs.go:123] Gathering logs for dmesg ...
	I0819 11:50:22.087415   19417 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 11:50:22.091934   19417 logs.go:123] Gathering logs for describe nodes ...
	I0819 11:50:22.091941   19417 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 11:50:22.130125   19417 logs.go:123] Gathering logs for coredns [3d2d44fde489] ...
	I0819 11:50:22.130139   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3d2d44fde489"
	I0819 11:50:22.142363   19417 logs.go:123] Gathering logs for kube-proxy [26ea981cbd0c] ...
	I0819 11:50:22.142378   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 26ea981cbd0c"
	I0819 11:50:22.154650   19417 logs.go:123] Gathering logs for kube-controller-manager [b3325243ca65] ...
	I0819 11:50:22.154662   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3325243ca65"
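
Note: each gathering pass begins by resolving container IDs per control-plane component via docker ps -a --filter=name=k8s_<component> --format={{.ID}}, as the Run: lines above show; logs.go:276 then reports the count. A rough sketch of that enumeration — minikube runs these commands inside the VM over SSH via ssh_runner, so executing them locally here is an illustrative simplification:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        // component names taken from the log above
        components := []string{"kube-apiserver", "etcd", "coredns",
            "kube-scheduler", "kube-proxy", "kube-controller-manager",
            "kindnet", "storage-provisioner"}
        for _, c := range components {
            out, err := exec.Command("docker", "ps", "-a",
                "--filter", "name=k8s_"+c, "--format", "{{.ID}}").Output()
            if err != nil {
                fmt.Println(c, "error:", err)
                continue
            }
            ids := strings.Fields(string(out))
            // mirrors the "N containers: [...]" lines printed by logs.go:276
            fmt.Printf("%s: %d containers: %v\n", c, len(ids), ids)
        }
    }
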
	I0819 11:50:23.846649   19545 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:50:24.674802   19417 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:50:28.848793   19545 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:50:28.848962   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 11:50:28.861414   19545 logs.go:276] 2 containers: [af251a7e4dc2 04973e14da79]
	I0819 11:50:28.861495   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 11:50:28.872180   19545 logs.go:276] 2 containers: [605f9b171d46 07703ddc91e4]
	I0819 11:50:28.872253   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 11:50:28.882932   19545 logs.go:276] 1 containers: [088a7d76a3fd]
	I0819 11:50:28.883006   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 11:50:28.893275   19545 logs.go:276] 2 containers: [bdb8b0d63638 7a7ed811dead]
	I0819 11:50:28.893346   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 11:50:28.909992   19545 logs.go:276] 1 containers: [03dd95625e48]
	I0819 11:50:28.910058   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 11:50:28.920466   19545 logs.go:276] 2 containers: [d9195d59990e e935629bad41]
	I0819 11:50:28.920537   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 11:50:28.930782   19545 logs.go:276] 0 containers: []
	W0819 11:50:28.930794   19545 logs.go:278] No container was found matching "kindnet"
	I0819 11:50:28.930850   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 11:50:28.941366   19545 logs.go:276] 2 containers: [fd0be420d5f9 02af235bd51f]
	I0819 11:50:28.941385   19545 logs.go:123] Gathering logs for kubelet ...
	I0819 11:50:28.941392   19545 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 11:50:28.980123   19545 logs.go:123] Gathering logs for kube-apiserver [04973e14da79] ...
	I0819 11:50:28.980134   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 04973e14da79"
	I0819 11:50:29.004762   19545 logs.go:123] Gathering logs for coredns [088a7d76a3fd] ...
	I0819 11:50:29.004773   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 088a7d76a3fd"
	I0819 11:50:29.018944   19545 logs.go:123] Gathering logs for kube-proxy [03dd95625e48] ...
	I0819 11:50:29.018956   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03dd95625e48"
	I0819 11:50:29.034683   19545 logs.go:123] Gathering logs for kube-controller-manager [d9195d59990e] ...
	I0819 11:50:29.034696   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9195d59990e"
	I0819 11:50:29.051807   19545 logs.go:123] Gathering logs for kube-scheduler [7a7ed811dead] ...
	I0819 11:50:29.051820   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a7ed811dead"
	I0819 11:50:29.071119   19545 logs.go:123] Gathering logs for kube-controller-manager [e935629bad41] ...
	I0819 11:50:29.071130   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e935629bad41"
	I0819 11:50:29.086581   19545 logs.go:123] Gathering logs for storage-provisioner [02af235bd51f] ...
	I0819 11:50:29.086592   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 02af235bd51f"
	I0819 11:50:29.097887   19545 logs.go:123] Gathering logs for dmesg ...
	I0819 11:50:29.097900   19545 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 11:50:29.102413   19545 logs.go:123] Gathering logs for describe nodes ...
	I0819 11:50:29.102422   19545 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 11:50:29.139393   19545 logs.go:123] Gathering logs for etcd [07703ddc91e4] ...
	I0819 11:50:29.139407   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 07703ddc91e4"
	I0819 11:50:29.153896   19545 logs.go:123] Gathering logs for kube-scheduler [bdb8b0d63638] ...
	I0819 11:50:29.153908   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bdb8b0d63638"
	I0819 11:50:29.165482   19545 logs.go:123] Gathering logs for storage-provisioner [fd0be420d5f9] ...
	I0819 11:50:29.165493   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd0be420d5f9"
	I0819 11:50:29.177050   19545 logs.go:123] Gathering logs for kube-apiserver [af251a7e4dc2] ...
	I0819 11:50:29.177061   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af251a7e4dc2"
	I0819 11:50:29.192416   19545 logs.go:123] Gathering logs for etcd [605f9b171d46] ...
	I0819 11:50:29.192428   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 605f9b171d46"
	I0819 11:50:29.206169   19545 logs.go:123] Gathering logs for Docker ...
	I0819 11:50:29.206181   19545 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 11:50:29.231557   19545 logs.go:123] Gathering logs for container status ...
	I0819 11:50:29.231564   19545 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 11:50:31.745011   19545 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:50:29.676396   19417 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:50:29.676588   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 11:50:29.703555   19417 logs.go:276] 1 containers: [45f3dc60aedc]
	I0819 11:50:29.703677   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 11:50:29.721228   19417 logs.go:276] 1 containers: [4122edd3dc51]
	I0819 11:50:29.721313   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 11:50:29.735228   19417 logs.go:276] 2 containers: [db6dffb8b3ea 3d2d44fde489]
	I0819 11:50:29.735297   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 11:50:29.746933   19417 logs.go:276] 1 containers: [2cfe471024b0]
	I0819 11:50:29.746999   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 11:50:29.758101   19417 logs.go:276] 1 containers: [26ea981cbd0c]
	I0819 11:50:29.758170   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 11:50:29.769471   19417 logs.go:276] 1 containers: [b3325243ca65]
	I0819 11:50:29.769536   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 11:50:29.780211   19417 logs.go:276] 0 containers: []
	W0819 11:50:29.780226   19417 logs.go:278] No container was found matching "kindnet"
	I0819 11:50:29.780286   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 11:50:29.791209   19417 logs.go:276] 1 containers: [74c6a87ae3c8]
	I0819 11:50:29.791224   19417 logs.go:123] Gathering logs for kubelet ...
	I0819 11:50:29.791229   19417 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 11:50:29.828222   19417 logs.go:123] Gathering logs for coredns [db6dffb8b3ea] ...
	I0819 11:50:29.828234   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db6dffb8b3ea"
	I0819 11:50:29.841309   19417 logs.go:123] Gathering logs for kube-controller-manager [b3325243ca65] ...
	I0819 11:50:29.841322   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3325243ca65"
	I0819 11:50:29.859775   19417 logs.go:123] Gathering logs for storage-provisioner [74c6a87ae3c8] ...
	I0819 11:50:29.859793   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74c6a87ae3c8"
	I0819 11:50:29.872323   19417 logs.go:123] Gathering logs for Docker ...
	I0819 11:50:29.872336   19417 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 11:50:29.895548   19417 logs.go:123] Gathering logs for kube-proxy [26ea981cbd0c] ...
	I0819 11:50:29.895557   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 26ea981cbd0c"
	I0819 11:50:29.911523   19417 logs.go:123] Gathering logs for container status ...
	I0819 11:50:29.911534   19417 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 11:50:29.923012   19417 logs.go:123] Gathering logs for dmesg ...
	I0819 11:50:29.923025   19417 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 11:50:29.927647   19417 logs.go:123] Gathering logs for describe nodes ...
	I0819 11:50:29.927657   19417 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 11:50:29.963465   19417 logs.go:123] Gathering logs for kube-apiserver [45f3dc60aedc] ...
	I0819 11:50:29.963479   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45f3dc60aedc"
	I0819 11:50:29.978222   19417 logs.go:123] Gathering logs for etcd [4122edd3dc51] ...
	I0819 11:50:29.978235   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4122edd3dc51"
	I0819 11:50:29.992592   19417 logs.go:123] Gathering logs for coredns [3d2d44fde489] ...
	I0819 11:50:29.992601   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3d2d44fde489"
	I0819 11:50:30.004361   19417 logs.go:123] Gathering logs for kube-scheduler [2cfe471024b0] ...
	I0819 11:50:30.004375   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2cfe471024b0"
	I0819 11:50:32.521495   19417 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:50:36.747196   19545 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:50:36.747418   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 11:50:36.762778   19545 logs.go:276] 2 containers: [af251a7e4dc2 04973e14da79]
	I0819 11:50:36.762869   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 11:50:36.774524   19545 logs.go:276] 2 containers: [605f9b171d46 07703ddc91e4]
	I0819 11:50:36.774601   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 11:50:36.785521   19545 logs.go:276] 1 containers: [088a7d76a3fd]
	I0819 11:50:36.785592   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 11:50:36.796134   19545 logs.go:276] 2 containers: [bdb8b0d63638 7a7ed811dead]
	I0819 11:50:36.796199   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 11:50:36.810410   19545 logs.go:276] 1 containers: [03dd95625e48]
	I0819 11:50:36.810468   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 11:50:36.820784   19545 logs.go:276] 2 containers: [d9195d59990e e935629bad41]
	I0819 11:50:36.820852   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 11:50:36.832473   19545 logs.go:276] 0 containers: []
	W0819 11:50:36.832486   19545 logs.go:278] No container was found matching "kindnet"
	I0819 11:50:36.832548   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 11:50:36.843231   19545 logs.go:276] 2 containers: [fd0be420d5f9 02af235bd51f]
	I0819 11:50:36.843249   19545 logs.go:123] Gathering logs for describe nodes ...
	I0819 11:50:36.843256   19545 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 11:50:36.878362   19545 logs.go:123] Gathering logs for kube-controller-manager [e935629bad41] ...
	I0819 11:50:36.878374   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e935629bad41"
	I0819 11:50:36.893178   19545 logs.go:123] Gathering logs for dmesg ...
	I0819 11:50:36.893188   19545 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 11:50:36.897367   19545 logs.go:123] Gathering logs for Docker ...
	I0819 11:50:36.897373   19545 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 11:50:36.921277   19545 logs.go:123] Gathering logs for container status ...
	I0819 11:50:36.921286   19545 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 11:50:36.932528   19545 logs.go:123] Gathering logs for kube-apiserver [af251a7e4dc2] ...
	I0819 11:50:36.932542   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af251a7e4dc2"
	I0819 11:50:36.946251   19545 logs.go:123] Gathering logs for kube-apiserver [04973e14da79] ...
	I0819 11:50:36.946263   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 04973e14da79"
	I0819 11:50:36.973051   19545 logs.go:123] Gathering logs for etcd [605f9b171d46] ...
	I0819 11:50:36.973062   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 605f9b171d46"
	I0819 11:50:36.986555   19545 logs.go:123] Gathering logs for etcd [07703ddc91e4] ...
	I0819 11:50:36.986565   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 07703ddc91e4"
	I0819 11:50:37.001081   19545 logs.go:123] Gathering logs for kube-scheduler [bdb8b0d63638] ...
	I0819 11:50:37.001092   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bdb8b0d63638"
	I0819 11:50:37.012851   19545 logs.go:123] Gathering logs for kube-proxy [03dd95625e48] ...
	I0819 11:50:37.012865   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03dd95625e48"
	I0819 11:50:37.024451   19545 logs.go:123] Gathering logs for kubelet ...
	I0819 11:50:37.024464   19545 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 11:50:37.063183   19545 logs.go:123] Gathering logs for coredns [088a7d76a3fd] ...
	I0819 11:50:37.063192   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 088a7d76a3fd"
	I0819 11:50:37.074493   19545 logs.go:123] Gathering logs for kube-scheduler [7a7ed811dead] ...
	I0819 11:50:37.074507   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a7ed811dead"
	I0819 11:50:37.093094   19545 logs.go:123] Gathering logs for kube-controller-manager [d9195d59990e] ...
	I0819 11:50:37.093104   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9195d59990e"
	I0819 11:50:37.110971   19545 logs.go:123] Gathering logs for storage-provisioner [fd0be420d5f9] ...
	I0819 11:50:37.110984   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd0be420d5f9"
	I0819 11:50:37.127087   19545 logs.go:123] Gathering logs for storage-provisioner [02af235bd51f] ...
	I0819 11:50:37.127102   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 02af235bd51f"
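
Note: the recurring "container status" command deserves unpacking. sudo `which crictl || echo crictl` ps -a || sudo docker ps -a tries crictl when it is on the PATH (the echo crictl keeps the backtick substitution non-empty so the command line stays well-formed when it is not installed), and if that invocation fails for any reason the outer || falls through to plain sudo docker ps -a. The same preference order in Go, as a simplified sketch — unlike the shell version it only falls back when crictl is absent, not when it runs and fails:

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // prefer crictl when installed, otherwise fall back to docker,
        // mirroring: sudo `which crictl || echo crictl` ps -a || sudo docker ps -a
        tool := "docker"
        if _, err := exec.LookPath("crictl"); err == nil {
            tool = "crictl"
        }
        out, err := exec.Command(tool, "ps", "-a").CombinedOutput()
        if err != nil {
            fmt.Println(tool, "failed:", err)
            return
        }
        fmt.Print(string(out))
    }
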
	I0819 11:50:37.523806   19417 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:50:37.524002   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 11:50:37.544601   19417 logs.go:276] 1 containers: [45f3dc60aedc]
	I0819 11:50:37.544695   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 11:50:37.560479   19417 logs.go:276] 1 containers: [4122edd3dc51]
	I0819 11:50:37.560550   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 11:50:37.572091   19417 logs.go:276] 2 containers: [db6dffb8b3ea 3d2d44fde489]
	I0819 11:50:37.572165   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 11:50:37.583871   19417 logs.go:276] 1 containers: [2cfe471024b0]
	I0819 11:50:37.583950   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 11:50:37.594562   19417 logs.go:276] 1 containers: [26ea981cbd0c]
	I0819 11:50:37.594631   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 11:50:37.605276   19417 logs.go:276] 1 containers: [b3325243ca65]
	I0819 11:50:37.605339   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 11:50:37.616305   19417 logs.go:276] 0 containers: []
	W0819 11:50:37.616317   19417 logs.go:278] No container was found matching "kindnet"
	I0819 11:50:37.616378   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 11:50:37.627167   19417 logs.go:276] 1 containers: [74c6a87ae3c8]
	I0819 11:50:37.627183   19417 logs.go:123] Gathering logs for kube-scheduler [2cfe471024b0] ...
	I0819 11:50:37.627189   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2cfe471024b0"
	I0819 11:50:37.642248   19417 logs.go:123] Gathering logs for kube-proxy [26ea981cbd0c] ...
	I0819 11:50:37.642261   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 26ea981cbd0c"
	I0819 11:50:37.654445   19417 logs.go:123] Gathering logs for kubelet ...
	I0819 11:50:37.654457   19417 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 11:50:37.693676   19417 logs.go:123] Gathering logs for dmesg ...
	I0819 11:50:37.693685   19417 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 11:50:37.698676   19417 logs.go:123] Gathering logs for describe nodes ...
	I0819 11:50:37.698682   19417 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 11:50:37.742556   19417 logs.go:123] Gathering logs for kube-apiserver [45f3dc60aedc] ...
	I0819 11:50:37.742574   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45f3dc60aedc"
	I0819 11:50:37.758663   19417 logs.go:123] Gathering logs for etcd [4122edd3dc51] ...
	I0819 11:50:37.758676   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4122edd3dc51"
	I0819 11:50:37.773477   19417 logs.go:123] Gathering logs for coredns [db6dffb8b3ea] ...
	I0819 11:50:37.773488   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db6dffb8b3ea"
	I0819 11:50:37.785680   19417 logs.go:123] Gathering logs for container status ...
	I0819 11:50:37.785694   19417 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 11:50:37.797550   19417 logs.go:123] Gathering logs for coredns [3d2d44fde489] ...
	I0819 11:50:37.797564   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3d2d44fde489"
	I0819 11:50:37.814565   19417 logs.go:123] Gathering logs for kube-controller-manager [b3325243ca65] ...
	I0819 11:50:37.814575   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3325243ca65"
	I0819 11:50:37.835273   19417 logs.go:123] Gathering logs for storage-provisioner [74c6a87ae3c8] ...
	I0819 11:50:37.835285   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74c6a87ae3c8"
	I0819 11:50:37.848067   19417 logs.go:123] Gathering logs for Docker ...
	I0819 11:50:37.848080   19417 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 11:50:39.647147   19545 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:50:40.373462   19417 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:50:44.649380   19545 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:50:44.649533   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 11:50:44.661423   19545 logs.go:276] 2 containers: [af251a7e4dc2 04973e14da79]
	I0819 11:50:44.661494   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 11:50:44.672099   19545 logs.go:276] 2 containers: [605f9b171d46 07703ddc91e4]
	I0819 11:50:44.672173   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 11:50:44.682939   19545 logs.go:276] 1 containers: [088a7d76a3fd]
	I0819 11:50:44.683009   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 11:50:44.697111   19545 logs.go:276] 2 containers: [bdb8b0d63638 7a7ed811dead]
	I0819 11:50:44.697182   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 11:50:44.715128   19545 logs.go:276] 1 containers: [03dd95625e48]
	I0819 11:50:44.715192   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 11:50:44.725845   19545 logs.go:276] 2 containers: [d9195d59990e e935629bad41]
	I0819 11:50:44.725915   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 11:50:44.736172   19545 logs.go:276] 0 containers: []
	W0819 11:50:44.736183   19545 logs.go:278] No container was found matching "kindnet"
	I0819 11:50:44.736235   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 11:50:44.746764   19545 logs.go:276] 2 containers: [fd0be420d5f9 02af235bd51f]
	I0819 11:50:44.746783   19545 logs.go:123] Gathering logs for describe nodes ...
	I0819 11:50:44.746793   19545 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 11:50:44.786548   19545 logs.go:123] Gathering logs for kube-apiserver [04973e14da79] ...
	I0819 11:50:44.786563   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 04973e14da79"
	I0819 11:50:44.810834   19545 logs.go:123] Gathering logs for Docker ...
	I0819 11:50:44.810845   19545 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 11:50:44.835905   19545 logs.go:123] Gathering logs for container status ...
	I0819 11:50:44.835915   19545 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 11:50:44.847317   19545 logs.go:123] Gathering logs for kubelet ...
	I0819 11:50:44.847336   19545 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 11:50:44.886311   19545 logs.go:123] Gathering logs for kube-apiserver [af251a7e4dc2] ...
	I0819 11:50:44.886322   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af251a7e4dc2"
	I0819 11:50:44.900597   19545 logs.go:123] Gathering logs for kube-scheduler [bdb8b0d63638] ...
	I0819 11:50:44.900611   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bdb8b0d63638"
	I0819 11:50:44.911966   19545 logs.go:123] Gathering logs for kube-proxy [03dd95625e48] ...
	I0819 11:50:44.911981   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03dd95625e48"
	I0819 11:50:44.923284   19545 logs.go:123] Gathering logs for kube-controller-manager [d9195d59990e] ...
	I0819 11:50:44.923295   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9195d59990e"
	I0819 11:50:44.941062   19545 logs.go:123] Gathering logs for storage-provisioner [fd0be420d5f9] ...
	I0819 11:50:44.941073   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd0be420d5f9"
	I0819 11:50:44.952631   19545 logs.go:123] Gathering logs for storage-provisioner [02af235bd51f] ...
	I0819 11:50:44.952642   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 02af235bd51f"
	I0819 11:50:44.966495   19545 logs.go:123] Gathering logs for dmesg ...
	I0819 11:50:44.966510   19545 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 11:50:44.971150   19545 logs.go:123] Gathering logs for etcd [605f9b171d46] ...
	I0819 11:50:44.971156   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 605f9b171d46"
	I0819 11:50:44.988992   19545 logs.go:123] Gathering logs for coredns [088a7d76a3fd] ...
	I0819 11:50:44.989004   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 088a7d76a3fd"
	I0819 11:50:45.000476   19545 logs.go:123] Gathering logs for kube-scheduler [7a7ed811dead] ...
	I0819 11:50:45.000488   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a7ed811dead"
	I0819 11:50:45.015518   19545 logs.go:123] Gathering logs for kube-controller-manager [e935629bad41] ...
	I0819 11:50:45.015531   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e935629bad41"
	I0819 11:50:45.029725   19545 logs.go:123] Gathering logs for etcd [07703ddc91e4] ...
	I0819 11:50:45.029735   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 07703ddc91e4"
	I0819 11:50:47.546129   19545 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:50:45.375807   19417 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:50:45.376023   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 11:50:45.401944   19417 logs.go:276] 1 containers: [45f3dc60aedc]
	I0819 11:50:45.402046   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 11:50:45.416526   19417 logs.go:276] 1 containers: [4122edd3dc51]
	I0819 11:50:45.416600   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 11:50:45.428377   19417 logs.go:276] 2 containers: [db6dffb8b3ea 3d2d44fde489]
	I0819 11:50:45.428440   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 11:50:45.439393   19417 logs.go:276] 1 containers: [2cfe471024b0]
	I0819 11:50:45.439461   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 11:50:45.451259   19417 logs.go:276] 1 containers: [26ea981cbd0c]
	I0819 11:50:45.451328   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 11:50:45.462436   19417 logs.go:276] 1 containers: [b3325243ca65]
	I0819 11:50:45.462499   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 11:50:45.473462   19417 logs.go:276] 0 containers: []
	W0819 11:50:45.473476   19417 logs.go:278] No container was found matching "kindnet"
	I0819 11:50:45.473531   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 11:50:45.484720   19417 logs.go:276] 1 containers: [74c6a87ae3c8]
	I0819 11:50:45.484738   19417 logs.go:123] Gathering logs for dmesg ...
	I0819 11:50:45.484743   19417 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 11:50:45.489477   19417 logs.go:123] Gathering logs for describe nodes ...
	I0819 11:50:45.489484   19417 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 11:50:45.525413   19417 logs.go:123] Gathering logs for kube-apiserver [45f3dc60aedc] ...
	I0819 11:50:45.525430   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45f3dc60aedc"
	I0819 11:50:45.540525   19417 logs.go:123] Gathering logs for coredns [db6dffb8b3ea] ...
	I0819 11:50:45.540536   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db6dffb8b3ea"
	I0819 11:50:45.552892   19417 logs.go:123] Gathering logs for kube-proxy [26ea981cbd0c] ...
	I0819 11:50:45.552903   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 26ea981cbd0c"
	I0819 11:50:45.565179   19417 logs.go:123] Gathering logs for storage-provisioner [74c6a87ae3c8] ...
	I0819 11:50:45.565189   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74c6a87ae3c8"
	I0819 11:50:45.579131   19417 logs.go:123] Gathering logs for Docker ...
	I0819 11:50:45.579144   19417 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 11:50:45.602809   19417 logs.go:123] Gathering logs for kubelet ...
	I0819 11:50:45.602821   19417 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 11:50:45.641553   19417 logs.go:123] Gathering logs for etcd [4122edd3dc51] ...
	I0819 11:50:45.641570   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4122edd3dc51"
	I0819 11:50:45.656427   19417 logs.go:123] Gathering logs for coredns [3d2d44fde489] ...
	I0819 11:50:45.656440   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3d2d44fde489"
	I0819 11:50:45.669671   19417 logs.go:123] Gathering logs for kube-scheduler [2cfe471024b0] ...
	I0819 11:50:45.669685   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2cfe471024b0"
	I0819 11:50:45.685547   19417 logs.go:123] Gathering logs for kube-controller-manager [b3325243ca65] ...
	I0819 11:50:45.685559   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3325243ca65"
	I0819 11:50:45.705393   19417 logs.go:123] Gathering logs for container status ...
	I0819 11:50:45.705404   19417 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 11:50:48.220391   19417 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:50:52.546939   19545 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:50:52.547339   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 11:50:52.581493   19545 logs.go:276] 2 containers: [af251a7e4dc2 04973e14da79]
	I0819 11:50:52.581636   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 11:50:52.603106   19545 logs.go:276] 2 containers: [605f9b171d46 07703ddc91e4]
	I0819 11:50:52.603205   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 11:50:52.617202   19545 logs.go:276] 1 containers: [088a7d76a3fd]
	I0819 11:50:52.617283   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 11:50:52.633533   19545 logs.go:276] 2 containers: [bdb8b0d63638 7a7ed811dead]
	I0819 11:50:52.633607   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 11:50:52.644015   19545 logs.go:276] 1 containers: [03dd95625e48]
	I0819 11:50:52.644084   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 11:50:52.661692   19545 logs.go:276] 2 containers: [d9195d59990e e935629bad41]
	I0819 11:50:52.661758   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 11:50:52.671865   19545 logs.go:276] 0 containers: []
	W0819 11:50:52.671877   19545 logs.go:278] No container was found matching "kindnet"
	I0819 11:50:52.671936   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 11:50:52.687441   19545 logs.go:276] 2 containers: [fd0be420d5f9 02af235bd51f]
	I0819 11:50:52.687460   19545 logs.go:123] Gathering logs for kubelet ...
	I0819 11:50:52.687465   19545 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 11:50:52.726011   19545 logs.go:123] Gathering logs for etcd [605f9b171d46] ...
	I0819 11:50:52.726023   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 605f9b171d46"
	I0819 11:50:52.740339   19545 logs.go:123] Gathering logs for etcd [07703ddc91e4] ...
	I0819 11:50:52.740350   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 07703ddc91e4"
	I0819 11:50:52.754912   19545 logs.go:123] Gathering logs for kube-scheduler [7a7ed811dead] ...
	I0819 11:50:52.754926   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a7ed811dead"
	I0819 11:50:52.769868   19545 logs.go:123] Gathering logs for kube-controller-manager [e935629bad41] ...
	I0819 11:50:52.769879   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e935629bad41"
	I0819 11:50:52.784014   19545 logs.go:123] Gathering logs for kube-apiserver [af251a7e4dc2] ...
	I0819 11:50:52.784025   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af251a7e4dc2"
	I0819 11:50:52.797777   19545 logs.go:123] Gathering logs for coredns [088a7d76a3fd] ...
	I0819 11:50:52.797787   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 088a7d76a3fd"
	I0819 11:50:52.809503   19545 logs.go:123] Gathering logs for kube-proxy [03dd95625e48] ...
	I0819 11:50:52.809516   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03dd95625e48"
	I0819 11:50:52.820649   19545 logs.go:123] Gathering logs for kube-controller-manager [d9195d59990e] ...
	I0819 11:50:52.820660   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9195d59990e"
	I0819 11:50:52.843366   19545 logs.go:123] Gathering logs for container status ...
	I0819 11:50:52.843377   19545 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 11:50:52.857129   19545 logs.go:123] Gathering logs for kube-apiserver [04973e14da79] ...
	I0819 11:50:52.857139   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 04973e14da79"
	I0819 11:50:52.882269   19545 logs.go:123] Gathering logs for describe nodes ...
	I0819 11:50:52.882279   19545 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 11:50:52.918943   19545 logs.go:123] Gathering logs for kube-scheduler [bdb8b0d63638] ...
	I0819 11:50:52.918954   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bdb8b0d63638"
	I0819 11:50:52.931476   19545 logs.go:123] Gathering logs for storage-provisioner [fd0be420d5f9] ...
	I0819 11:50:52.931491   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd0be420d5f9"
	I0819 11:50:52.948517   19545 logs.go:123] Gathering logs for storage-provisioner [02af235bd51f] ...
	I0819 11:50:52.948529   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 02af235bd51f"
	I0819 11:50:52.965098   19545 logs.go:123] Gathering logs for Docker ...
	I0819 11:50:52.965108   19545 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 11:50:52.989390   19545 logs.go:123] Gathering logs for dmesg ...
	I0819 11:50:52.989397   19545 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 11:50:53.221701   19417 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:50:53.221828   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 11:50:53.235857   19417 logs.go:276] 1 containers: [45f3dc60aedc]
	I0819 11:50:53.235930   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 11:50:53.247988   19417 logs.go:276] 1 containers: [4122edd3dc51]
	I0819 11:50:53.248061   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 11:50:53.260534   19417 logs.go:276] 2 containers: [db6dffb8b3ea 3d2d44fde489]
	I0819 11:50:53.260604   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 11:50:53.271918   19417 logs.go:276] 1 containers: [2cfe471024b0]
	I0819 11:50:53.271987   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 11:50:53.284339   19417 logs.go:276] 1 containers: [26ea981cbd0c]
	I0819 11:50:53.284415   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 11:50:53.295823   19417 logs.go:276] 1 containers: [b3325243ca65]
	I0819 11:50:53.295900   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 11:50:53.306523   19417 logs.go:276] 0 containers: []
	W0819 11:50:53.306533   19417 logs.go:278] No container was found matching "kindnet"
	I0819 11:50:53.306584   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 11:50:53.318079   19417 logs.go:276] 1 containers: [74c6a87ae3c8]
	I0819 11:50:53.318094   19417 logs.go:123] Gathering logs for kubelet ...
	I0819 11:50:53.318099   19417 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 11:50:53.357053   19417 logs.go:123] Gathering logs for describe nodes ...
	I0819 11:50:53.357067   19417 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 11:50:53.392409   19417 logs.go:123] Gathering logs for kube-proxy [26ea981cbd0c] ...
	I0819 11:50:53.392420   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 26ea981cbd0c"
	I0819 11:50:53.404140   19417 logs.go:123] Gathering logs for kube-controller-manager [b3325243ca65] ...
	I0819 11:50:53.404151   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3325243ca65"
	I0819 11:50:53.421778   19417 logs.go:123] Gathering logs for storage-provisioner [74c6a87ae3c8] ...
	I0819 11:50:53.421788   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74c6a87ae3c8"
	I0819 11:50:53.433287   19417 logs.go:123] Gathering logs for container status ...
	I0819 11:50:53.433297   19417 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 11:50:53.455262   19417 logs.go:123] Gathering logs for Docker ...
	I0819 11:50:53.455273   19417 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 11:50:53.479838   19417 logs.go:123] Gathering logs for dmesg ...
	I0819 11:50:53.479847   19417 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 11:50:53.484351   19417 logs.go:123] Gathering logs for kube-apiserver [45f3dc60aedc] ...
	I0819 11:50:53.484358   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45f3dc60aedc"
	I0819 11:50:53.499066   19417 logs.go:123] Gathering logs for etcd [4122edd3dc51] ...
	I0819 11:50:53.499076   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4122edd3dc51"
	I0819 11:50:53.513547   19417 logs.go:123] Gathering logs for coredns [db6dffb8b3ea] ...
	I0819 11:50:53.513558   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db6dffb8b3ea"
	I0819 11:50:53.525078   19417 logs.go:123] Gathering logs for coredns [3d2d44fde489] ...
	I0819 11:50:53.525088   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3d2d44fde489"
	I0819 11:50:53.537082   19417 logs.go:123] Gathering logs for kube-scheduler [2cfe471024b0] ...
	I0819 11:50:53.537093   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2cfe471024b0"
	I0819 11:50:55.495805   19545 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:50:56.053984   19417 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:51:00.496796   19545 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:51:00.497210   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 11:51:00.527346   19545 logs.go:276] 2 containers: [af251a7e4dc2 04973e14da79]
	I0819 11:51:00.527472   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 11:51:00.545573   19545 logs.go:276] 2 containers: [605f9b171d46 07703ddc91e4]
	I0819 11:51:00.545669   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 11:51:00.559942   19545 logs.go:276] 1 containers: [088a7d76a3fd]
	I0819 11:51:00.560013   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 11:51:00.572195   19545 logs.go:276] 2 containers: [bdb8b0d63638 7a7ed811dead]
	I0819 11:51:00.572278   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 11:51:00.582795   19545 logs.go:276] 1 containers: [03dd95625e48]
	I0819 11:51:00.582868   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 11:51:00.595966   19545 logs.go:276] 2 containers: [d9195d59990e e935629bad41]
	I0819 11:51:00.596037   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 11:51:00.606372   19545 logs.go:276] 0 containers: []
	W0819 11:51:00.606384   19545 logs.go:278] No container was found matching "kindnet"
	I0819 11:51:00.606444   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 11:51:00.617056   19545 logs.go:276] 2 containers: [fd0be420d5f9 02af235bd51f]
	I0819 11:51:00.617073   19545 logs.go:123] Gathering logs for kube-scheduler [7a7ed811dead] ...
	I0819 11:51:00.617078   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a7ed811dead"
	I0819 11:51:00.632499   19545 logs.go:123] Gathering logs for storage-provisioner [fd0be420d5f9] ...
	I0819 11:51:00.632511   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd0be420d5f9"
	I0819 11:51:00.645264   19545 logs.go:123] Gathering logs for storage-provisioner [02af235bd51f] ...
	I0819 11:51:00.645273   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 02af235bd51f"
	I0819 11:51:00.657277   19545 logs.go:123] Gathering logs for container status ...
	I0819 11:51:00.657289   19545 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 11:51:00.669617   19545 logs.go:123] Gathering logs for kube-apiserver [04973e14da79] ...
	I0819 11:51:00.669629   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 04973e14da79"
	I0819 11:51:00.694511   19545 logs.go:123] Gathering logs for etcd [605f9b171d46] ...
	I0819 11:51:00.694526   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 605f9b171d46"
	I0819 11:51:00.714435   19545 logs.go:123] Gathering logs for kube-scheduler [bdb8b0d63638] ...
	I0819 11:51:00.714444   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bdb8b0d63638"
	I0819 11:51:00.727376   19545 logs.go:123] Gathering logs for kube-controller-manager [d9195d59990e] ...
	I0819 11:51:00.727388   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9195d59990e"
	I0819 11:51:00.745520   19545 logs.go:123] Gathering logs for kube-controller-manager [e935629bad41] ...
	I0819 11:51:00.745533   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e935629bad41"
	I0819 11:51:00.759335   19545 logs.go:123] Gathering logs for kubelet ...
	I0819 11:51:00.759344   19545 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 11:51:00.796601   19545 logs.go:123] Gathering logs for describe nodes ...
	I0819 11:51:00.796610   19545 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 11:51:00.831765   19545 logs.go:123] Gathering logs for etcd [07703ddc91e4] ...
	I0819 11:51:00.831777   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 07703ddc91e4"
	I0819 11:51:00.846510   19545 logs.go:123] Gathering logs for coredns [088a7d76a3fd] ...
	I0819 11:51:00.846521   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 088a7d76a3fd"
	I0819 11:51:00.858412   19545 logs.go:123] Gathering logs for Docker ...
	I0819 11:51:00.858423   19545 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 11:51:00.881739   19545 logs.go:123] Gathering logs for dmesg ...
	I0819 11:51:00.881747   19545 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 11:51:00.886160   19545 logs.go:123] Gathering logs for kube-apiserver [af251a7e4dc2] ...
	I0819 11:51:00.886167   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af251a7e4dc2"
	I0819 11:51:00.904241   19545 logs.go:123] Gathering logs for kube-proxy [03dd95625e48] ...
	I0819 11:51:00.904251   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03dd95625e48"
	I0819 11:51:01.056231   19417 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:51:01.056364   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 11:51:01.077623   19417 logs.go:276] 1 containers: [45f3dc60aedc]
	I0819 11:51:01.077716   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 11:51:01.092072   19417 logs.go:276] 1 containers: [4122edd3dc51]
	I0819 11:51:01.092142   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 11:51:01.103710   19417 logs.go:276] 2 containers: [db6dffb8b3ea 3d2d44fde489]
	I0819 11:51:01.103781   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 11:51:01.119391   19417 logs.go:276] 1 containers: [2cfe471024b0]
	I0819 11:51:01.119468   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 11:51:01.130171   19417 logs.go:276] 1 containers: [26ea981cbd0c]
	I0819 11:51:01.130239   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 11:51:01.141863   19417 logs.go:276] 1 containers: [b3325243ca65]
	I0819 11:51:01.141927   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 11:51:01.152050   19417 logs.go:276] 0 containers: []
	W0819 11:51:01.152060   19417 logs.go:278] No container was found matching "kindnet"
	I0819 11:51:01.152113   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 11:51:01.168325   19417 logs.go:276] 1 containers: [74c6a87ae3c8]
	I0819 11:51:01.168340   19417 logs.go:123] Gathering logs for coredns [db6dffb8b3ea] ...
	I0819 11:51:01.168345   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db6dffb8b3ea"
	I0819 11:51:01.179989   19417 logs.go:123] Gathering logs for container status ...
	I0819 11:51:01.180000   19417 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 11:51:01.191885   19417 logs.go:123] Gathering logs for describe nodes ...
	I0819 11:51:01.191896   19417 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 11:51:01.227166   19417 logs.go:123] Gathering logs for dmesg ...
	I0819 11:51:01.227178   19417 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 11:51:01.231436   19417 logs.go:123] Gathering logs for kube-apiserver [45f3dc60aedc] ...
	I0819 11:51:01.231442   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45f3dc60aedc"
	I0819 11:51:01.246179   19417 logs.go:123] Gathering logs for etcd [4122edd3dc51] ...
	I0819 11:51:01.246192   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4122edd3dc51"
	I0819 11:51:01.268786   19417 logs.go:123] Gathering logs for coredns [3d2d44fde489] ...
	I0819 11:51:01.268799   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3d2d44fde489"
	I0819 11:51:01.280841   19417 logs.go:123] Gathering logs for kube-scheduler [2cfe471024b0] ...
	I0819 11:51:01.280854   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2cfe471024b0"
	I0819 11:51:01.295466   19417 logs.go:123] Gathering logs for kube-proxy [26ea981cbd0c] ...
	I0819 11:51:01.295476   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 26ea981cbd0c"
	I0819 11:51:01.307119   19417 logs.go:123] Gathering logs for kube-controller-manager [b3325243ca65] ...
	I0819 11:51:01.307130   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3325243ca65"
	I0819 11:51:01.324865   19417 logs.go:123] Gathering logs for kubelet ...
	I0819 11:51:01.324874   19417 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 11:51:01.362116   19417 logs.go:123] Gathering logs for Docker ...
	I0819 11:51:01.362128   19417 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 11:51:01.384875   19417 logs.go:123] Gathering logs for storage-provisioner [74c6a87ae3c8] ...
	I0819 11:51:01.384881   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74c6a87ae3c8"
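
Note (editor's sketch): the block above is one iteration of minikube's recovery loop — probe the apiserver's /healthz endpoint, and when the probe times out, enumerate every k8s_* container and tail its last 400 log lines for the report. The same iteration repeats below, for both test processes, until the wait deadline expires. A minimal sketch of that loop, assuming a 5-second client timeout and skipping TLS verification (the real client verifies against minikube's own CA):

```go
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"os/exec"
	"time"
)

// healthy probes the apiserver the way the "Checking apiserver healthz"
// lines above do: GET /healthz with a short client-side timeout.
func healthy(url string) bool {
	client := &http.Client{
		Timeout:   5 * time.Second, // matches the ~5 s gap before each "stopped:" line
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get(url)
	if err != nil {
		return false
	}
	resp.Body.Close()
	return resp.StatusCode == http.StatusOK
}

// gather mirrors one "docker ps -a --filter=name=k8s_<component>" line.
func gather(component string) {
	out, err := exec.Command("docker", "ps", "-a",
		"--filter", "name=k8s_"+component, "--format", "{{.ID}}").Output()
	if err != nil {
		return
	}
	fmt.Printf("%s containers: %s", component, out)
}

func main() {
	components := []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet", "storage-provisioner"}
	for !healthy("https://10.0.2.15:8443/healthz") {
		for _, c := range components {
			gather(c) // the real code then runs "docker logs --tail 400 <id>" per ID
		}
	}
}
```
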
	I0819 11:51:03.418043   19545 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:51:03.897917   19417 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:51:08.420356   19545 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:51:08.420737   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 11:51:08.454745   19545 logs.go:276] 2 containers: [af251a7e4dc2 04973e14da79]
	I0819 11:51:08.454881   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 11:51:08.472640   19545 logs.go:276] 2 containers: [605f9b171d46 07703ddc91e4]
	I0819 11:51:08.472722   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 11:51:08.486957   19545 logs.go:276] 1 containers: [088a7d76a3fd]
	I0819 11:51:08.487030   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 11:51:08.499468   19545 logs.go:276] 2 containers: [bdb8b0d63638 7a7ed811dead]
	I0819 11:51:08.499532   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 11:51:08.510149   19545 logs.go:276] 1 containers: [03dd95625e48]
	I0819 11:51:08.510215   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 11:51:08.525288   19545 logs.go:276] 2 containers: [d9195d59990e e935629bad41]
	I0819 11:51:08.525350   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 11:51:08.542279   19545 logs.go:276] 0 containers: []
	W0819 11:51:08.542295   19545 logs.go:278] No container was found matching "kindnet"
	I0819 11:51:08.542353   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 11:51:08.552957   19545 logs.go:276] 2 containers: [fd0be420d5f9 02af235bd51f]
	I0819 11:51:08.552974   19545 logs.go:123] Gathering logs for kubelet ...
	I0819 11:51:08.552979   19545 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 11:51:08.593552   19545 logs.go:123] Gathering logs for storage-provisioner [fd0be420d5f9] ...
	I0819 11:51:08.593567   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd0be420d5f9"
	I0819 11:51:08.605242   19545 logs.go:123] Gathering logs for kube-controller-manager [d9195d59990e] ...
	I0819 11:51:08.605254   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9195d59990e"
	I0819 11:51:08.624269   19545 logs.go:123] Gathering logs for container status ...
	I0819 11:51:08.624279   19545 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 11:51:08.636072   19545 logs.go:123] Gathering logs for kube-apiserver [04973e14da79] ...
	I0819 11:51:08.636086   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 04973e14da79"
	I0819 11:51:08.660220   19545 logs.go:123] Gathering logs for coredns [088a7d76a3fd] ...
	I0819 11:51:08.660232   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 088a7d76a3fd"
	I0819 11:51:08.671064   19545 logs.go:123] Gathering logs for kube-scheduler [bdb8b0d63638] ...
	I0819 11:51:08.671076   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bdb8b0d63638"
	I0819 11:51:08.684018   19545 logs.go:123] Gathering logs for etcd [07703ddc91e4] ...
	I0819 11:51:08.684031   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 07703ddc91e4"
	I0819 11:51:08.701029   19545 logs.go:123] Gathering logs for kube-scheduler [7a7ed811dead] ...
	I0819 11:51:08.701047   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a7ed811dead"
	I0819 11:51:08.717842   19545 logs.go:123] Gathering logs for kube-controller-manager [e935629bad41] ...
	I0819 11:51:08.717855   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e935629bad41"
	I0819 11:51:08.732445   19545 logs.go:123] Gathering logs for dmesg ...
	I0819 11:51:08.732455   19545 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 11:51:08.736425   19545 logs.go:123] Gathering logs for describe nodes ...
	I0819 11:51:08.736431   19545 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 11:51:08.781263   19545 logs.go:123] Gathering logs for etcd [605f9b171d46] ...
	I0819 11:51:08.781277   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 605f9b171d46"
	I0819 11:51:08.799490   19545 logs.go:123] Gathering logs for Docker ...
	I0819 11:51:08.799503   19545 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 11:51:08.825972   19545 logs.go:123] Gathering logs for kube-apiserver [af251a7e4dc2] ...
	I0819 11:51:08.825985   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af251a7e4dc2"
	I0819 11:51:08.841287   19545 logs.go:123] Gathering logs for kube-proxy [03dd95625e48] ...
	I0819 11:51:08.841302   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03dd95625e48"
	I0819 11:51:08.853776   19545 logs.go:123] Gathering logs for storage-provisioner [02af235bd51f] ...
	I0819 11:51:08.853786   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 02af235bd51f"
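
Every probe in this transcript fails with the same client-side error, "context deadline exceeded (Client.Timeout exceeded while awaiting headers)": the TCP connection can even succeed, but no HTTP response headers arrive before http.Client.Timeout fires. A self-contained reproduction of that exact error text (plain net/http behavior, not minikube code):

```go
package main

import (
	"fmt"
	"net"
	"net/http"
	"time"
)

func main() {
	// A listener that accepts connections but never writes an HTTP response,
	// like an apiserver that is up at the TCP level but not serving.
	ln, err := net.Listen("tcp", "127.0.0.1:0")
	if err != nil {
		panic(err)
	}
	go func() {
		var open []net.Conn
		for {
			c, err := ln.Accept()
			if err != nil {
				return
			}
			open = append(open, c) // hold the connection open, never respond
		}
	}()

	client := &http.Client{Timeout: 2 * time.Second}
	_, err = client.Get("http://" + ln.Addr().String() + "/healthz")
	fmt.Println(err)
	// Get "http://127.0.0.1:.../healthz": context deadline exceeded
	// (Client.Timeout exceeded while awaiting headers)
}
```
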
	I0819 11:51:11.365700   19545 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:51:08.900080   19417 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:51:08.900187   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 11:51:08.910871   19417 logs.go:276] 1 containers: [45f3dc60aedc]
	I0819 11:51:08.910941   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 11:51:08.920939   19417 logs.go:276] 1 containers: [4122edd3dc51]
	I0819 11:51:08.921015   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 11:51:08.931334   19417 logs.go:276] 2 containers: [db6dffb8b3ea 3d2d44fde489]
	I0819 11:51:08.931405   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 11:51:08.941655   19417 logs.go:276] 1 containers: [2cfe471024b0]
	I0819 11:51:08.941720   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 11:51:08.951820   19417 logs.go:276] 1 containers: [26ea981cbd0c]
	I0819 11:51:08.951886   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 11:51:08.962204   19417 logs.go:276] 1 containers: [b3325243ca65]
	I0819 11:51:08.962264   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 11:51:08.971796   19417 logs.go:276] 0 containers: []
	W0819 11:51:08.971808   19417 logs.go:278] No container was found matching "kindnet"
	I0819 11:51:08.971867   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 11:51:08.982648   19417 logs.go:276] 1 containers: [74c6a87ae3c8]
	I0819 11:51:08.982662   19417 logs.go:123] Gathering logs for coredns [db6dffb8b3ea] ...
	I0819 11:51:08.982668   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db6dffb8b3ea"
	I0819 11:51:08.994036   19417 logs.go:123] Gathering logs for kube-scheduler [2cfe471024b0] ...
	I0819 11:51:08.994048   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2cfe471024b0"
	I0819 11:51:09.008865   19417 logs.go:123] Gathering logs for kube-controller-manager [b3325243ca65] ...
	I0819 11:51:09.008876   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3325243ca65"
	I0819 11:51:09.026452   19417 logs.go:123] Gathering logs for kubelet ...
	I0819 11:51:09.026464   19417 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 11:51:09.065316   19417 logs.go:123] Gathering logs for dmesg ...
	I0819 11:51:09.065329   19417 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 11:51:09.070336   19417 logs.go:123] Gathering logs for describe nodes ...
	I0819 11:51:09.070345   19417 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 11:51:09.105328   19417 logs.go:123] Gathering logs for kube-apiserver [45f3dc60aedc] ...
	I0819 11:51:09.105339   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45f3dc60aedc"
	I0819 11:51:09.124385   19417 logs.go:123] Gathering logs for etcd [4122edd3dc51] ...
	I0819 11:51:09.124395   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4122edd3dc51"
	I0819 11:51:09.138899   19417 logs.go:123] Gathering logs for storage-provisioner [74c6a87ae3c8] ...
	I0819 11:51:09.138912   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74c6a87ae3c8"
	I0819 11:51:09.150493   19417 logs.go:123] Gathering logs for Docker ...
	I0819 11:51:09.150504   19417 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 11:51:09.174014   19417 logs.go:123] Gathering logs for container status ...
	I0819 11:51:09.174026   19417 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 11:51:09.185390   19417 logs.go:123] Gathering logs for coredns [3d2d44fde489] ...
	I0819 11:51:09.185404   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3d2d44fde489"
	I0819 11:51:09.197240   19417 logs.go:123] Gathering logs for kube-proxy [26ea981cbd0c] ...
	I0819 11:51:09.197254   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 26ea981cbd0c"
	I0819 11:51:11.711698   19417 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:51:16.368181   19545 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:51:16.368597   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 11:51:16.408193   19545 logs.go:276] 2 containers: [af251a7e4dc2 04973e14da79]
	I0819 11:51:16.408335   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 11:51:16.434724   19545 logs.go:276] 2 containers: [605f9b171d46 07703ddc91e4]
	I0819 11:51:16.434841   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 11:51:16.450189   19545 logs.go:276] 1 containers: [088a7d76a3fd]
	I0819 11:51:16.450258   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 11:51:16.464441   19545 logs.go:276] 2 containers: [bdb8b0d63638 7a7ed811dead]
	I0819 11:51:16.464516   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 11:51:16.474848   19545 logs.go:276] 1 containers: [03dd95625e48]
	I0819 11:51:16.474914   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 11:51:16.485901   19545 logs.go:276] 2 containers: [d9195d59990e e935629bad41]
	I0819 11:51:16.485972   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 11:51:16.496461   19545 logs.go:276] 0 containers: []
	W0819 11:51:16.496474   19545 logs.go:278] No container was found matching "kindnet"
	I0819 11:51:16.496530   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 11:51:16.508160   19545 logs.go:276] 2 containers: [fd0be420d5f9 02af235bd51f]
	I0819 11:51:16.508178   19545 logs.go:123] Gathering logs for Docker ...
	I0819 11:51:16.508184   19545 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 11:51:16.531730   19545 logs.go:123] Gathering logs for container status ...
	I0819 11:51:16.531740   19545 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 11:51:16.543408   19545 logs.go:123] Gathering logs for kube-apiserver [af251a7e4dc2] ...
	I0819 11:51:16.543418   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af251a7e4dc2"
	I0819 11:51:16.557235   19545 logs.go:123] Gathering logs for kube-scheduler [bdb8b0d63638] ...
	I0819 11:51:16.557246   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bdb8b0d63638"
	I0819 11:51:16.573582   19545 logs.go:123] Gathering logs for kube-scheduler [7a7ed811dead] ...
	I0819 11:51:16.573593   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a7ed811dead"
	I0819 11:51:16.589556   19545 logs.go:123] Gathering logs for storage-provisioner [02af235bd51f] ...
	I0819 11:51:16.589571   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 02af235bd51f"
	I0819 11:51:16.602233   19545 logs.go:123] Gathering logs for kubelet ...
	I0819 11:51:16.602244   19545 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 11:51:16.640470   19545 logs.go:123] Gathering logs for etcd [07703ddc91e4] ...
	I0819 11:51:16.640482   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 07703ddc91e4"
	I0819 11:51:16.655082   19545 logs.go:123] Gathering logs for kube-controller-manager [d9195d59990e] ...
	I0819 11:51:16.655094   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9195d59990e"
	I0819 11:51:16.673773   19545 logs.go:123] Gathering logs for dmesg ...
	I0819 11:51:16.673783   19545 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 11:51:16.678583   19545 logs.go:123] Gathering logs for describe nodes ...
	I0819 11:51:16.678589   19545 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 11:51:16.715001   19545 logs.go:123] Gathering logs for kube-apiserver [04973e14da79] ...
	I0819 11:51:16.715011   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 04973e14da79"
	I0819 11:51:16.741865   19545 logs.go:123] Gathering logs for coredns [088a7d76a3fd] ...
	I0819 11:51:16.741878   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 088a7d76a3fd"
	I0819 11:51:16.754530   19545 logs.go:123] Gathering logs for etcd [605f9b171d46] ...
	I0819 11:51:16.754546   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 605f9b171d46"
	I0819 11:51:16.769966   19545 logs.go:123] Gathering logs for kube-proxy [03dd95625e48] ...
	I0819 11:51:16.769978   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03dd95625e48"
	I0819 11:51:16.783012   19545 logs.go:123] Gathering logs for kube-controller-manager [e935629bad41] ...
	I0819 11:51:16.783023   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e935629bad41"
	I0819 11:51:16.798315   19545 logs.go:123] Gathering logs for storage-provisioner [fd0be420d5f9] ...
	I0819 11:51:16.798332   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd0be420d5f9"
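
Each "Run:" line above is executed inside the guest VM over SSH (that is what ssh_runner.go:195 does). A rough illustration using golang.org/x/crypto/ssh — the forwarded address and key path here are assumptions for the sketch, not values taken from this run; "docker" is minikube's default guest user:

```go
package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	// Assumed key location; the harness provisions a per-machine key pair.
	key, err := os.ReadFile(os.Getenv("HOME") + "/.minikube/machines/minikube/id_rsa")
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		panic(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a throwaway test VM
	}
	client, err := ssh.Dial("tcp", "127.0.0.1:50000", cfg) // forwarded guest SSH port (assumed)
	if err != nil {
		panic(err)
	}
	defer client.Close()

	sess, err := client.NewSession()
	if err != nil {
		panic(err)
	}
	defer sess.Close()

	// One of the exact commands from the transcript.
	out, _ := sess.CombinedOutput(`docker ps -a --filter=name=k8s_etcd --format={{.ID}}`)
	fmt.Print(string(out))
}
```
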
	I0819 11:51:16.712929   19417 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:51:16.713055   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 11:51:16.725330   19417 logs.go:276] 1 containers: [45f3dc60aedc]
	I0819 11:51:16.725407   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 11:51:16.737375   19417 logs.go:276] 1 containers: [4122edd3dc51]
	I0819 11:51:16.737455   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 11:51:16.749350   19417 logs.go:276] 2 containers: [db6dffb8b3ea 3d2d44fde489]
	I0819 11:51:16.749419   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 11:51:16.760832   19417 logs.go:276] 1 containers: [2cfe471024b0]
	I0819 11:51:16.760900   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 11:51:16.773079   19417 logs.go:276] 1 containers: [26ea981cbd0c]
	I0819 11:51:16.773150   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 11:51:16.785782   19417 logs.go:276] 1 containers: [b3325243ca65]
	I0819 11:51:16.785855   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 11:51:16.800270   19417 logs.go:276] 0 containers: []
	W0819 11:51:16.800285   19417 logs.go:278] No container was found matching "kindnet"
	I0819 11:51:16.800349   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 11:51:16.813726   19417 logs.go:276] 1 containers: [74c6a87ae3c8]
	I0819 11:51:16.813742   19417 logs.go:123] Gathering logs for etcd [4122edd3dc51] ...
	I0819 11:51:16.813748   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4122edd3dc51"
	I0819 11:51:16.828725   19417 logs.go:123] Gathering logs for coredns [3d2d44fde489] ...
	I0819 11:51:16.828740   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3d2d44fde489"
	I0819 11:51:16.845483   19417 logs.go:123] Gathering logs for kube-proxy [26ea981cbd0c] ...
	I0819 11:51:16.845497   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 26ea981cbd0c"
	I0819 11:51:16.856966   19417 logs.go:123] Gathering logs for storage-provisioner [74c6a87ae3c8] ...
	I0819 11:51:16.856978   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74c6a87ae3c8"
	I0819 11:51:16.868484   19417 logs.go:123] Gathering logs for dmesg ...
	I0819 11:51:16.868499   19417 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 11:51:16.873226   19417 logs.go:123] Gathering logs for describe nodes ...
	I0819 11:51:16.874078   19417 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 11:51:16.908730   19417 logs.go:123] Gathering logs for kube-apiserver [45f3dc60aedc] ...
	I0819 11:51:16.908744   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45f3dc60aedc"
	I0819 11:51:16.923904   19417 logs.go:123] Gathering logs for coredns [db6dffb8b3ea] ...
	I0819 11:51:16.923917   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db6dffb8b3ea"
	I0819 11:51:16.935890   19417 logs.go:123] Gathering logs for kube-scheduler [2cfe471024b0] ...
	I0819 11:51:16.935905   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2cfe471024b0"
	I0819 11:51:16.950785   19417 logs.go:123] Gathering logs for kube-controller-manager [b3325243ca65] ...
	I0819 11:51:16.950796   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3325243ca65"
	I0819 11:51:16.969022   19417 logs.go:123] Gathering logs for Docker ...
	I0819 11:51:16.969032   19417 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 11:51:16.993085   19417 logs.go:123] Gathering logs for container status ...
	I0819 11:51:16.993102   19417 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 11:51:17.004979   19417 logs.go:123] Gathering logs for kubelet ...
	I0819 11:51:17.004993   19417 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 11:51:19.318885   19545 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:51:19.544176   19417 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:51:24.321152   19545 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:51:24.321346   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 11:51:24.341525   19545 logs.go:276] 2 containers: [af251a7e4dc2 04973e14da79]
	I0819 11:51:24.341627   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 11:51:24.357058   19545 logs.go:276] 2 containers: [605f9b171d46 07703ddc91e4]
	I0819 11:51:24.357135   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 11:51:24.369525   19545 logs.go:276] 1 containers: [088a7d76a3fd]
	I0819 11:51:24.369590   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 11:51:24.380966   19545 logs.go:276] 2 containers: [bdb8b0d63638 7a7ed811dead]
	I0819 11:51:24.381036   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 11:51:24.391395   19545 logs.go:276] 1 containers: [03dd95625e48]
	I0819 11:51:24.391461   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 11:51:24.402432   19545 logs.go:276] 2 containers: [d9195d59990e e935629bad41]
	I0819 11:51:24.402494   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 11:51:24.412407   19545 logs.go:276] 0 containers: []
	W0819 11:51:24.412419   19545 logs.go:278] No container was found matching "kindnet"
	I0819 11:51:24.412483   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 11:51:24.422783   19545 logs.go:276] 2 containers: [fd0be420d5f9 02af235bd51f]
	I0819 11:51:24.422799   19545 logs.go:123] Gathering logs for storage-provisioner [fd0be420d5f9] ...
	I0819 11:51:24.422805   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd0be420d5f9"
	I0819 11:51:24.435060   19545 logs.go:123] Gathering logs for coredns [088a7d76a3fd] ...
	I0819 11:51:24.435071   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 088a7d76a3fd"
	I0819 11:51:24.446641   19545 logs.go:123] Gathering logs for describe nodes ...
	I0819 11:51:24.446653   19545 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 11:51:24.483128   19545 logs.go:123] Gathering logs for etcd [07703ddc91e4] ...
	I0819 11:51:24.483140   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 07703ddc91e4"
	I0819 11:51:24.497835   19545 logs.go:123] Gathering logs for kube-scheduler [7a7ed811dead] ...
	I0819 11:51:24.497844   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a7ed811dead"
	I0819 11:51:24.512641   19545 logs.go:123] Gathering logs for kube-controller-manager [d9195d59990e] ...
	I0819 11:51:24.512651   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9195d59990e"
	I0819 11:51:24.530703   19545 logs.go:123] Gathering logs for kube-controller-manager [e935629bad41] ...
	I0819 11:51:24.530714   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e935629bad41"
	I0819 11:51:24.545265   19545 logs.go:123] Gathering logs for dmesg ...
	I0819 11:51:24.545277   19545 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 11:51:24.549994   19545 logs.go:123] Gathering logs for kube-proxy [03dd95625e48] ...
	I0819 11:51:24.550004   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03dd95625e48"
	I0819 11:51:24.566514   19545 logs.go:123] Gathering logs for storage-provisioner [02af235bd51f] ...
	I0819 11:51:24.566527   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 02af235bd51f"
	I0819 11:51:24.578686   19545 logs.go:123] Gathering logs for kube-apiserver [af251a7e4dc2] ...
	I0819 11:51:24.578699   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af251a7e4dc2"
	I0819 11:51:24.597525   19545 logs.go:123] Gathering logs for kube-apiserver [04973e14da79] ...
	I0819 11:51:24.597539   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 04973e14da79"
	I0819 11:51:24.624731   19545 logs.go:123] Gathering logs for etcd [605f9b171d46] ...
	I0819 11:51:24.624745   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 605f9b171d46"
	I0819 11:51:24.639946   19545 logs.go:123] Gathering logs for kube-scheduler [bdb8b0d63638] ...
	I0819 11:51:24.639954   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bdb8b0d63638"
	I0819 11:51:24.652826   19545 logs.go:123] Gathering logs for Docker ...
	I0819 11:51:24.652837   19545 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 11:51:24.678227   19545 logs.go:123] Gathering logs for container status ...
	I0819 11:51:24.678245   19545 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 11:51:24.691393   19545 logs.go:123] Gathering logs for kubelet ...
	I0819 11:51:24.691410   19545 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 11:51:27.234801   19545 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:51:24.546434   19417 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:51:24.546519   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 11:51:24.557760   19417 logs.go:276] 1 containers: [45f3dc60aedc]
	I0819 11:51:24.557834   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 11:51:24.568996   19417 logs.go:276] 1 containers: [4122edd3dc51]
	I0819 11:51:24.569098   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 11:51:24.581301   19417 logs.go:276] 4 containers: [ffe7423d8fb8 eb0595d1949c db6dffb8b3ea 3d2d44fde489]
	I0819 11:51:24.581378   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 11:51:24.592929   19417 logs.go:276] 1 containers: [2cfe471024b0]
	I0819 11:51:24.592998   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 11:51:24.604935   19417 logs.go:276] 1 containers: [26ea981cbd0c]
	I0819 11:51:24.605007   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 11:51:24.616093   19417 logs.go:276] 1 containers: [b3325243ca65]
	I0819 11:51:24.616160   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 11:51:24.627277   19417 logs.go:276] 0 containers: []
	W0819 11:51:24.627289   19417 logs.go:278] No container was found matching "kindnet"
	I0819 11:51:24.627351   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 11:51:24.639092   19417 logs.go:276] 1 containers: [74c6a87ae3c8]
	I0819 11:51:24.639113   19417 logs.go:123] Gathering logs for etcd [4122edd3dc51] ...
	I0819 11:51:24.639119   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4122edd3dc51"
	I0819 11:51:24.654877   19417 logs.go:123] Gathering logs for coredns [ffe7423d8fb8] ...
	I0819 11:51:24.654886   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ffe7423d8fb8"
	I0819 11:51:24.669268   19417 logs.go:123] Gathering logs for coredns [eb0595d1949c] ...
	I0819 11:51:24.669280   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eb0595d1949c"
	I0819 11:51:24.681534   19417 logs.go:123] Gathering logs for coredns [3d2d44fde489] ...
	I0819 11:51:24.681546   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3d2d44fde489"
	I0819 11:51:24.700563   19417 logs.go:123] Gathering logs for kube-scheduler [2cfe471024b0] ...
	I0819 11:51:24.700576   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2cfe471024b0"
	I0819 11:51:24.716396   19417 logs.go:123] Gathering logs for Docker ...
	I0819 11:51:24.716406   19417 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 11:51:24.743172   19417 logs.go:123] Gathering logs for dmesg ...
	I0819 11:51:24.743181   19417 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 11:51:24.747557   19417 logs.go:123] Gathering logs for kube-apiserver [45f3dc60aedc] ...
	I0819 11:51:24.747564   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45f3dc60aedc"
	I0819 11:51:24.761657   19417 logs.go:123] Gathering logs for kube-controller-manager [b3325243ca65] ...
	I0819 11:51:24.761668   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3325243ca65"
	I0819 11:51:24.778778   19417 logs.go:123] Gathering logs for container status ...
	I0819 11:51:24.778789   19417 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 11:51:24.790182   19417 logs.go:123] Gathering logs for describe nodes ...
	I0819 11:51:24.790193   19417 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 11:51:24.825443   19417 logs.go:123] Gathering logs for coredns [db6dffb8b3ea] ...
	I0819 11:51:24.825453   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db6dffb8b3ea"
	I0819 11:51:24.837242   19417 logs.go:123] Gathering logs for storage-provisioner [74c6a87ae3c8] ...
	I0819 11:51:24.837255   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74c6a87ae3c8"
	I0819 11:51:24.848461   19417 logs.go:123] Gathering logs for kubelet ...
	I0819 11:51:24.848473   19417 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 11:51:24.888907   19417 logs.go:123] Gathering logs for kube-proxy [26ea981cbd0c] ...
	I0819 11:51:24.888925   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 26ea981cbd0c"
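
One detail does change between iterations: at 11:51:24 process 19417 suddenly lists four coredns containers ([ffe7423d8fb8 eb0595d1949c db6dffb8b3ea 3d2d44fde489]) where every earlier pass listed two — CoreDNS pods are being recreated even though the apiserver never turns healthy. A throwaway filter (editor's sketch) for pulling those container-count lines out of a transcript like this one, to watch the churn:

```go
package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
)

// Matches lines such as: logs.go:276] 4 containers: [ffe7423d8fb8 ...]
var countLine = regexp.MustCompile(`logs\.go:276\] (\d+) containers: \[([^\]]*)\]`)

func main() {
	sc := bufio.NewScanner(os.Stdin)
	for sc.Scan() {
		if m := countLine.FindStringSubmatch(sc.Text()); m != nil {
			fmt.Printf("count=%s ids=[%s]\n", m[1], m[2])
		}
	}
}
```
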
	I0819 11:51:27.403228   19417 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:51:32.237354   19545 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:51:32.237611   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 11:51:32.261161   19545 logs.go:276] 2 containers: [af251a7e4dc2 04973e14da79]
	I0819 11:51:32.261256   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 11:51:32.276380   19545 logs.go:276] 2 containers: [605f9b171d46 07703ddc91e4]
	I0819 11:51:32.276467   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 11:51:32.288439   19545 logs.go:276] 1 containers: [088a7d76a3fd]
	I0819 11:51:32.288514   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 11:51:32.299172   19545 logs.go:276] 2 containers: [bdb8b0d63638 7a7ed811dead]
	I0819 11:51:32.299252   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 11:51:32.309569   19545 logs.go:276] 1 containers: [03dd95625e48]
	I0819 11:51:32.309636   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 11:51:32.319832   19545 logs.go:276] 2 containers: [d9195d59990e e935629bad41]
	I0819 11:51:32.319904   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 11:51:32.329711   19545 logs.go:276] 0 containers: []
	W0819 11:51:32.329722   19545 logs.go:278] No container was found matching "kindnet"
	I0819 11:51:32.329781   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 11:51:32.340057   19545 logs.go:276] 2 containers: [fd0be420d5f9 02af235bd51f]
	I0819 11:51:32.340076   19545 logs.go:123] Gathering logs for describe nodes ...
	I0819 11:51:32.340082   19545 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 11:51:32.374724   19545 logs.go:123] Gathering logs for etcd [605f9b171d46] ...
	I0819 11:51:32.374735   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 605f9b171d46"
	I0819 11:51:32.388640   19545 logs.go:123] Gathering logs for kubelet ...
	I0819 11:51:32.388655   19545 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 11:51:32.426779   19545 logs.go:123] Gathering logs for dmesg ...
	I0819 11:51:32.426795   19545 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 11:51:32.431318   19545 logs.go:123] Gathering logs for kube-scheduler [bdb8b0d63638] ...
	I0819 11:51:32.431331   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bdb8b0d63638"
	I0819 11:51:32.444323   19545 logs.go:123] Gathering logs for storage-provisioner [fd0be420d5f9] ...
	I0819 11:51:32.444334   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd0be420d5f9"
	I0819 11:51:32.457021   19545 logs.go:123] Gathering logs for Docker ...
	I0819 11:51:32.457036   19545 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 11:51:32.481030   19545 logs.go:123] Gathering logs for kube-apiserver [af251a7e4dc2] ...
	I0819 11:51:32.481049   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af251a7e4dc2"
	I0819 11:51:32.495838   19545 logs.go:123] Gathering logs for etcd [07703ddc91e4] ...
	I0819 11:51:32.495857   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 07703ddc91e4"
	I0819 11:51:32.511696   19545 logs.go:123] Gathering logs for coredns [088a7d76a3fd] ...
	I0819 11:51:32.511705   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 088a7d76a3fd"
	I0819 11:51:32.523983   19545 logs.go:123] Gathering logs for kube-scheduler [7a7ed811dead] ...
	I0819 11:51:32.523996   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a7ed811dead"
	I0819 11:51:32.544629   19545 logs.go:123] Gathering logs for storage-provisioner [02af235bd51f] ...
	I0819 11:51:32.544644   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 02af235bd51f"
	I0819 11:51:32.557296   19545 logs.go:123] Gathering logs for container status ...
	I0819 11:51:32.557307   19545 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 11:51:32.569969   19545 logs.go:123] Gathering logs for kube-apiserver [04973e14da79] ...
	I0819 11:51:32.569982   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 04973e14da79"
	I0819 11:51:32.600740   19545 logs.go:123] Gathering logs for kube-proxy [03dd95625e48] ...
	I0819 11:51:32.600759   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03dd95625e48"
	I0819 11:51:32.613685   19545 logs.go:123] Gathering logs for kube-controller-manager [d9195d59990e] ...
	I0819 11:51:32.613697   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9195d59990e"
	I0819 11:51:32.638663   19545 logs.go:123] Gathering logs for kube-controller-manager [e935629bad41] ...
	I0819 11:51:32.638673   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e935629bad41"
	I0819 11:51:32.405704   19417 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:51:32.405830   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 11:51:32.416998   19417 logs.go:276] 1 containers: [45f3dc60aedc]
	I0819 11:51:32.417063   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 11:51:32.431447   19417 logs.go:276] 1 containers: [4122edd3dc51]
	I0819 11:51:32.431514   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 11:51:32.449651   19417 logs.go:276] 4 containers: [ffe7423d8fb8 eb0595d1949c db6dffb8b3ea 3d2d44fde489]
	I0819 11:51:32.449725   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 11:51:32.461324   19417 logs.go:276] 1 containers: [2cfe471024b0]
	I0819 11:51:32.461443   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 11:51:32.472556   19417 logs.go:276] 1 containers: [26ea981cbd0c]
	I0819 11:51:32.472623   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 11:51:32.484992   19417 logs.go:276] 1 containers: [b3325243ca65]
	I0819 11:51:32.485062   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 11:51:32.496138   19417 logs.go:276] 0 containers: []
	W0819 11:51:32.496149   19417 logs.go:278] No container was found matching "kindnet"
	I0819 11:51:32.496210   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 11:51:32.508522   19417 logs.go:276] 1 containers: [74c6a87ae3c8]
	I0819 11:51:32.508541   19417 logs.go:123] Gathering logs for coredns [eb0595d1949c] ...
	I0819 11:51:32.508546   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eb0595d1949c"
	I0819 11:51:32.521709   19417 logs.go:123] Gathering logs for coredns [ffe7423d8fb8] ...
	I0819 11:51:32.521720   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ffe7423d8fb8"
	I0819 11:51:32.534020   19417 logs.go:123] Gathering logs for etcd [4122edd3dc51] ...
	I0819 11:51:32.534034   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4122edd3dc51"
	I0819 11:51:32.548513   19417 logs.go:123] Gathering logs for kubelet ...
	I0819 11:51:32.548524   19417 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 11:51:32.590258   19417 logs.go:123] Gathering logs for coredns [db6dffb8b3ea] ...
	I0819 11:51:32.590274   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db6dffb8b3ea"
	I0819 11:51:32.607250   19417 logs.go:123] Gathering logs for coredns [3d2d44fde489] ...
	I0819 11:51:32.607263   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3d2d44fde489"
	I0819 11:51:32.620108   19417 logs.go:123] Gathering logs for kube-scheduler [2cfe471024b0] ...
	I0819 11:51:32.620128   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2cfe471024b0"
	I0819 11:51:32.636185   19417 logs.go:123] Gathering logs for storage-provisioner [74c6a87ae3c8] ...
	I0819 11:51:32.636195   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74c6a87ae3c8"
	I0819 11:51:32.649663   19417 logs.go:123] Gathering logs for Docker ...
	I0819 11:51:32.649674   19417 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 11:51:32.673862   19417 logs.go:123] Gathering logs for kube-apiserver [45f3dc60aedc] ...
	I0819 11:51:32.673875   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45f3dc60aedc"
	I0819 11:51:32.689662   19417 logs.go:123] Gathering logs for describe nodes ...
	I0819 11:51:32.689672   19417 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 11:51:32.726526   19417 logs.go:123] Gathering logs for kube-proxy [26ea981cbd0c] ...
	I0819 11:51:32.726536   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 26ea981cbd0c"
	I0819 11:51:32.739033   19417 logs.go:123] Gathering logs for kube-controller-manager [b3325243ca65] ...
	I0819 11:51:32.739044   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3325243ca65"
	I0819 11:51:32.761074   19417 logs.go:123] Gathering logs for container status ...
	I0819 11:51:32.761088   19417 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 11:51:32.774173   19417 logs.go:123] Gathering logs for dmesg ...
	I0819 11:51:32.774185   19417 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 11:51:35.156569   19545 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:51:35.280740   19417 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:51:40.158901   19545 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:51:40.159278   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 11:51:40.192851   19545 logs.go:276] 2 containers: [af251a7e4dc2 04973e14da79]
	I0819 11:51:40.192977   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 11:51:40.213641   19545 logs.go:276] 2 containers: [605f9b171d46 07703ddc91e4]
	I0819 11:51:40.213756   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 11:51:40.228046   19545 logs.go:276] 1 containers: [088a7d76a3fd]
	I0819 11:51:40.228123   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 11:51:40.244136   19545 logs.go:276] 2 containers: [bdb8b0d63638 7a7ed811dead]
	I0819 11:51:40.244208   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 11:51:40.254451   19545 logs.go:276] 1 containers: [03dd95625e48]
	I0819 11:51:40.254516   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 11:51:40.265858   19545 logs.go:276] 2 containers: [d9195d59990e e935629bad41]
	I0819 11:51:40.265939   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 11:51:40.276530   19545 logs.go:276] 0 containers: []
	W0819 11:51:40.276541   19545 logs.go:278] No container was found matching "kindnet"
	I0819 11:51:40.276597   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 11:51:40.287813   19545 logs.go:276] 2 containers: [fd0be420d5f9 02af235bd51f]
	I0819 11:51:40.287831   19545 logs.go:123] Gathering logs for kube-apiserver [af251a7e4dc2] ...
	I0819 11:51:40.287837   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af251a7e4dc2"
	I0819 11:51:40.304249   19545 logs.go:123] Gathering logs for coredns [088a7d76a3fd] ...
	I0819 11:51:40.304265   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 088a7d76a3fd"
	I0819 11:51:40.316519   19545 logs.go:123] Gathering logs for container status ...
	I0819 11:51:40.316532   19545 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 11:51:40.329225   19545 logs.go:123] Gathering logs for describe nodes ...
	I0819 11:51:40.329243   19545 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 11:51:40.368344   19545 logs.go:123] Gathering logs for kube-apiserver [04973e14da79] ...
	I0819 11:51:40.368354   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 04973e14da79"
	I0819 11:51:40.394984   19545 logs.go:123] Gathering logs for kube-proxy [03dd95625e48] ...
	I0819 11:51:40.394999   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03dd95625e48"
	I0819 11:51:40.407734   19545 logs.go:123] Gathering logs for kubelet ...
	I0819 11:51:40.407746   19545 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 11:51:40.448054   19545 logs.go:123] Gathering logs for etcd [605f9b171d46] ...
	I0819 11:51:40.448067   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 605f9b171d46"
	I0819 11:51:40.463126   19545 logs.go:123] Gathering logs for etcd [07703ddc91e4] ...
	I0819 11:51:40.463138   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 07703ddc91e4"
	I0819 11:51:40.479116   19545 logs.go:123] Gathering logs for kube-scheduler [7a7ed811dead] ...
	I0819 11:51:40.479163   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a7ed811dead"
	I0819 11:51:40.494883   19545 logs.go:123] Gathering logs for kube-controller-manager [d9195d59990e] ...
	I0819 11:51:40.494896   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9195d59990e"
	I0819 11:51:40.513813   19545 logs.go:123] Gathering logs for kube-controller-manager [e935629bad41] ...
	I0819 11:51:40.513826   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e935629bad41"
	I0819 11:51:40.538142   19545 logs.go:123] Gathering logs for Docker ...
	I0819 11:51:40.538155   19545 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 11:51:40.562049   19545 logs.go:123] Gathering logs for dmesg ...
	I0819 11:51:40.562057   19545 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 11:51:40.566975   19545 logs.go:123] Gathering logs for kube-scheduler [bdb8b0d63638] ...
	I0819 11:51:40.566987   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bdb8b0d63638"
	I0819 11:51:40.580241   19545 logs.go:123] Gathering logs for storage-provisioner [fd0be420d5f9] ...
	I0819 11:51:40.580253   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd0be420d5f9"
	I0819 11:51:40.594806   19545 logs.go:123] Gathering logs for storage-provisioner [02af235bd51f] ...
	I0819 11:51:40.594819   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 02af235bd51f"
	I0819 11:51:40.282877   19417 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:51:40.282968   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 11:51:40.294357   19417 logs.go:276] 1 containers: [45f3dc60aedc]
	I0819 11:51:40.294428   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 11:51:40.305715   19417 logs.go:276] 1 containers: [4122edd3dc51]
	I0819 11:51:40.305779   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 11:51:40.317445   19417 logs.go:276] 4 containers: [ffe7423d8fb8 eb0595d1949c db6dffb8b3ea 3d2d44fde489]
	I0819 11:51:40.317519   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 11:51:40.332740   19417 logs.go:276] 1 containers: [2cfe471024b0]
	I0819 11:51:40.332816   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 11:51:40.344367   19417 logs.go:276] 1 containers: [26ea981cbd0c]
	I0819 11:51:40.344437   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 11:51:40.355559   19417 logs.go:276] 1 containers: [b3325243ca65]
	I0819 11:51:40.355627   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 11:51:40.366437   19417 logs.go:276] 0 containers: []
	W0819 11:51:40.366449   19417 logs.go:278] No container was found matching "kindnet"
	I0819 11:51:40.366509   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 11:51:40.382283   19417 logs.go:276] 1 containers: [74c6a87ae3c8]
	I0819 11:51:40.382304   19417 logs.go:123] Gathering logs for kube-apiserver [45f3dc60aedc] ...
	I0819 11:51:40.382310   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45f3dc60aedc"
	I0819 11:51:40.397748   19417 logs.go:123] Gathering logs for kube-proxy [26ea981cbd0c] ...
	I0819 11:51:40.397760   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 26ea981cbd0c"
	I0819 11:51:40.411071   19417 logs.go:123] Gathering logs for kubelet ...
	I0819 11:51:40.411083   19417 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 11:51:40.450293   19417 logs.go:123] Gathering logs for describe nodes ...
	I0819 11:51:40.450307   19417 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 11:51:40.487134   19417 logs.go:123] Gathering logs for etcd [4122edd3dc51] ...
	I0819 11:51:40.487146   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4122edd3dc51"
	I0819 11:51:40.502579   19417 logs.go:123] Gathering logs for coredns [ffe7423d8fb8] ...
	I0819 11:51:40.502591   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ffe7423d8fb8"
	I0819 11:51:40.516666   19417 logs.go:123] Gathering logs for coredns [eb0595d1949c] ...
	I0819 11:51:40.516677   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eb0595d1949c"
	I0819 11:51:40.529672   19417 logs.go:123] Gathering logs for storage-provisioner [74c6a87ae3c8] ...
	I0819 11:51:40.529686   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74c6a87ae3c8"
	I0819 11:51:40.542346   19417 logs.go:123] Gathering logs for container status ...
	I0819 11:51:40.542357   19417 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 11:51:40.555985   19417 logs.go:123] Gathering logs for dmesg ...
	I0819 11:51:40.555997   19417 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 11:51:40.561652   19417 logs.go:123] Gathering logs for coredns [3d2d44fde489] ...
	I0819 11:51:40.561661   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3d2d44fde489"
	I0819 11:51:40.574524   19417 logs.go:123] Gathering logs for kube-controller-manager [b3325243ca65] ...
	I0819 11:51:40.574537   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3325243ca65"
	I0819 11:51:40.599238   19417 logs.go:123] Gathering logs for Docker ...
	I0819 11:51:40.599250   19417 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 11:51:40.624261   19417 logs.go:123] Gathering logs for coredns [db6dffb8b3ea] ...
	I0819 11:51:40.624272   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db6dffb8b3ea"
	I0819 11:51:40.640002   19417 logs.go:123] Gathering logs for kube-scheduler [2cfe471024b0] ...
	I0819 11:51:40.640013   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2cfe471024b0"
	I0819 11:51:43.157874   19417 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:51:43.113070   19545 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:51:48.159944   19417 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:51:48.160087   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 11:51:48.176243   19417 logs.go:276] 1 containers: [45f3dc60aedc]
	I0819 11:51:48.176325   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 11:51:48.190781   19417 logs.go:276] 1 containers: [4122edd3dc51]
	I0819 11:51:48.190856   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 11:51:48.202281   19417 logs.go:276] 4 containers: [ffe7423d8fb8 eb0595d1949c db6dffb8b3ea 3d2d44fde489]
	I0819 11:51:48.202353   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 11:51:48.213849   19417 logs.go:276] 1 containers: [2cfe471024b0]
	I0819 11:51:48.213920   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 11:51:48.225224   19417 logs.go:276] 1 containers: [26ea981cbd0c]
	I0819 11:51:48.225296   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 11:51:48.236609   19417 logs.go:276] 1 containers: [b3325243ca65]
	I0819 11:51:48.236679   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 11:51:48.247967   19417 logs.go:276] 0 containers: []
	W0819 11:51:48.247979   19417 logs.go:278] No container was found matching "kindnet"
	I0819 11:51:48.248035   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 11:51:48.259501   19417 logs.go:276] 1 containers: [74c6a87ae3c8]
	I0819 11:51:48.259521   19417 logs.go:123] Gathering logs for kube-controller-manager [b3325243ca65] ...
	I0819 11:51:48.259527   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3325243ca65"
	I0819 11:51:48.278429   19417 logs.go:123] Gathering logs for storage-provisioner [74c6a87ae3c8] ...
	I0819 11:51:48.278444   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74c6a87ae3c8"
	I0819 11:51:48.291479   19417 logs.go:123] Gathering logs for kube-apiserver [45f3dc60aedc] ...
	I0819 11:51:48.291492   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45f3dc60aedc"
	I0819 11:51:48.312126   19417 logs.go:123] Gathering logs for coredns [ffe7423d8fb8] ...
	I0819 11:51:48.312136   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ffe7423d8fb8"
	I0819 11:51:48.324909   19417 logs.go:123] Gathering logs for coredns [eb0595d1949c] ...
	I0819 11:51:48.324921   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eb0595d1949c"
	I0819 11:51:48.337514   19417 logs.go:123] Gathering logs for coredns [db6dffb8b3ea] ...
	I0819 11:51:48.337525   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db6dffb8b3ea"
	I0819 11:51:48.351718   19417 logs.go:123] Gathering logs for coredns [3d2d44fde489] ...
	I0819 11:51:48.351729   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3d2d44fde489"
	I0819 11:51:48.364174   19417 logs.go:123] Gathering logs for kube-scheduler [2cfe471024b0] ...
	I0819 11:51:48.364185   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2cfe471024b0"
	I0819 11:51:48.381238   19417 logs.go:123] Gathering logs for kube-proxy [26ea981cbd0c] ...
	I0819 11:51:48.381250   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 26ea981cbd0c"
	I0819 11:51:48.394840   19417 logs.go:123] Gathering logs for kubelet ...
	I0819 11:51:48.394850   19417 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 11:51:48.433356   19417 logs.go:123] Gathering logs for Docker ...
	I0819 11:51:48.433375   19417 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 11:51:48.458227   19417 logs.go:123] Gathering logs for describe nodes ...
	I0819 11:51:48.458248   19417 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 11:51:48.495401   19417 logs.go:123] Gathering logs for etcd [4122edd3dc51] ...
	I0819 11:51:48.495412   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4122edd3dc51"
	I0819 11:51:48.510979   19417 logs.go:123] Gathering logs for container status ...
	I0819 11:51:48.510990   19417 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 11:51:48.524773   19417 logs.go:123] Gathering logs for dmesg ...
	I0819 11:51:48.524785   19417 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 11:51:48.115293   19545 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:51:48.115656   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 11:51:48.147180   19545 logs.go:276] 2 containers: [af251a7e4dc2 04973e14da79]
	I0819 11:51:48.147317   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 11:51:48.166509   19545 logs.go:276] 2 containers: [605f9b171d46 07703ddc91e4]
	I0819 11:51:48.166602   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 11:51:48.182045   19545 logs.go:276] 1 containers: [088a7d76a3fd]
	I0819 11:51:48.182122   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 11:51:48.195305   19545 logs.go:276] 2 containers: [bdb8b0d63638 7a7ed811dead]
	I0819 11:51:48.195375   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 11:51:48.207739   19545 logs.go:276] 1 containers: [03dd95625e48]
	I0819 11:51:48.207812   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 11:51:48.219438   19545 logs.go:276] 2 containers: [d9195d59990e e935629bad41]
	I0819 11:51:48.219509   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 11:51:48.239916   19545 logs.go:276] 0 containers: []
	W0819 11:51:48.239928   19545 logs.go:278] No container was found matching "kindnet"
	I0819 11:51:48.239986   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 11:51:48.255584   19545 logs.go:276] 2 containers: [fd0be420d5f9 02af235bd51f]
	I0819 11:51:48.255603   19545 logs.go:123] Gathering logs for coredns [088a7d76a3fd] ...
	I0819 11:51:48.255609   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 088a7d76a3fd"
	I0819 11:51:48.268302   19545 logs.go:123] Gathering logs for kube-controller-manager [e935629bad41] ...
	I0819 11:51:48.268314   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e935629bad41"
	I0819 11:51:48.284169   19545 logs.go:123] Gathering logs for Docker ...
	I0819 11:51:48.284183   19545 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 11:51:48.308736   19545 logs.go:123] Gathering logs for kube-apiserver [04973e14da79] ...
	I0819 11:51:48.308751   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 04973e14da79"
	I0819 11:51:48.338381   19545 logs.go:123] Gathering logs for etcd [07703ddc91e4] ...
	I0819 11:51:48.338393   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 07703ddc91e4"
	I0819 11:51:48.357898   19545 logs.go:123] Gathering logs for etcd [605f9b171d46] ...
	I0819 11:51:48.357910   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 605f9b171d46"
	I0819 11:51:48.373147   19545 logs.go:123] Gathering logs for storage-provisioner [02af235bd51f] ...
	I0819 11:51:48.373158   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 02af235bd51f"
	I0819 11:51:48.385571   19545 logs.go:123] Gathering logs for kubelet ...
	I0819 11:51:48.385582   19545 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 11:51:48.424817   19545 logs.go:123] Gathering logs for kube-apiserver [af251a7e4dc2] ...
	I0819 11:51:48.424829   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af251a7e4dc2"
	I0819 11:51:48.447028   19545 logs.go:123] Gathering logs for kube-scheduler [7a7ed811dead] ...
	I0819 11:51:48.447040   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a7ed811dead"
	I0819 11:51:48.465875   19545 logs.go:123] Gathering logs for kube-proxy [03dd95625e48] ...
	I0819 11:51:48.465887   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03dd95625e48"
	I0819 11:51:48.478502   19545 logs.go:123] Gathering logs for kube-controller-manager [d9195d59990e] ...
	I0819 11:51:48.478515   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9195d59990e"
	I0819 11:51:48.496436   19545 logs.go:123] Gathering logs for dmesg ...
	I0819 11:51:48.496445   19545 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 11:51:48.501305   19545 logs.go:123] Gathering logs for describe nodes ...
	I0819 11:51:48.501316   19545 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 11:51:48.539190   19545 logs.go:123] Gathering logs for container status ...
	I0819 11:51:48.539202   19545 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 11:51:48.551438   19545 logs.go:123] Gathering logs for kube-scheduler [bdb8b0d63638] ...
	I0819 11:51:48.551452   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bdb8b0d63638"
	I0819 11:51:48.563799   19545 logs.go:123] Gathering logs for storage-provisioner [fd0be420d5f9] ...
	I0819 11:51:48.563810   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd0be420d5f9"
	I0819 11:51:51.076119   19545 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:51:51.032136   19417 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:51:56.078489   19545 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
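The cycle driving those sweeps is visible here: api_server.go polls https://10.0.2.15:8443/healthz, gives up when the client timeout elapses, gathers logs, and retries. To probe the same endpoint by hand (a sketch assuming a shell with a route to the guest at 10.0.2.15):

    # Manual healthz probe; -k skips verification of minikube's self-signed CA,
    # and --max-time bounds the wait the way the Go client's timeout does.
    curl -sk --max-time 5 https://10.0.2.15:8443/healthz || echo "apiserver not responding"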
	I0819 11:51:56.078656   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 11:51:56.112919   19545 logs.go:276] 2 containers: [af251a7e4dc2 04973e14da79]
	I0819 11:51:56.113004   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 11:51:56.129090   19545 logs.go:276] 2 containers: [605f9b171d46 07703ddc91e4]
	I0819 11:51:56.129159   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 11:51:56.148985   19545 logs.go:276] 1 containers: [088a7d76a3fd]
	I0819 11:51:56.149053   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 11:51:56.161392   19545 logs.go:276] 2 containers: [bdb8b0d63638 7a7ed811dead]
	I0819 11:51:56.161492   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 11:51:56.183322   19545 logs.go:276] 1 containers: [03dd95625e48]
	I0819 11:51:56.183386   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 11:51:56.199604   19545 logs.go:276] 2 containers: [d9195d59990e e935629bad41]
	I0819 11:51:56.199669   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 11:51:56.211044   19545 logs.go:276] 0 containers: []
	W0819 11:51:56.211054   19545 logs.go:278] No container was found matching "kindnet"
	I0819 11:51:56.211112   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 11:51:56.223509   19545 logs.go:276] 2 containers: [fd0be420d5f9 02af235bd51f]
	I0819 11:51:56.223528   19545 logs.go:123] Gathering logs for describe nodes ...
	I0819 11:51:56.223534   19545 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 11:51:56.261640   19545 logs.go:123] Gathering logs for etcd [605f9b171d46] ...
	I0819 11:51:56.261653   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 605f9b171d46"
	I0819 11:51:56.276884   19545 logs.go:123] Gathering logs for kube-scheduler [bdb8b0d63638] ...
	I0819 11:51:56.276896   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bdb8b0d63638"
	I0819 11:51:56.289298   19545 logs.go:123] Gathering logs for storage-provisioner [fd0be420d5f9] ...
	I0819 11:51:56.289310   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd0be420d5f9"
	I0819 11:51:56.303493   19545 logs.go:123] Gathering logs for storage-provisioner [02af235bd51f] ...
	I0819 11:51:56.303501   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 02af235bd51f"
	I0819 11:51:56.316128   19545 logs.go:123] Gathering logs for container status ...
	I0819 11:51:56.316144   19545 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 11:51:56.330564   19545 logs.go:123] Gathering logs for kubelet ...
	I0819 11:51:56.330576   19545 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 11:51:56.371520   19545 logs.go:123] Gathering logs for etcd [07703ddc91e4] ...
	I0819 11:51:56.371537   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 07703ddc91e4"
	I0819 11:51:56.388278   19545 logs.go:123] Gathering logs for kube-proxy [03dd95625e48] ...
	I0819 11:51:56.388292   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03dd95625e48"
	I0819 11:51:56.400386   19545 logs.go:123] Gathering logs for kube-controller-manager [d9195d59990e] ...
	I0819 11:51:56.400400   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9195d59990e"
	I0819 11:51:56.418680   19545 logs.go:123] Gathering logs for kube-controller-manager [e935629bad41] ...
	I0819 11:51:56.418698   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e935629bad41"
	I0819 11:51:56.434925   19545 logs.go:123] Gathering logs for dmesg ...
	I0819 11:51:56.434943   19545 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 11:51:56.439573   19545 logs.go:123] Gathering logs for kube-apiserver [af251a7e4dc2] ...
	I0819 11:51:56.439582   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af251a7e4dc2"
	I0819 11:51:56.454354   19545 logs.go:123] Gathering logs for kube-apiserver [04973e14da79] ...
	I0819 11:51:56.454366   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 04973e14da79"
	I0819 11:51:56.480747   19545 logs.go:123] Gathering logs for coredns [088a7d76a3fd] ...
	I0819 11:51:56.480766   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 088a7d76a3fd"
	I0819 11:51:56.492211   19545 logs.go:123] Gathering logs for kube-scheduler [7a7ed811dead] ...
	I0819 11:51:56.492223   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a7ed811dead"
	I0819 11:51:56.507653   19545 logs.go:123] Gathering logs for Docker ...
	I0819 11:51:56.507664   19545 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 11:51:56.034913   19417 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:51:56.035342   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 11:51:56.073346   19417 logs.go:276] 1 containers: [45f3dc60aedc]
	I0819 11:51:56.073474   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 11:51:56.093841   19417 logs.go:276] 1 containers: [4122edd3dc51]
	I0819 11:51:56.093909   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 11:51:56.109987   19417 logs.go:276] 4 containers: [ffe7423d8fb8 eb0595d1949c db6dffb8b3ea 3d2d44fde489]
	I0819 11:51:56.110052   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 11:51:56.124853   19417 logs.go:276] 1 containers: [2cfe471024b0]
	I0819 11:51:56.124924   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 11:51:56.142197   19417 logs.go:276] 1 containers: [26ea981cbd0c]
	I0819 11:51:56.142268   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 11:51:56.155546   19417 logs.go:276] 1 containers: [b3325243ca65]
	I0819 11:51:56.155611   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 11:51:56.176819   19417 logs.go:276] 0 containers: []
	W0819 11:51:56.176830   19417 logs.go:278] No container was found matching "kindnet"
	I0819 11:51:56.176888   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 11:51:56.188266   19417 logs.go:276] 1 containers: [74c6a87ae3c8]
	I0819 11:51:56.188283   19417 logs.go:123] Gathering logs for coredns [ffe7423d8fb8] ...
	I0819 11:51:56.188288   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ffe7423d8fb8"
	I0819 11:51:56.201699   19417 logs.go:123] Gathering logs for coredns [3d2d44fde489] ...
	I0819 11:51:56.201709   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3d2d44fde489"
	I0819 11:51:56.214028   19417 logs.go:123] Gathering logs for kube-scheduler [2cfe471024b0] ...
	I0819 11:51:56.214038   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2cfe471024b0"
	I0819 11:51:56.229353   19417 logs.go:123] Gathering logs for kube-proxy [26ea981cbd0c] ...
	I0819 11:51:56.229365   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 26ea981cbd0c"
	I0819 11:51:56.241987   19417 logs.go:123] Gathering logs for dmesg ...
	I0819 11:51:56.241997   19417 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 11:51:56.246765   19417 logs.go:123] Gathering logs for describe nodes ...
	I0819 11:51:56.246775   19417 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 11:51:56.286485   19417 logs.go:123] Gathering logs for etcd [4122edd3dc51] ...
	I0819 11:51:56.286498   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4122edd3dc51"
	I0819 11:51:56.302070   19417 logs.go:123] Gathering logs for Docker ...
	I0819 11:51:56.302081   19417 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 11:51:56.327269   19417 logs.go:123] Gathering logs for storage-provisioner [74c6a87ae3c8] ...
	I0819 11:51:56.327292   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74c6a87ae3c8"
	I0819 11:51:56.340684   19417 logs.go:123] Gathering logs for container status ...
	I0819 11:51:56.340698   19417 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 11:51:56.354771   19417 logs.go:123] Gathering logs for kubelet ...
	I0819 11:51:56.354785   19417 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 11:51:56.395607   19417 logs.go:123] Gathering logs for kube-apiserver [45f3dc60aedc] ...
	I0819 11:51:56.395623   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45f3dc60aedc"
	I0819 11:51:56.411426   19417 logs.go:123] Gathering logs for coredns [eb0595d1949c] ...
	I0819 11:51:56.411438   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eb0595d1949c"
	I0819 11:51:56.424708   19417 logs.go:123] Gathering logs for coredns [db6dffb8b3ea] ...
	I0819 11:51:56.424723   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db6dffb8b3ea"
	I0819 11:51:56.437710   19417 logs.go:123] Gathering logs for kube-controller-manager [b3325243ca65] ...
	I0819 11:51:56.437720   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3325243ca65"
	I0819 11:51:59.031486   19545 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:51:58.963353   19417 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:52:04.033212   19545 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:52:04.033325   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 11:52:04.048797   19545 logs.go:276] 2 containers: [af251a7e4dc2 04973e14da79]
	I0819 11:52:04.048871   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 11:52:04.060627   19545 logs.go:276] 2 containers: [605f9b171d46 07703ddc91e4]
	I0819 11:52:04.060701   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 11:52:04.073013   19545 logs.go:276] 1 containers: [088a7d76a3fd]
	I0819 11:52:04.073091   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 11:52:04.084720   19545 logs.go:276] 2 containers: [bdb8b0d63638 7a7ed811dead]
	I0819 11:52:04.084790   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 11:52:04.095825   19545 logs.go:276] 1 containers: [03dd95625e48]
	I0819 11:52:04.095896   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 11:52:04.111442   19545 logs.go:276] 2 containers: [d9195d59990e e935629bad41]
	I0819 11:52:04.111512   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 11:52:04.122987   19545 logs.go:276] 0 containers: []
	W0819 11:52:04.122999   19545 logs.go:278] No container was found matching "kindnet"
	I0819 11:52:04.123058   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 11:52:04.134055   19545 logs.go:276] 2 containers: [fd0be420d5f9 02af235bd51f]
	I0819 11:52:04.134072   19545 logs.go:123] Gathering logs for dmesg ...
	I0819 11:52:04.134078   19545 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 11:52:04.138716   19545 logs.go:123] Gathering logs for etcd [605f9b171d46] ...
	I0819 11:52:04.138721   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 605f9b171d46"
	I0819 11:52:04.154361   19545 logs.go:123] Gathering logs for etcd [07703ddc91e4] ...
	I0819 11:52:04.154369   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 07703ddc91e4"
	I0819 11:52:04.169271   19545 logs.go:123] Gathering logs for coredns [088a7d76a3fd] ...
	I0819 11:52:04.169282   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 088a7d76a3fd"
	I0819 11:52:04.183237   19545 logs.go:123] Gathering logs for kube-scheduler [7a7ed811dead] ...
	I0819 11:52:04.183250   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a7ed811dead"
	I0819 11:52:04.199112   19545 logs.go:123] Gathering logs for kube-proxy [03dd95625e48] ...
	I0819 11:52:04.199129   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03dd95625e48"
	I0819 11:52:04.212532   19545 logs.go:123] Gathering logs for Docker ...
	I0819 11:52:04.212544   19545 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 11:52:04.235239   19545 logs.go:123] Gathering logs for container status ...
	I0819 11:52:04.235248   19545 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 11:52:04.248487   19545 logs.go:123] Gathering logs for kubelet ...
	I0819 11:52:04.248503   19545 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 11:52:04.289896   19545 logs.go:123] Gathering logs for kube-apiserver [04973e14da79] ...
	I0819 11:52:04.289909   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 04973e14da79"
	I0819 11:52:04.323526   19545 logs.go:123] Gathering logs for kube-controller-manager [e935629bad41] ...
	I0819 11:52:04.323544   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e935629bad41"
	I0819 11:52:04.338910   19545 logs.go:123] Gathering logs for storage-provisioner [02af235bd51f] ...
	I0819 11:52:04.338924   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 02af235bd51f"
	I0819 11:52:04.350403   19545 logs.go:123] Gathering logs for describe nodes ...
	I0819 11:52:04.350415   19545 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 11:52:04.384901   19545 logs.go:123] Gathering logs for kube-apiserver [af251a7e4dc2] ...
	I0819 11:52:04.384911   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af251a7e4dc2"
	I0819 11:52:04.400534   19545 logs.go:123] Gathering logs for kube-scheduler [bdb8b0d63638] ...
	I0819 11:52:04.400544   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bdb8b0d63638"
	I0819 11:52:04.413023   19545 logs.go:123] Gathering logs for kube-controller-manager [d9195d59990e] ...
	I0819 11:52:04.413033   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9195d59990e"
	I0819 11:52:04.430539   19545 logs.go:123] Gathering logs for storage-provisioner [fd0be420d5f9] ...
	I0819 11:52:04.430550   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd0be420d5f9"
	I0819 11:52:06.942291   19545 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:52:03.965524   19417 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:52:03.965686   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 11:52:03.980277   19417 logs.go:276] 1 containers: [45f3dc60aedc]
	I0819 11:52:03.980356   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 11:52:03.991262   19417 logs.go:276] 1 containers: [4122edd3dc51]
	I0819 11:52:03.991335   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 11:52:04.002211   19417 logs.go:276] 4 containers: [ffe7423d8fb8 eb0595d1949c db6dffb8b3ea 3d2d44fde489]
	I0819 11:52:04.002284   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 11:52:04.012661   19417 logs.go:276] 1 containers: [2cfe471024b0]
	I0819 11:52:04.012726   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 11:52:04.023302   19417 logs.go:276] 1 containers: [26ea981cbd0c]
	I0819 11:52:04.023369   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 11:52:04.033614   19417 logs.go:276] 1 containers: [b3325243ca65]
	I0819 11:52:04.033644   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 11:52:04.044196   19417 logs.go:276] 0 containers: []
	W0819 11:52:04.044209   19417 logs.go:278] No container was found matching "kindnet"
	I0819 11:52:04.044271   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 11:52:04.055938   19417 logs.go:276] 1 containers: [74c6a87ae3c8]
	I0819 11:52:04.055955   19417 logs.go:123] Gathering logs for coredns [eb0595d1949c] ...
	I0819 11:52:04.055960   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eb0595d1949c"
	I0819 11:52:04.068344   19417 logs.go:123] Gathering logs for storage-provisioner [74c6a87ae3c8] ...
	I0819 11:52:04.068356   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74c6a87ae3c8"
	I0819 11:52:04.081341   19417 logs.go:123] Gathering logs for container status ...
	I0819 11:52:04.081353   19417 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 11:52:04.097396   19417 logs.go:123] Gathering logs for kubelet ...
	I0819 11:52:04.097407   19417 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 11:52:04.138236   19417 logs.go:123] Gathering logs for kube-scheduler [2cfe471024b0] ...
	I0819 11:52:04.138247   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2cfe471024b0"
	I0819 11:52:04.154141   19417 logs.go:123] Gathering logs for kube-apiserver [45f3dc60aedc] ...
	I0819 11:52:04.154154   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45f3dc60aedc"
	I0819 11:52:04.169829   19417 logs.go:123] Gathering logs for etcd [4122edd3dc51] ...
	I0819 11:52:04.169838   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4122edd3dc51"
	I0819 11:52:04.188200   19417 logs.go:123] Gathering logs for kube-proxy [26ea981cbd0c] ...
	I0819 11:52:04.188212   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 26ea981cbd0c"
	I0819 11:52:04.200894   19417 logs.go:123] Gathering logs for Docker ...
	I0819 11:52:04.200904   19417 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 11:52:04.227011   19417 logs.go:123] Gathering logs for dmesg ...
	I0819 11:52:04.227021   19417 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 11:52:04.232386   19417 logs.go:123] Gathering logs for describe nodes ...
	I0819 11:52:04.232393   19417 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 11:52:04.270538   19417 logs.go:123] Gathering logs for coredns [3d2d44fde489] ...
	I0819 11:52:04.270549   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3d2d44fde489"
	I0819 11:52:04.283538   19417 logs.go:123] Gathering logs for kube-controller-manager [b3325243ca65] ...
	I0819 11:52:04.283549   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3325243ca65"
	I0819 11:52:04.301841   19417 logs.go:123] Gathering logs for coredns [ffe7423d8fb8] ...
	I0819 11:52:04.301860   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ffe7423d8fb8"
	I0819 11:52:04.316222   19417 logs.go:123] Gathering logs for coredns [db6dffb8b3ea] ...
	I0819 11:52:04.316234   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db6dffb8b3ea"
	I0819 11:52:06.832471   19417 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:52:11.944681   19545 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:52:11.944714   19545 kubeadm.go:597] duration metric: took 4m4.119914s to restartPrimaryControlPlane
	W0819 11:52:11.944746   19545 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0819 11:52:11.944760   19545 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0819 11:52:12.957291   19545 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force": (1.012541916s)
	I0819 11:52:12.957349   19545 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 11:52:12.962734   19545 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0819 11:52:12.965726   19545 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0819 11:52:12.968455   19545 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0819 11:52:12.968461   19545 kubeadm.go:157] found existing configuration files:
	
	I0819 11:52:12.968487   19545 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:53361 /etc/kubernetes/admin.conf
	I0819 11:52:12.971045   19545 kubeadm.go:163] "https://control-plane.minikube.internal:53361" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:53361 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0819 11:52:12.971073   19545 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0819 11:52:12.973646   19545 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:53361 /etc/kubernetes/kubelet.conf
	I0819 11:52:12.976909   19545 kubeadm.go:163] "https://control-plane.minikube.internal:53361" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:53361 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0819 11:52:12.976937   19545 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0819 11:52:12.980527   19545 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:53361 /etc/kubernetes/controller-manager.conf
	I0819 11:52:12.983635   19545 kubeadm.go:163] "https://control-plane.minikube.internal:53361" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:53361 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0819 11:52:12.983660   19545 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0819 11:52:12.986261   19545 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:53361 /etc/kubernetes/scheduler.conf
	I0819 11:52:12.989136   19545 kubeadm.go:163] "https://control-plane.minikube.internal:53361" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:53361 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0819 11:52:12.989160   19545 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
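The four grep-then-rm pairs above are minikube's stale-kubeconfig cleanup: any /etc/kubernetes/*.conf that does not reference the expected control-plane endpoint is deleted so that kubeadm init can regenerate it. Collapsed into a loop, the logic is roughly (a sketch; the endpoint is the one from the log):

    # Equivalent of the per-file checks above.
    endpoint="https://control-plane.minikube.internal:53361"
    for f in admin kubelet controller-manager scheduler; do
        conf="/etc/kubernetes/${f}.conf"
        # Keep the file only if it already points at the expected endpoint.
        sudo grep -q "$endpoint" "$conf" || sudo rm -f "$conf"
    done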
	I0819 11:52:12.992542   19545 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0819 11:52:13.011535   19545 kubeadm.go:310] [init] Using Kubernetes version: v1.24.1
	I0819 11:52:13.011565   19545 kubeadm.go:310] [preflight] Running pre-flight checks
	I0819 11:52:13.060079   19545 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0819 11:52:13.060193   19545 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0819 11:52:13.060251   19545 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0819 11:52:13.109044   19545 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0819 11:52:13.117205   19545 out.go:235]   - Generating certificates and keys ...
	I0819 11:52:13.117239   19545 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0819 11:52:13.117272   19545 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0819 11:52:13.117309   19545 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0819 11:52:13.117336   19545 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0819 11:52:13.117375   19545 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0819 11:52:13.117401   19545 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0819 11:52:13.117436   19545 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0819 11:52:13.117467   19545 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0819 11:52:13.117502   19545 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0819 11:52:13.117542   19545 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0819 11:52:13.117566   19545 kubeadm.go:310] [certs] Using the existing "sa" key
	I0819 11:52:13.117598   19545 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0819 11:52:13.167194   19545 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0819 11:52:13.247252   19545 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0819 11:52:13.304243   19545 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0819 11:52:13.371017   19545 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0819 11:52:13.399052   19545 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0819 11:52:13.399468   19545 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0819 11:52:13.399496   19545 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0819 11:52:13.486404   19545 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0819 11:52:11.834822   19417 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:52:11.835185   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 11:52:11.866757   19417 logs.go:276] 1 containers: [45f3dc60aedc]
	I0819 11:52:11.866883   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 11:52:11.885497   19417 logs.go:276] 1 containers: [4122edd3dc51]
	I0819 11:52:11.885586   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 11:52:11.903512   19417 logs.go:276] 4 containers: [ffe7423d8fb8 eb0595d1949c db6dffb8b3ea 3d2d44fde489]
	I0819 11:52:11.903594   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 11:52:11.915091   19417 logs.go:276] 1 containers: [2cfe471024b0]
	I0819 11:52:11.915159   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 11:52:11.925317   19417 logs.go:276] 1 containers: [26ea981cbd0c]
	I0819 11:52:11.925387   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 11:52:11.935609   19417 logs.go:276] 1 containers: [b3325243ca65]
	I0819 11:52:11.935672   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 11:52:11.945602   19417 logs.go:276] 0 containers: []
	W0819 11:52:11.945610   19417 logs.go:278] No container was found matching "kindnet"
	I0819 11:52:11.945662   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 11:52:11.957441   19417 logs.go:276] 1 containers: [74c6a87ae3c8]
	I0819 11:52:11.957460   19417 logs.go:123] Gathering logs for kubelet ...
	I0819 11:52:11.957464   19417 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 11:52:11.998589   19417 logs.go:123] Gathering logs for coredns [ffe7423d8fb8] ...
	I0819 11:52:11.998606   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ffe7423d8fb8"
	I0819 11:52:12.011435   19417 logs.go:123] Gathering logs for coredns [3d2d44fde489] ...
	I0819 11:52:12.011446   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3d2d44fde489"
	I0819 11:52:12.024588   19417 logs.go:123] Gathering logs for kube-proxy [26ea981cbd0c] ...
	I0819 11:52:12.024599   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 26ea981cbd0c"
	I0819 11:52:12.037140   19417 logs.go:123] Gathering logs for container status ...
	I0819 11:52:12.037152   19417 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 11:52:12.049234   19417 logs.go:123] Gathering logs for kube-apiserver [45f3dc60aedc] ...
	I0819 11:52:12.049246   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45f3dc60aedc"
	I0819 11:52:12.064229   19417 logs.go:123] Gathering logs for coredns [eb0595d1949c] ...
	I0819 11:52:12.064242   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eb0595d1949c"
	I0819 11:52:12.076209   19417 logs.go:123] Gathering logs for coredns [db6dffb8b3ea] ...
	I0819 11:52:12.076223   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db6dffb8b3ea"
	I0819 11:52:12.089011   19417 logs.go:123] Gathering logs for kube-controller-manager [b3325243ca65] ...
	I0819 11:52:12.089025   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3325243ca65"
	I0819 11:52:12.107678   19417 logs.go:123] Gathering logs for Docker ...
	I0819 11:52:12.107696   19417 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 11:52:12.136037   19417 logs.go:123] Gathering logs for dmesg ...
	I0819 11:52:12.136053   19417 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 11:52:12.141541   19417 logs.go:123] Gathering logs for describe nodes ...
	I0819 11:52:12.141560   19417 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 11:52:12.182991   19417 logs.go:123] Gathering logs for etcd [4122edd3dc51] ...
	I0819 11:52:12.183006   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4122edd3dc51"
	I0819 11:52:12.197637   19417 logs.go:123] Gathering logs for kube-scheduler [2cfe471024b0] ...
	I0819 11:52:12.197647   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2cfe471024b0"
	I0819 11:52:12.214664   19417 logs.go:123] Gathering logs for storage-provisioner [74c6a87ae3c8] ...
	I0819 11:52:12.214676   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74c6a87ae3c8"
	I0819 11:52:13.493583   19545 out.go:235]   - Booting up control plane ...
	I0819 11:52:13.493639   19545 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0819 11:52:13.493684   19545 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0819 11:52:13.493719   19545 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0819 11:52:13.493757   19545 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0819 11:52:13.493841   19545 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0819 11:52:17.991648   19545 kubeadm.go:310] [apiclient] All control plane components are healthy after 4.501221 seconds
	I0819 11:52:17.991712   19545 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0819 11:52:17.996796   19545 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0819 11:52:14.735586   19417 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:52:18.521560   19545 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0819 11:52:18.521816   19545 kubeadm.go:310] [mark-control-plane] Marking the node stopped-upgrade-604000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0819 11:52:19.027734   19545 kubeadm.go:310] [bootstrap-token] Using token: l3au5v.8norsn0i1fxpzhal
	I0819 11:52:19.033450   19545 out.go:235]   - Configuring RBAC rules ...
	I0819 11:52:19.033518   19545 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0819 11:52:19.033564   19545 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0819 11:52:19.035471   19545 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0819 11:52:19.037169   19545 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0819 11:52:19.038146   19545 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0819 11:52:19.039036   19545 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0819 11:52:19.042528   19545 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0819 11:52:19.214216   19545 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0819 11:52:19.431859   19545 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0819 11:52:19.432499   19545 kubeadm.go:310] 
	I0819 11:52:19.432534   19545 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0819 11:52:19.432536   19545 kubeadm.go:310] 
	I0819 11:52:19.432585   19545 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0819 11:52:19.432591   19545 kubeadm.go:310] 
	I0819 11:52:19.432604   19545 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0819 11:52:19.432663   19545 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0819 11:52:19.432697   19545 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0819 11:52:19.432704   19545 kubeadm.go:310] 
	I0819 11:52:19.432731   19545 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0819 11:52:19.432734   19545 kubeadm.go:310] 
	I0819 11:52:19.432766   19545 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0819 11:52:19.432768   19545 kubeadm.go:310] 
	I0819 11:52:19.432797   19545 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0819 11:52:19.432839   19545 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0819 11:52:19.432881   19545 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0819 11:52:19.432887   19545 kubeadm.go:310] 
	I0819 11:52:19.432944   19545 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0819 11:52:19.432985   19545 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0819 11:52:19.432990   19545 kubeadm.go:310] 
	I0819 11:52:19.433031   19545 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token l3au5v.8norsn0i1fxpzhal \
	I0819 11:52:19.433082   19545 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:de6b00bb5195e85ea9c628edaf5f990495686095194c39efa1f1f29b580598ae \
	I0819 11:52:19.433104   19545 kubeadm.go:310] 	--control-plane 
	I0819 11:52:19.433110   19545 kubeadm.go:310] 
	I0819 11:52:19.433156   19545 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0819 11:52:19.433160   19545 kubeadm.go:310] 
	I0819 11:52:19.433205   19545 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token l3au5v.8norsn0i1fxpzhal \
	I0819 11:52:19.433272   19545 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:de6b00bb5195e85ea9c628edaf5f990495686095194c39efa1f1f29b580598ae 
	I0819 11:52:19.433365   19545 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
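The join commands above embed a bootstrap token (l3au5v.…) that kubeadm expires after 24 hours by default, and the trailing warning notes that the kubelet service is not enabled. Both are standard kubeadm housekeeping rather than anything minikube-specific; a sketch of the usual follow-ups on the control-plane node:

    # Heed the warning: make kubelet survive reboots.
    sudo systemctl enable kubelet.service
    # Mint a fresh worker join command if the printed token has lapsed.
    sudo kubeadm token create --print-join-command
    # Inspect existing bootstrap tokens and their TTLs.
    sudo kubeadm token list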
	I0819 11:52:19.433466   19545 cni.go:84] Creating CNI manager for ""
	I0819 11:52:19.433475   19545 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0819 11:52:19.440406   19545 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0819 11:52:19.444564   19545 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0819 11:52:19.447882   19545 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
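The 496-byte conflist copied here is not reproduced in the log. For orientation only, a bridge CNI configuration of the general shape minikube writes might look like the following; every field value below is illustrative, not the actual file contents:

    # Illustrative only: NOT the exact bytes minikube scp'd above.
    sudo tee /etc/cni/net.d/1-k8s.conflist <<'EOF'
    {
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        }
      ]
    }
    EOF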
	I0819 11:52:19.452644   19545 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0819 11:52:19.452704   19545 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 11:52:19.452714   19545 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes stopped-upgrade-604000 minikube.k8s.io/updated_at=2024_08_19T11_52_19_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=61bea45f08282fbfcbc51a63f2fbf5fa5e7e26a8 minikube.k8s.io/name=stopped-upgrade-604000 minikube.k8s.io/primary=true
	I0819 11:52:19.455959   19545 ops.go:34] apiserver oom_adj: -16
	I0819 11:52:19.502376   19545 kubeadm.go:1113] duration metric: took 49.714375ms to wait for elevateKubeSystemPrivileges
	I0819 11:52:19.502391   19545 kubeadm.go:394] duration metric: took 4m11.691038125s to StartCluster
	I0819 11:52:19.502402   19545 settings.go:142] acquiring lock: {Name:mkd10d56bae48d75d53289d9920be83758fb5ae6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 11:52:19.502490   19545 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19423-17178/kubeconfig
	I0819 11:52:19.502919   19545 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19423-17178/kubeconfig: {Name:mkcd8a4d29cc5f324e197a69fe511a87d17c54d6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 11:52:19.503130   19545 start.go:235] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0819 11:52:19.503143   19545 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0819 11:52:19.503177   19545 addons.go:69] Setting storage-provisioner=true in profile "stopped-upgrade-604000"
	I0819 11:52:19.503181   19545 addons.go:69] Setting default-storageclass=true in profile "stopped-upgrade-604000"
	I0819 11:52:19.503191   19545 addons.go:234] Setting addon storage-provisioner=true in "stopped-upgrade-604000"
	W0819 11:52:19.503194   19545 addons.go:243] addon storage-provisioner should already be in state true
	I0819 11:52:19.503195   19545 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "stopped-upgrade-604000"
	I0819 11:52:19.503208   19545 host.go:66] Checking if "stopped-upgrade-604000" exists ...
	I0819 11:52:19.503224   19545 config.go:182] Loaded profile config "stopped-upgrade-604000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0819 11:52:19.506401   19545 out.go:177] * Verifying Kubernetes components...
	I0819 11:52:19.507025   19545 kapi.go:59] client config for stopped-upgrade-604000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19423-17178/.minikube/profiles/stopped-upgrade-604000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19423-17178/.minikube/profiles/stopped-upgrade-604000/client.key", CAFile:"/Users/jenkins/minikube-integration/19423-17178/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x105aed990), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0819 11:52:19.509844   19545 addons.go:234] Setting addon default-storageclass=true in "stopped-upgrade-604000"
	W0819 11:52:19.509848   19545 addons.go:243] addon default-storageclass should already be in state true
	I0819 11:52:19.509857   19545 host.go:66] Checking if "stopped-upgrade-604000" exists ...
	I0819 11:52:19.510403   19545 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0819 11:52:19.510408   19545 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0819 11:52:19.510413   19545 sshutil.go:53] new ssh client: &{IP:localhost Port:53326 SSHKeyPath:/Users/jenkins/minikube-integration/19423-17178/.minikube/machines/stopped-upgrade-604000/id_rsa Username:docker}
	I0819 11:52:19.513243   19545 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0819 11:52:19.517371   19545 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 11:52:19.521473   19545 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0819 11:52:19.521478   19545 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0819 11:52:19.521484   19545 sshutil.go:53] new ssh client: &{IP:localhost Port:53326 SSHKeyPath:/Users/jenkins/minikube-integration/19423-17178/.minikube/machines/stopped-upgrade-604000/id_rsa Username:docker}
	I0819 11:52:19.589570   19545 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0819 11:52:19.594743   19545 api_server.go:52] waiting for apiserver process to appear ...
	I0819 11:52:19.594793   19545 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 11:52:19.598683   19545 api_server.go:72] duration metric: took 95.544958ms to wait for apiserver process to appear ...
	I0819 11:52:19.598691   19545 api_server.go:88] waiting for apiserver healthz status ...
	I0819 11:52:19.598699   19545 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:52:19.631027   19545 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0819 11:52:19.647291   19545 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
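Both addon manifests are applied with the cluster's pinned kubectl binary against the in-VM kubeconfig. Once the apiserver answers, the outcome could be checked the same way (a sketch; the pod name storage-provisioner is the conventional minikube name, assumed here rather than shown in the log):

    # Default StorageClass from storageclass.yaml.
    sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig get storageclass
    # Provisioner pod from storage-provisioner.yaml (pod name assumed).
    sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get pod storage-provisioner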
	I0819 11:52:20.017350   19545 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0819 11:52:20.017363   19545 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0819 11:52:19.737688   19417 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:52:19.737795   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 11:52:19.748716   19417 logs.go:276] 1 containers: [45f3dc60aedc]
	I0819 11:52:19.748788   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 11:52:19.760373   19417 logs.go:276] 1 containers: [4122edd3dc51]
	I0819 11:52:19.760482   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 11:52:19.771775   19417 logs.go:276] 4 containers: [ffe7423d8fb8 eb0595d1949c db6dffb8b3ea 3d2d44fde489]
	I0819 11:52:19.771845   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 11:52:19.783919   19417 logs.go:276] 1 containers: [2cfe471024b0]
	I0819 11:52:19.783985   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 11:52:19.797740   19417 logs.go:276] 1 containers: [26ea981cbd0c]
	I0819 11:52:19.797810   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 11:52:19.808744   19417 logs.go:276] 1 containers: [b3325243ca65]
	I0819 11:52:19.808813   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 11:52:19.819651   19417 logs.go:276] 0 containers: []
	W0819 11:52:19.819663   19417 logs.go:278] No container was found matching "kindnet"
	I0819 11:52:19.819724   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 11:52:19.834409   19417 logs.go:276] 1 containers: [74c6a87ae3c8]
	I0819 11:52:19.834426   19417 logs.go:123] Gathering logs for kube-apiserver [45f3dc60aedc] ...
	I0819 11:52:19.834433   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45f3dc60aedc"
	I0819 11:52:19.849747   19417 logs.go:123] Gathering logs for kube-scheduler [2cfe471024b0] ...
	I0819 11:52:19.849760   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2cfe471024b0"
	I0819 11:52:19.865270   19417 logs.go:123] Gathering logs for kubelet ...
	I0819 11:52:19.865283   19417 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 11:52:19.904142   19417 logs.go:123] Gathering logs for etcd [4122edd3dc51] ...
	I0819 11:52:19.904159   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4122edd3dc51"
	I0819 11:52:19.920381   19417 logs.go:123] Gathering logs for coredns [eb0595d1949c] ...
	I0819 11:52:19.920397   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eb0595d1949c"
	I0819 11:52:19.937104   19417 logs.go:123] Gathering logs for coredns [3d2d44fde489] ...
	I0819 11:52:19.937115   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3d2d44fde489"
	I0819 11:52:19.949283   19417 logs.go:123] Gathering logs for Docker ...
	I0819 11:52:19.949295   19417 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 11:52:19.977941   19417 logs.go:123] Gathering logs for coredns [db6dffb8b3ea] ...
	I0819 11:52:19.977961   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db6dffb8b3ea"
	I0819 11:52:19.991540   19417 logs.go:123] Gathering logs for container status ...
	I0819 11:52:19.991552   19417 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 11:52:20.006022   19417 logs.go:123] Gathering logs for dmesg ...
	I0819 11:52:20.006037   19417 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 11:52:20.011591   19417 logs.go:123] Gathering logs for describe nodes ...
	I0819 11:52:20.011605   19417 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 11:52:20.051318   19417 logs.go:123] Gathering logs for coredns [ffe7423d8fb8] ...
	I0819 11:52:20.051332   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ffe7423d8fb8"
	I0819 11:52:20.063511   19417 logs.go:123] Gathering logs for kube-proxy [26ea981cbd0c] ...
	I0819 11:52:20.063522   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 26ea981cbd0c"
	I0819 11:52:20.076235   19417 logs.go:123] Gathering logs for kube-controller-manager [b3325243ca65] ...
	I0819 11:52:20.076249   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3325243ca65"
	I0819 11:52:20.093940   19417 logs.go:123] Gathering logs for storage-provisioner [74c6a87ae3c8] ...
	I0819 11:52:20.093951   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74c6a87ae3c8"
	I0819 11:52:22.607338   19417 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:52:24.600739   19545 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:52:24.600784   19545 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:52:27.609670   19417 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:52:27.609836   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 11:52:27.628941   19417 logs.go:276] 1 containers: [45f3dc60aedc]
	I0819 11:52:27.629049   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 11:52:27.644455   19417 logs.go:276] 1 containers: [4122edd3dc51]
	I0819 11:52:27.644548   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 11:52:27.660129   19417 logs.go:276] 4 containers: [ffe7423d8fb8 eb0595d1949c db6dffb8b3ea 3d2d44fde489]
	I0819 11:52:27.660200   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 11:52:27.670390   19417 logs.go:276] 1 containers: [2cfe471024b0]
	I0819 11:52:27.670481   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 11:52:27.680946   19417 logs.go:276] 1 containers: [26ea981cbd0c]
	I0819 11:52:27.681020   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 11:52:27.691844   19417 logs.go:276] 1 containers: [b3325243ca65]
	I0819 11:52:27.691912   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 11:52:27.709365   19417 logs.go:276] 0 containers: []
	W0819 11:52:27.709376   19417 logs.go:278] No container was found matching "kindnet"
	I0819 11:52:27.709432   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 11:52:27.720612   19417 logs.go:276] 1 containers: [74c6a87ae3c8]
	I0819 11:52:27.720631   19417 logs.go:123] Gathering logs for kube-controller-manager [b3325243ca65] ...
	I0819 11:52:27.720637   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3325243ca65"
	I0819 11:52:27.737885   19417 logs.go:123] Gathering logs for container status ...
	I0819 11:52:27.737896   19417 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 11:52:27.749904   19417 logs.go:123] Gathering logs for kube-apiserver [45f3dc60aedc] ...
	I0819 11:52:27.749914   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45f3dc60aedc"
	I0819 11:52:27.765098   19417 logs.go:123] Gathering logs for kube-proxy [26ea981cbd0c] ...
	I0819 11:52:27.765109   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 26ea981cbd0c"
	I0819 11:52:27.777503   19417 logs.go:123] Gathering logs for describe nodes ...
	I0819 11:52:27.777513   19417 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 11:52:27.816222   19417 logs.go:123] Gathering logs for coredns [ffe7423d8fb8] ...
	I0819 11:52:27.816234   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ffe7423d8fb8"
	I0819 11:52:27.828523   19417 logs.go:123] Gathering logs for coredns [db6dffb8b3ea] ...
	I0819 11:52:27.828536   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db6dffb8b3ea"
	I0819 11:52:27.843437   19417 logs.go:123] Gathering logs for storage-provisioner [74c6a87ae3c8] ...
	I0819 11:52:27.843448   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74c6a87ae3c8"
	I0819 11:52:27.855653   19417 logs.go:123] Gathering logs for etcd [4122edd3dc51] ...
	I0819 11:52:27.855664   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4122edd3dc51"
	I0819 11:52:27.870031   19417 logs.go:123] Gathering logs for coredns [eb0595d1949c] ...
	I0819 11:52:27.870046   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eb0595d1949c"
	I0819 11:52:27.881953   19417 logs.go:123] Gathering logs for coredns [3d2d44fde489] ...
	I0819 11:52:27.881964   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3d2d44fde489"
	I0819 11:52:27.894547   19417 logs.go:123] Gathering logs for kube-scheduler [2cfe471024b0] ...
	I0819 11:52:27.894557   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2cfe471024b0"
	I0819 11:52:27.909209   19417 logs.go:123] Gathering logs for Docker ...
	I0819 11:52:27.909219   19417 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 11:52:27.931858   19417 logs.go:123] Gathering logs for kubelet ...
	I0819 11:52:27.931867   19417 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 11:52:27.969308   19417 logs.go:123] Gathering logs for dmesg ...
	I0819 11:52:27.969321   19417 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 11:52:29.600946   19545 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:52:29.600975   19545 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:52:30.475601   19417 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:52:34.601206   19545 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:52:34.601253   19545 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:52:35.477746   19417 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:52:35.477858   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 11:52:35.489751   19417 logs.go:276] 1 containers: [45f3dc60aedc]
	I0819 11:52:35.489829   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 11:52:35.500344   19417 logs.go:276] 1 containers: [4122edd3dc51]
	I0819 11:52:35.500411   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 11:52:35.511092   19417 logs.go:276] 4 containers: [ffe7423d8fb8 eb0595d1949c db6dffb8b3ea 3d2d44fde489]
	I0819 11:52:35.511158   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 11:52:35.524620   19417 logs.go:276] 1 containers: [2cfe471024b0]
	I0819 11:52:35.524679   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 11:52:35.535381   19417 logs.go:276] 1 containers: [26ea981cbd0c]
	I0819 11:52:35.535452   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 11:52:35.548159   19417 logs.go:276] 1 containers: [b3325243ca65]
	I0819 11:52:35.548232   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 11:52:35.558720   19417 logs.go:276] 0 containers: []
	W0819 11:52:35.558731   19417 logs.go:278] No container was found matching "kindnet"
	I0819 11:52:35.558791   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 11:52:35.571181   19417 logs.go:276] 1 containers: [74c6a87ae3c8]
	I0819 11:52:35.571198   19417 logs.go:123] Gathering logs for coredns [ffe7423d8fb8] ...
	I0819 11:52:35.571204   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ffe7423d8fb8"
	I0819 11:52:35.583090   19417 logs.go:123] Gathering logs for storage-provisioner [74c6a87ae3c8] ...
	I0819 11:52:35.583101   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74c6a87ae3c8"
	I0819 11:52:35.594871   19417 logs.go:123] Gathering logs for container status ...
	I0819 11:52:35.594882   19417 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 11:52:35.606776   19417 logs.go:123] Gathering logs for describe nodes ...
	I0819 11:52:35.606788   19417 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 11:52:35.642523   19417 logs.go:123] Gathering logs for kube-apiserver [45f3dc60aedc] ...
	I0819 11:52:35.642536   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45f3dc60aedc"
	I0819 11:52:35.656989   19417 logs.go:123] Gathering logs for etcd [4122edd3dc51] ...
	I0819 11:52:35.657001   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4122edd3dc51"
	I0819 11:52:35.671245   19417 logs.go:123] Gathering logs for Docker ...
	I0819 11:52:35.671259   19417 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 11:52:35.696314   19417 logs.go:123] Gathering logs for dmesg ...
	I0819 11:52:35.696322   19417 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 11:52:35.700644   19417 logs.go:123] Gathering logs for kube-proxy [26ea981cbd0c] ...
	I0819 11:52:35.700652   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 26ea981cbd0c"
	I0819 11:52:35.712890   19417 logs.go:123] Gathering logs for kube-controller-manager [b3325243ca65] ...
	I0819 11:52:35.712900   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3325243ca65"
	I0819 11:52:35.731990   19417 logs.go:123] Gathering logs for coredns [3d2d44fde489] ...
	I0819 11:52:35.732002   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3d2d44fde489"
	I0819 11:52:35.743777   19417 logs.go:123] Gathering logs for kube-scheduler [2cfe471024b0] ...
	I0819 11:52:35.743788   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2cfe471024b0"
	I0819 11:52:35.759138   19417 logs.go:123] Gathering logs for kubelet ...
	I0819 11:52:35.759154   19417 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 11:52:35.799153   19417 logs.go:123] Gathering logs for coredns [eb0595d1949c] ...
	I0819 11:52:35.799160   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eb0595d1949c"
	I0819 11:52:35.811243   19417 logs.go:123] Gathering logs for coredns [db6dffb8b3ea] ...
	I0819 11:52:35.811254   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db6dffb8b3ea"
	I0819 11:52:38.325444   19417 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:52:39.601873   19545 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:52:39.601913   19545 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:52:43.327690   19417 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:52:43.327952   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 11:52:43.352450   19417 logs.go:276] 1 containers: [45f3dc60aedc]
	I0819 11:52:43.352566   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 11:52:43.368581   19417 logs.go:276] 1 containers: [4122edd3dc51]
	I0819 11:52:43.368663   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 11:52:43.381891   19417 logs.go:276] 4 containers: [ffe7423d8fb8 eb0595d1949c db6dffb8b3ea 3d2d44fde489]
	I0819 11:52:43.381960   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 11:52:43.397452   19417 logs.go:276] 1 containers: [2cfe471024b0]
	I0819 11:52:43.397531   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 11:52:43.421839   19417 logs.go:276] 1 containers: [26ea981cbd0c]
	I0819 11:52:43.421927   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 11:52:43.444123   19417 logs.go:276] 1 containers: [b3325243ca65]
	I0819 11:52:43.444198   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 11:52:43.459851   19417 logs.go:276] 0 containers: []
	W0819 11:52:43.459867   19417 logs.go:278] No container was found matching "kindnet"
	I0819 11:52:43.459937   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 11:52:43.474680   19417 logs.go:276] 1 containers: [74c6a87ae3c8]
	I0819 11:52:43.474700   19417 logs.go:123] Gathering logs for coredns [ffe7423d8fb8] ...
	I0819 11:52:43.474705   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ffe7423d8fb8"
	I0819 11:52:43.488344   19417 logs.go:123] Gathering logs for coredns [eb0595d1949c] ...
	I0819 11:52:43.488356   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eb0595d1949c"
	I0819 11:52:43.500944   19417 logs.go:123] Gathering logs for coredns [db6dffb8b3ea] ...
	I0819 11:52:43.500958   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db6dffb8b3ea"
	I0819 11:52:43.513273   19417 logs.go:123] Gathering logs for dmesg ...
	I0819 11:52:43.513285   19417 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 11:52:43.518064   19417 logs.go:123] Gathering logs for describe nodes ...
	I0819 11:52:43.518073   19417 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 11:52:43.551989   19417 logs.go:123] Gathering logs for etcd [4122edd3dc51] ...
	I0819 11:52:43.552005   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4122edd3dc51"
	I0819 11:52:43.566375   19417 logs.go:123] Gathering logs for kube-apiserver [45f3dc60aedc] ...
	I0819 11:52:43.566386   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45f3dc60aedc"
	I0819 11:52:43.582584   19417 logs.go:123] Gathering logs for kube-proxy [26ea981cbd0c] ...
	I0819 11:52:43.582595   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 26ea981cbd0c"
	I0819 11:52:43.598319   19417 logs.go:123] Gathering logs for coredns [3d2d44fde489] ...
	I0819 11:52:43.598333   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3d2d44fde489"
	I0819 11:52:43.613670   19417 logs.go:123] Gathering logs for kube-controller-manager [b3325243ca65] ...
	I0819 11:52:43.613690   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3325243ca65"
	I0819 11:52:43.635155   19417 logs.go:123] Gathering logs for Docker ...
	I0819 11:52:43.635170   19417 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 11:52:43.659928   19417 logs.go:123] Gathering logs for container status ...
	I0819 11:52:43.659941   19417 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 11:52:43.671552   19417 logs.go:123] Gathering logs for kubelet ...
	I0819 11:52:43.671567   19417 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 11:52:43.710898   19417 logs.go:123] Gathering logs for kube-scheduler [2cfe471024b0] ...
	I0819 11:52:43.710912   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2cfe471024b0"
	I0819 11:52:43.726534   19417 logs.go:123] Gathering logs for storage-provisioner [74c6a87ae3c8] ...
	I0819 11:52:43.726547   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74c6a87ae3c8"
	I0819 11:52:44.602554   19545 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:52:44.602599   19545 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:52:46.239579   19417 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:52:49.603359   19545 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:52:49.603400   19545 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W0819 11:52:50.019129   19545 out.go:270] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I0819 11:52:50.024518   19545 out.go:177] * Enabled addons: storage-provisioner
	I0819 11:52:50.035396   19545 addons.go:510] duration metric: took 30.532908s for enable addons: enabled=[storage-provisioner]
	I0819 11:52:51.241952   19417 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:52:51.242332   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 11:52:51.279602   19417 logs.go:276] 1 containers: [45f3dc60aedc]
	I0819 11:52:51.279733   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 11:52:51.299227   19417 logs.go:276] 1 containers: [4122edd3dc51]
	I0819 11:52:51.299321   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 11:52:51.313946   19417 logs.go:276] 4 containers: [ffe7423d8fb8 eb0595d1949c db6dffb8b3ea 3d2d44fde489]
	I0819 11:52:51.314030   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 11:52:51.326446   19417 logs.go:276] 1 containers: [2cfe471024b0]
	I0819 11:52:51.326521   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 11:52:51.339398   19417 logs.go:276] 1 containers: [26ea981cbd0c]
	I0819 11:52:51.339464   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 11:52:51.350085   19417 logs.go:276] 1 containers: [b3325243ca65]
	I0819 11:52:51.350151   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 11:52:51.360860   19417 logs.go:276] 0 containers: []
	W0819 11:52:51.360873   19417 logs.go:278] No container was found matching "kindnet"
	I0819 11:52:51.360929   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 11:52:51.377680   19417 logs.go:276] 1 containers: [74c6a87ae3c8]
	I0819 11:52:51.377695   19417 logs.go:123] Gathering logs for dmesg ...
	I0819 11:52:51.377700   19417 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 11:52:51.382253   19417 logs.go:123] Gathering logs for container status ...
	I0819 11:52:51.382260   19417 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 11:52:51.394269   19417 logs.go:123] Gathering logs for coredns [3d2d44fde489] ...
	I0819 11:52:51.394281   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3d2d44fde489"
	I0819 11:52:51.407085   19417 logs.go:123] Gathering logs for kube-controller-manager [b3325243ca65] ...
	I0819 11:52:51.407099   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3325243ca65"
	I0819 11:52:51.435343   19417 logs.go:123] Gathering logs for Docker ...
	I0819 11:52:51.435352   19417 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 11:52:51.461499   19417 logs.go:123] Gathering logs for describe nodes ...
	I0819 11:52:51.461511   19417 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 11:52:51.498072   19417 logs.go:123] Gathering logs for kube-apiserver [45f3dc60aedc] ...
	I0819 11:52:51.498083   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45f3dc60aedc"
	I0819 11:52:51.513286   19417 logs.go:123] Gathering logs for coredns [ffe7423d8fb8] ...
	I0819 11:52:51.513299   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ffe7423d8fb8"
	I0819 11:52:51.525488   19417 logs.go:123] Gathering logs for kube-scheduler [2cfe471024b0] ...
	I0819 11:52:51.525499   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2cfe471024b0"
	I0819 11:52:51.540752   19417 logs.go:123] Gathering logs for kube-proxy [26ea981cbd0c] ...
	I0819 11:52:51.540763   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 26ea981cbd0c"
	I0819 11:52:51.552762   19417 logs.go:123] Gathering logs for kubelet ...
	I0819 11:52:51.552775   19417 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 11:52:51.589913   19417 logs.go:123] Gathering logs for etcd [4122edd3dc51] ...
	I0819 11:52:51.589923   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4122edd3dc51"
	I0819 11:52:51.603848   19417 logs.go:123] Gathering logs for coredns [eb0595d1949c] ...
	I0819 11:52:51.603859   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eb0595d1949c"
	I0819 11:52:51.616039   19417 logs.go:123] Gathering logs for coredns [db6dffb8b3ea] ...
	I0819 11:52:51.616052   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db6dffb8b3ea"
	I0819 11:52:51.627651   19417 logs.go:123] Gathering logs for storage-provisioner [74c6a87ae3c8] ...
	I0819 11:52:51.627663   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74c6a87ae3c8"
	I0819 11:52:54.604470   19545 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:52:54.604566   19545 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:52:54.139292   19417 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:52:59.606453   19545 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:52:59.606475   19545 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:52:59.139607   19417 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:52:59.139824   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 11:52:59.162135   19417 logs.go:276] 1 containers: [45f3dc60aedc]
	I0819 11:52:59.162248   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 11:52:59.176620   19417 logs.go:276] 1 containers: [4122edd3dc51]
	I0819 11:52:59.176692   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 11:52:59.188616   19417 logs.go:276] 4 containers: [ffe7423d8fb8 eb0595d1949c db6dffb8b3ea 3d2d44fde489]
	I0819 11:52:59.188693   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 11:52:59.203579   19417 logs.go:276] 1 containers: [2cfe471024b0]
	I0819 11:52:59.203638   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 11:52:59.213847   19417 logs.go:276] 1 containers: [26ea981cbd0c]
	I0819 11:52:59.213917   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 11:52:59.224365   19417 logs.go:276] 1 containers: [b3325243ca65]
	I0819 11:52:59.224441   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 11:52:59.237139   19417 logs.go:276] 0 containers: []
	W0819 11:52:59.237158   19417 logs.go:278] No container was found matching "kindnet"
	I0819 11:52:59.237221   19417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 11:52:59.248396   19417 logs.go:276] 1 containers: [74c6a87ae3c8]
	I0819 11:52:59.248413   19417 logs.go:123] Gathering logs for kube-apiserver [45f3dc60aedc] ...
	I0819 11:52:59.248419   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45f3dc60aedc"
	I0819 11:52:59.262584   19417 logs.go:123] Gathering logs for storage-provisioner [74c6a87ae3c8] ...
	I0819 11:52:59.262595   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74c6a87ae3c8"
	I0819 11:52:59.274286   19417 logs.go:123] Gathering logs for Docker ...
	I0819 11:52:59.274295   19417 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 11:52:59.298764   19417 logs.go:123] Gathering logs for kubelet ...
	I0819 11:52:59.298773   19417 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 11:52:59.337984   19417 logs.go:123] Gathering logs for dmesg ...
	I0819 11:52:59.337994   19417 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 11:52:59.342643   19417 logs.go:123] Gathering logs for coredns [ffe7423d8fb8] ...
	I0819 11:52:59.342653   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ffe7423d8fb8"
	I0819 11:52:59.358554   19417 logs.go:123] Gathering logs for coredns [3d2d44fde489] ...
	I0819 11:52:59.358564   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3d2d44fde489"
	I0819 11:52:59.370751   19417 logs.go:123] Gathering logs for kube-controller-manager [b3325243ca65] ...
	I0819 11:52:59.370764   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3325243ca65"
	I0819 11:52:59.389307   19417 logs.go:123] Gathering logs for describe nodes ...
	I0819 11:52:59.389318   19417 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 11:52:59.427397   19417 logs.go:123] Gathering logs for coredns [eb0595d1949c] ...
	I0819 11:52:59.427409   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eb0595d1949c"
	I0819 11:52:59.440215   19417 logs.go:123] Gathering logs for coredns [db6dffb8b3ea] ...
	I0819 11:52:59.440226   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db6dffb8b3ea"
	I0819 11:52:59.453770   19417 logs.go:123] Gathering logs for kube-proxy [26ea981cbd0c] ...
	I0819 11:52:59.453785   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 26ea981cbd0c"
	I0819 11:52:59.465390   19417 logs.go:123] Gathering logs for etcd [4122edd3dc51] ...
	I0819 11:52:59.465400   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4122edd3dc51"
	I0819 11:52:59.486005   19417 logs.go:123] Gathering logs for kube-scheduler [2cfe471024b0] ...
	I0819 11:52:59.486016   19417 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2cfe471024b0"
	I0819 11:52:59.502921   19417 logs.go:123] Gathering logs for container status ...
	I0819 11:52:59.502931   19417 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 11:53:02.016351   19417 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:53:04.607833   19545 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:53:04.607876   19545 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:53:07.018505   19417 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:53:07.022610   19417 out.go:201] 
	W0819 11:53:07.025511   19417 out.go:270] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	W0819 11:53:07.025516   19417 out.go:270] * 
	W0819 11:53:07.025923   19417 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0819 11:53:07.039504   19417 out.go:201] 
	I0819 11:53:09.610107   19545 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:53:09.610144   19545 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:53:14.612320   19545 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:53:14.612360   19545 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	
	
	==> Docker <==
	-- Journal begins at Mon 2024-08-19 18:43:51 UTC, ends at Mon 2024-08-19 18:53:23 UTC. --
	Aug 19 18:53:07 running-upgrade-409000 dockerd[3271]: time="2024-08-19T18:53:07.271117596Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 19 18:53:07 running-upgrade-409000 dockerd[3271]: time="2024-08-19T18:53:07.271254634Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/c7ffeeb2e9575e12f793592644aafd7ffe857f36b52eed7f663ee69b5440e874 pid=18847 runtime=io.containerd.runc.v2
	Aug 19 18:53:07 running-upgrade-409000 cri-dockerd[3111]: time="2024-08-19T18:53:07Z" level=error msg="ContainerStats resp: {0x400009dec0 linux}"
	Aug 19 18:53:07 running-upgrade-409000 cri-dockerd[3111]: time="2024-08-19T18:53:07Z" level=error msg="ContainerStats resp: {0x40009c2840 linux}"
	Aug 19 18:53:08 running-upgrade-409000 cri-dockerd[3111]: time="2024-08-19T18:53:08Z" level=error msg="ContainerStats resp: {0x40007f6680 linux}"
	Aug 19 18:53:08 running-upgrade-409000 cri-dockerd[3111]: time="2024-08-19T18:53:08Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	Aug 19 18:53:09 running-upgrade-409000 cri-dockerd[3111]: time="2024-08-19T18:53:09Z" level=error msg="ContainerStats resp: {0x40007f7b40 linux}"
	Aug 19 18:53:09 running-upgrade-409000 cri-dockerd[3111]: time="2024-08-19T18:53:09Z" level=error msg="ContainerStats resp: {0x400092df40 linux}"
	Aug 19 18:53:09 running-upgrade-409000 cri-dockerd[3111]: time="2024-08-19T18:53:09Z" level=error msg="ContainerStats resp: {0x40008ae380 linux}"
	Aug 19 18:53:09 running-upgrade-409000 cri-dockerd[3111]: time="2024-08-19T18:53:09Z" level=error msg="ContainerStats resp: {0x40008ae540 linux}"
	Aug 19 18:53:09 running-upgrade-409000 cri-dockerd[3111]: time="2024-08-19T18:53:09Z" level=error msg="ContainerStats resp: {0x40008ae880 linux}"
	Aug 19 18:53:09 running-upgrade-409000 cri-dockerd[3111]: time="2024-08-19T18:53:09Z" level=error msg="ContainerStats resp: {0x40006458c0 linux}"
	Aug 19 18:53:09 running-upgrade-409000 cri-dockerd[3111]: time="2024-08-19T18:53:09Z" level=error msg="ContainerStats resp: {0x4000645cc0 linux}"
	Aug 19 18:53:13 running-upgrade-409000 cri-dockerd[3111]: time="2024-08-19T18:53:13Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	Aug 19 18:53:18 running-upgrade-409000 cri-dockerd[3111]: time="2024-08-19T18:53:18Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	Aug 19 18:53:19 running-upgrade-409000 cri-dockerd[3111]: time="2024-08-19T18:53:19Z" level=error msg="ContainerStats resp: {0x400047b7c0 linux}"
	Aug 19 18:53:19 running-upgrade-409000 cri-dockerd[3111]: time="2024-08-19T18:53:19Z" level=error msg="ContainerStats resp: {0x4000822380 linux}"
	Aug 19 18:53:20 running-upgrade-409000 cri-dockerd[3111]: time="2024-08-19T18:53:20Z" level=error msg="ContainerStats resp: {0x4000823c00 linux}"
	Aug 19 18:53:21 running-upgrade-409000 cri-dockerd[3111]: time="2024-08-19T18:53:21Z" level=error msg="ContainerStats resp: {0x40007f6dc0 linux}"
	Aug 19 18:53:21 running-upgrade-409000 cri-dockerd[3111]: time="2024-08-19T18:53:21Z" level=error msg="ContainerStats resp: {0x40007f71c0 linux}"
	Aug 19 18:53:21 running-upgrade-409000 cri-dockerd[3111]: time="2024-08-19T18:53:21Z" level=error msg="ContainerStats resp: {0x40007f75c0 linux}"
	Aug 19 18:53:21 running-upgrade-409000 cri-dockerd[3111]: time="2024-08-19T18:53:21Z" level=error msg="ContainerStats resp: {0x40006442c0 linux}"
	Aug 19 18:53:21 running-upgrade-409000 cri-dockerd[3111]: time="2024-08-19T18:53:21Z" level=error msg="ContainerStats resp: {0x4000644740 linux}"
	Aug 19 18:53:21 running-upgrade-409000 cri-dockerd[3111]: time="2024-08-19T18:53:21Z" level=error msg="ContainerStats resp: {0x400092d500 linux}"
	Aug 19 18:53:21 running-upgrade-409000 cri-dockerd[3111]: time="2024-08-19T18:53:21Z" level=error msg="ContainerStats resp: {0x4000645180 linux}"
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID
	c7ffeeb2e9575       edaa71f2aee88       16 seconds ago      Running             coredns                   2                   ff1770ee93ee7
	bf6b07ebda395       edaa71f2aee88       16 seconds ago      Running             coredns                   2                   75f399a7bf551
	ffe7423d8fb8a       edaa71f2aee88       2 minutes ago       Exited              coredns                   1                   ff1770ee93ee7
	eb0595d1949cd       edaa71f2aee88       2 minutes ago       Exited              coredns                   1                   75f399a7bf551
	74c6a87ae3c81       66749159455b3       4 minutes ago       Running             storage-provisioner       0                   1f12b226246a3
	26ea981cbd0cc       fcbd620bbac08       4 minutes ago       Running             kube-proxy                0                   b895ae35c2113
	2cfe471024b0c       000c19baf6bba       4 minutes ago       Running             kube-scheduler            0                   89ff949e65de3
	4122edd3dc514       a9a710bb96df0       4 minutes ago       Running             etcd                      0                   bb1f7845c1898
	b3325243ca656       f61bbe9259d7c       4 minutes ago       Running             kube-controller-manager   0                   ec33966ad7ed2
	45f3dc60aedc0       7c5896a75862a       4 minutes ago       Running             kube-apiserver            0                   4b4a2262de779
	
	
	==> coredns [bf6b07ebda39] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 2577496805199132365.697320191124626926. HINFO: read udp 10.244.0.2:58484->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 2577496805199132365.697320191124626926. HINFO: read udp 10.244.0.2:34843->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 2577496805199132365.697320191124626926. HINFO: read udp 10.244.0.2:40320->10.0.2.3:53: i/o timeout
	
	
	==> coredns [c7ffeeb2e957] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 1108649111533402500.7187945678071030082. HINFO: read udp 10.244.0.3:53591->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 1108649111533402500.7187945678071030082. HINFO: read udp 10.244.0.3:46549->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 1108649111533402500.7187945678071030082. HINFO: read udp 10.244.0.3:53434->10.0.2.3:53: i/o timeout
	
	
	==> coredns [eb0595d1949c] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 7732049387244700403.1127348883368174862. HINFO: read udp 10.244.0.2:39163->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 7732049387244700403.1127348883368174862. HINFO: read udp 10.244.0.2:59280->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 7732049387244700403.1127348883368174862. HINFO: read udp 10.244.0.2:51410->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 7732049387244700403.1127348883368174862. HINFO: read udp 10.244.0.2:39285->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 7732049387244700403.1127348883368174862. HINFO: read udp 10.244.0.2:40482->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 7732049387244700403.1127348883368174862. HINFO: read udp 10.244.0.2:52889->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 7732049387244700403.1127348883368174862. HINFO: read udp 10.244.0.2:57934->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 7732049387244700403.1127348883368174862. HINFO: read udp 10.244.0.2:40097->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 7732049387244700403.1127348883368174862. HINFO: read udp 10.244.0.2:56734->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 7732049387244700403.1127348883368174862. HINFO: read udp 10.244.0.2:44690->10.0.2.3:53: i/o timeout
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [ffe7423d8fb8] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 2717865598420770022.3210252438873485136. HINFO: read udp 10.244.0.3:48668->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 2717865598420770022.3210252438873485136. HINFO: read udp 10.244.0.3:51678->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 2717865598420770022.3210252438873485136. HINFO: read udp 10.244.0.3:35254->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 2717865598420770022.3210252438873485136. HINFO: read udp 10.244.0.3:51915->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 2717865598420770022.3210252438873485136. HINFO: read udp 10.244.0.3:40253->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 2717865598420770022.3210252438873485136. HINFO: read udp 10.244.0.3:44320->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 2717865598420770022.3210252438873485136. HINFO: read udp 10.244.0.3:45583->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 2717865598420770022.3210252438873485136. HINFO: read udp 10.244.0.3:59983->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 2717865598420770022.3210252438873485136. HINFO: read udp 10.244.0.3:57273->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 2717865598420770022.3210252438873485136. HINFO: read udp 10.244.0.3:59312->10.0.2.3:53: i/o timeout
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               running-upgrade-409000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=running-upgrade-409000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=61bea45f08282fbfcbc51a63f2fbf5fa5e7e26a8
	                    minikube.k8s.io/name=running-upgrade-409000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_19T11_49_06_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 19 Aug 2024 18:49:03 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  running-upgrade-409000
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 19 Aug 2024 18:53:21 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 19 Aug 2024 18:49:05 +0000   Mon, 19 Aug 2024 18:49:01 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 19 Aug 2024 18:49:05 +0000   Mon, 19 Aug 2024 18:49:01 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 19 Aug 2024 18:49:05 +0000   Mon, 19 Aug 2024 18:49:01 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 19 Aug 2024 18:49:05 +0000   Mon, 19 Aug 2024 18:49:05 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  10.0.2.15
	  Hostname:    running-upgrade-409000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             2148820Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             2148820Ki
	  pods:               110
	System Info:
	  Machine ID:                 de411f73536d447594b066911a6f451d
	  System UUID:                de411f73536d447594b066911a6f451d
	  Boot ID:                    2123f59e-4c8f-4af2-9e55-3d299ccc5598
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://20.10.16
	  Kubelet Version:            v1.24.1
	  Kube-Proxy Version:         v1.24.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6d4b75cb6d-bm9nn                          100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     4m4s
	  kube-system                 coredns-6d4b75cb6d-dngnn                          100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     4m4s
	  kube-system                 etcd-running-upgrade-409000                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m17s
	  kube-system                 kube-apiserver-running-upgrade-409000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m19s
	  kube-system                 kube-controller-manager-running-upgrade-409000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m19s
	  kube-system                 kube-proxy-7g92r                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m5s
	  kube-system                 kube-scheduler-running-upgrade-409000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m19s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m17s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             240Mi (11%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	  hugepages-32Mi     0 (0%)       0 (0%)
	  hugepages-64Ki     0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m3s                   kube-proxy       
	  Normal  NodeHasSufficientMemory  4m23s (x5 over 4m23s)  kubelet          Node running-upgrade-409000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m23s (x5 over 4m23s)  kubelet          Node running-upgrade-409000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m23s (x4 over 4m23s)  kubelet          Node running-upgrade-409000 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  4m18s                  kubelet          Node running-upgrade-409000 status is now: NodeHasSufficientMemory
	  Normal  NodeAllocatableEnforced  4m18s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    4m18s                  kubelet          Node running-upgrade-409000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m18s                  kubelet          Node running-upgrade-409000 status is now: NodeHasSufficientPID
	  Normal  NodeReady                4m18s                  kubelet          Node running-upgrade-409000 status is now: NodeReady
	  Normal  Starting                 4m18s                  kubelet          Starting kubelet.
	  Normal  RegisteredNode           4m5s                   node-controller  Node running-upgrade-409000 event: Registered Node running-upgrade-409000 in Controller
	
	
	==> dmesg <==
	[  +2.786785] kauditd_printk_skb: 14 callbacks suppressed
	[  +0.148862] systemd-fstab-generator[872]: Ignoring "noauto" for root device
	[  +0.066605] systemd-fstab-generator[883]: Ignoring "noauto" for root device
	[  +0.063071] systemd-fstab-generator[894]: Ignoring "noauto" for root device
	[  +1.208236] systemd-fstab-generator[1044]: Ignoring "noauto" for root device
	[  +0.058522] systemd-fstab-generator[1055]: Ignoring "noauto" for root device
	[  +2.352149] systemd-fstab-generator[1285]: Ignoring "noauto" for root device
	[ +24.223804] systemd-fstab-generator[2001]: Ignoring "noauto" for root device
	[  +2.367716] systemd-fstab-generator[2278]: Ignoring "noauto" for root device
	[  +0.136564] systemd-fstab-generator[2312]: Ignoring "noauto" for root device
	[  +0.079157] systemd-fstab-generator[2326]: Ignoring "noauto" for root device
	[  +0.080971] systemd-fstab-generator[2341]: Ignoring "noauto" for root device
	[ +13.120953] kauditd_printk_skb: 86 callbacks suppressed
	[  +0.194070] systemd-fstab-generator[3067]: Ignoring "noauto" for root device
	[  +0.069276] systemd-fstab-generator[3079]: Ignoring "noauto" for root device
	[  +0.062235] systemd-fstab-generator[3090]: Ignoring "noauto" for root device
	[  +0.073414] systemd-fstab-generator[3104]: Ignoring "noauto" for root device
	[  +2.290284] systemd-fstab-generator[3258]: Ignoring "noauto" for root device
	[  +3.539875] systemd-fstab-generator[3656]: Ignoring "noauto" for root device
	[  +1.192534] systemd-fstab-generator[3947]: Ignoring "noauto" for root device
	[Aug19 18:45] kauditd_printk_skb: 68 callbacks suppressed
	[Aug19 18:48] kauditd_printk_skb: 23 callbacks suppressed
	[  +1.492829] systemd-fstab-generator[11971]: Ignoring "noauto" for root device
	[Aug19 18:49] systemd-fstab-generator[12566]: Ignoring "noauto" for root device
	[  +0.465258] systemd-fstab-generator[12702]: Ignoring "noauto" for root device
	
	
	==> etcd [4122edd3dc51] <==
	{"level":"info","ts":"2024-08-19T18:49:01.547Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 switched to configuration voters=(17326651331455243045)"}
	{"level":"info","ts":"2024-08-19T18:49:01.547Z","caller":"embed/etcd.go:688","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-08-19T18:49:01.547Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"f074a195de705325","initial-advertise-peer-urls":["https://10.0.2.15:2380"],"listen-peer-urls":["https://10.0.2.15:2380"],"advertise-client-urls":["https://10.0.2.15:2379"],"listen-client-urls":["https://10.0.2.15:2379","https://127.0.0.1:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-08-19T18:49:01.547Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-08-19T18:49:01.547Z","caller":"embed/etcd.go:581","msg":"serving peer traffic","address":"10.0.2.15:2380"}
	{"level":"info","ts":"2024-08-19T18:49:01.547Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"10.0.2.15:2380"}
	{"level":"info","ts":"2024-08-19T18:49:01.551Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"ef296cf39f5d9d66","local-member-id":"f074a195de705325","added-peer-id":"f074a195de705325","added-peer-peer-urls":["https://10.0.2.15:2380"]}
	{"level":"info","ts":"2024-08-19T18:49:01.828Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 is starting a new election at term 1"}
	{"level":"info","ts":"2024-08-19T18:49:01.828Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-08-19T18:49:01.828Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 received MsgPreVoteResp from f074a195de705325 at term 1"}
	{"level":"info","ts":"2024-08-19T18:49:01.828Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 became candidate at term 2"}
	{"level":"info","ts":"2024-08-19T18:49:01.828Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 received MsgVoteResp from f074a195de705325 at term 2"}
	{"level":"info","ts":"2024-08-19T18:49:01.828Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 became leader at term 2"}
	{"level":"info","ts":"2024-08-19T18:49:01.828Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: f074a195de705325 elected leader f074a195de705325 at term 2"}
	{"level":"info","ts":"2024-08-19T18:49:01.828Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"f074a195de705325","local-member-attributes":"{Name:running-upgrade-409000 ClientURLs:[https://10.0.2.15:2379]}","request-path":"/0/members/f074a195de705325/attributes","cluster-id":"ef296cf39f5d9d66","publish-timeout":"7s"}
	{"level":"info","ts":"2024-08-19T18:49:01.829Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-19T18:49:01.829Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-08-19T18:49:01.833Z","caller":"etcdserver/server.go:2507","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-19T18:49:01.834Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-19T18:49:01.834Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"10.0.2.15:2379"}
	{"level":"info","ts":"2024-08-19T18:49:01.839Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-08-19T18:49:01.839Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-08-19T18:49:01.844Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"ef296cf39f5d9d66","local-member-id":"f074a195de705325","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-19T18:49:01.844Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-19T18:49:01.844Z","caller":"etcdserver/server.go:2531","msg":"cluster version is updated","cluster-version":"3.5"}
	
	
	==> kernel <==
	 18:53:23 up 9 min,  0 users,  load average: 0.55, 0.56, 0.31
	Linux running-upgrade-409000 5.10.57 #1 SMP PREEMPT Thu Jun 16 21:01:29 UTC 2022 aarch64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	
	==> kube-apiserver [45f3dc60aedc] <==
	I0819 18:49:03.053886       1 shared_informer.go:262] Caches are synced for node_authorizer
	I0819 18:49:03.071721       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0819 18:49:03.072543       1 shared_informer.go:262] Caches are synced for cluster_authentication_trust_controller
	I0819 18:49:03.072563       1 apf_controller.go:322] Running API Priority and Fairness config worker
	I0819 18:49:03.072569       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0819 18:49:03.073120       1 cache.go:39] Caches are synced for autoregister controller
	I0819 18:49:03.093923       1 shared_informer.go:262] Caches are synced for crd-autoregister
	I0819 18:49:03.807034       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0819 18:49:03.981942       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0819 18:49:03.988346       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0819 18:49:03.988377       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0819 18:49:04.128673       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0819 18:49:04.138751       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0819 18:49:04.228712       1 alloc.go:327] "allocated clusterIPs" service="default/kubernetes" clusterIPs=map[IPv4:10.96.0.1]
	W0819 18:49:04.231226       1 lease.go:234] Resetting endpoints for master service "kubernetes" to [10.0.2.15]
	I0819 18:49:04.231614       1 controller.go:611] quota admission added evaluator for: endpoints
	I0819 18:49:04.232930       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0819 18:49:05.127387       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0819 18:49:05.824773       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0819 18:49:05.827677       1 alloc.go:327] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs=map[IPv4:10.96.0.10]
	I0819 18:49:05.831735       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0819 18:49:05.893295       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I0819 18:49:18.564014       1 controller.go:611] quota admission added evaluator for: replicasets.apps
	I0819 18:49:18.613727       1 controller.go:611] quota admission added evaluator for: controllerrevisions.apps
	I0819 18:49:19.698331       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	
	
	==> kube-controller-manager [b3325243ca65] <==
	I0819 18:49:18.111045       1 shared_informer.go:262] Caches are synced for TTL
	I0819 18:49:18.111050       1 shared_informer.go:262] Caches are synced for bootstrap_signer
	I0819 18:49:18.111898       1 shared_informer.go:262] Caches are synced for taint
	I0819 18:49:18.111961       1 node_lifecycle_controller.go:1399] Initializing eviction metric for zone: 
	W0819 18:49:18.112005       1 node_lifecycle_controller.go:1014] Missing timestamp for Node running-upgrade-409000. Assuming now as a timestamp.
	I0819 18:49:18.112048       1 node_lifecycle_controller.go:1215] Controller detected that zone  is now in state Normal.
	I0819 18:49:18.111912       1 shared_informer.go:262] Caches are synced for stateful set
	I0819 18:49:18.112110       1 taint_manager.go:187] "Starting NoExecuteTaintManager"
	I0819 18:49:18.112208       1 event.go:294] "Event occurred" object="running-upgrade-409000" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node running-upgrade-409000 event: Registered Node running-upgrade-409000 in Controller"
	I0819 18:49:18.114141       1 shared_informer.go:262] Caches are synced for ReplicaSet
	I0819 18:49:18.117277       1 shared_informer.go:262] Caches are synced for persistent volume
	I0819 18:49:18.120079       1 shared_informer.go:262] Caches are synced for attach detach
	I0819 18:49:18.245341       1 shared_informer.go:262] Caches are synced for namespace
	I0819 18:49:18.262379       1 shared_informer.go:262] Caches are synced for service account
	I0819 18:49:18.311266       1 shared_informer.go:262] Caches are synced for resource quota
	I0819 18:49:18.311269       1 shared_informer.go:262] Caches are synced for endpoint_slice_mirroring
	I0819 18:49:18.312403       1 shared_informer.go:262] Caches are synced for endpoint
	I0819 18:49:18.314519       1 shared_informer.go:262] Caches are synced for resource quota
	I0819 18:49:18.566025       1 event.go:294] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-6d4b75cb6d to 2"
	I0819 18:49:18.618801       1 event.go:294] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-7g92r"
	I0819 18:49:18.730301       1 shared_informer.go:262] Caches are synced for garbage collector
	I0819 18:49:18.811595       1 shared_informer.go:262] Caches are synced for garbage collector
	I0819 18:49:18.811609       1 garbagecollector.go:158] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0819 18:49:19.113190       1 event.go:294] "Event occurred" object="kube-system/coredns-6d4b75cb6d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-6d4b75cb6d-bm9nn"
	I0819 18:49:19.117736       1 event.go:294] "Event occurred" object="kube-system/coredns-6d4b75cb6d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-6d4b75cb6d-dngnn"
	
	
	==> kube-proxy [26ea981cbd0c] <==
	I0819 18:49:19.687047       1 node.go:163] Successfully retrieved node IP: 10.0.2.15
	I0819 18:49:19.687074       1 server_others.go:138] "Detected node IP" address="10.0.2.15"
	I0819 18:49:19.687083       1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0819 18:49:19.696254       1 server_others.go:199] "kube-proxy running in single-stack mode, this ipFamily is not supported" ipFamily=IPv6
	I0819 18:49:19.696265       1 server_others.go:206] "Using iptables Proxier"
	I0819 18:49:19.696325       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0819 18:49:19.696484       1 server.go:661] "Version info" version="v1.24.1"
	I0819 18:49:19.696510       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0819 18:49:19.696772       1 config.go:317] "Starting service config controller"
	I0819 18:49:19.696783       1 shared_informer.go:255] Waiting for caches to sync for service config
	I0819 18:49:19.696790       1 config.go:226] "Starting endpoint slice config controller"
	I0819 18:49:19.696816       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
	I0819 18:49:19.697079       1 config.go:444] "Starting node config controller"
	I0819 18:49:19.697101       1 shared_informer.go:255] Waiting for caches to sync for node config
	I0819 18:49:19.797631       1 shared_informer.go:262] Caches are synced for node config
	I0819 18:49:19.797674       1 shared_informer.go:262] Caches are synced for service config
	I0819 18:49:19.797733       1 shared_informer.go:262] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [2cfe471024b0] <==
	W0819 18:49:03.035751       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0819 18:49:03.035759       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0819 18:49:03.035778       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0819 18:49:03.035785       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0819 18:49:03.035797       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0819 18:49:03.035800       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0819 18:49:03.035811       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0819 18:49:03.035814       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0819 18:49:03.035829       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0819 18:49:03.035832       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0819 18:49:03.035848       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0819 18:49:03.035855       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0819 18:49:03.035880       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0819 18:49:03.035921       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0819 18:49:03.866770       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0819 18:49:03.866808       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0819 18:49:03.946960       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0819 18:49:03.947031       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0819 18:49:03.975615       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0819 18:49:03.975660       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0819 18:49:03.975738       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0819 18:49:03.975751       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0819 18:49:04.076436       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0819 18:49:04.076526       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0819 18:49:04.634840       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	-- Journal begins at Mon 2024-08-19 18:43:51 UTC, ends at Mon 2024-08-19 18:53:23 UTC. --
	Aug 19 18:49:18 running-upgrade-409000 kubelet[12572]: I0819 18:49:18.224304   12572 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/fd2335f4-37cf-4af9-9eb3-3947d08f366b-tmp\") pod \"storage-provisioner\" (UID: \"fd2335f4-37cf-4af9-9eb3-3947d08f366b\") " pod="kube-system/storage-provisioner"
	Aug 19 18:49:18 running-upgrade-409000 kubelet[12572]: I0819 18:49:18.224328   12572 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-chl98\" (UniqueName: \"kubernetes.io/projected/fd2335f4-37cf-4af9-9eb3-3947d08f366b-kube-api-access-chl98\") pod \"storage-provisioner\" (UID: \"fd2335f4-37cf-4af9-9eb3-3947d08f366b\") " pod="kube-system/storage-provisioner"
	Aug 19 18:49:18 running-upgrade-409000 kubelet[12572]: E0819 18:49:18.327970   12572 projected.go:286] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Aug 19 18:49:18 running-upgrade-409000 kubelet[12572]: E0819 18:49:18.327988   12572 projected.go:192] Error preparing data for projected volume kube-api-access-chl98 for pod kube-system/storage-provisioner: configmap "kube-root-ca.crt" not found
	Aug 19 18:49:18 running-upgrade-409000 kubelet[12572]: E0819 18:49:18.328020   12572 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/projected/fd2335f4-37cf-4af9-9eb3-3947d08f366b-kube-api-access-chl98 podName:fd2335f4-37cf-4af9-9eb3-3947d08f366b nodeName:}" failed. No retries permitted until 2024-08-19 18:49:18.828007679 +0000 UTC m=+13.021150016 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-chl98" (UniqueName: "kubernetes.io/projected/fd2335f4-37cf-4af9-9eb3-3947d08f366b-kube-api-access-chl98") pod "storage-provisioner" (UID: "fd2335f4-37cf-4af9-9eb3-3947d08f366b") : configmap "kube-root-ca.crt" not found
	Aug 19 18:49:18 running-upgrade-409000 kubelet[12572]: I0819 18:49:18.623906   12572 topology_manager.go:200] "Topology Admit Handler"
	Aug 19 18:49:18 running-upgrade-409000 kubelet[12572]: I0819 18:49:18.728714   12572 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/769b31ca-b8d6-46ee-9ad5-26d0677df7d6-xtables-lock\") pod \"kube-proxy-7g92r\" (UID: \"769b31ca-b8d6-46ee-9ad5-26d0677df7d6\") " pod="kube-system/kube-proxy-7g92r"
	Aug 19 18:49:18 running-upgrade-409000 kubelet[12572]: I0819 18:49:18.728764   12572 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5f5n9\" (UniqueName: \"kubernetes.io/projected/769b31ca-b8d6-46ee-9ad5-26d0677df7d6-kube-api-access-5f5n9\") pod \"kube-proxy-7g92r\" (UID: \"769b31ca-b8d6-46ee-9ad5-26d0677df7d6\") " pod="kube-system/kube-proxy-7g92r"
	Aug 19 18:49:18 running-upgrade-409000 kubelet[12572]: I0819 18:49:18.728777   12572 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/769b31ca-b8d6-46ee-9ad5-26d0677df7d6-lib-modules\") pod \"kube-proxy-7g92r\" (UID: \"769b31ca-b8d6-46ee-9ad5-26d0677df7d6\") " pod="kube-system/kube-proxy-7g92r"
	Aug 19 18:49:18 running-upgrade-409000 kubelet[12572]: I0819 18:49:18.728788   12572 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/769b31ca-b8d6-46ee-9ad5-26d0677df7d6-kube-proxy\") pod \"kube-proxy-7g92r\" (UID: \"769b31ca-b8d6-46ee-9ad5-26d0677df7d6\") " pod="kube-system/kube-proxy-7g92r"
	Aug 19 18:49:18 running-upgrade-409000 kubelet[12572]: E0819 18:49:18.829937   12572 projected.go:286] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Aug 19 18:49:18 running-upgrade-409000 kubelet[12572]: E0819 18:49:18.829955   12572 projected.go:192] Error preparing data for projected volume kube-api-access-chl98 for pod kube-system/storage-provisioner: configmap "kube-root-ca.crt" not found
	Aug 19 18:49:18 running-upgrade-409000 kubelet[12572]: E0819 18:49:18.829977   12572 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/projected/fd2335f4-37cf-4af9-9eb3-3947d08f366b-kube-api-access-chl98 podName:fd2335f4-37cf-4af9-9eb3-3947d08f366b nodeName:}" failed. No retries permitted until 2024-08-19 18:49:19.829967716 +0000 UTC m=+14.023110054 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-chl98" (UniqueName: "kubernetes.io/projected/fd2335f4-37cf-4af9-9eb3-3947d08f366b-kube-api-access-chl98") pod "storage-provisioner" (UID: "fd2335f4-37cf-4af9-9eb3-3947d08f366b") : configmap "kube-root-ca.crt" not found
	Aug 19 18:49:18 running-upgrade-409000 kubelet[12572]: E0819 18:49:18.833372   12572 projected.go:286] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Aug 19 18:49:18 running-upgrade-409000 kubelet[12572]: E0819 18:49:18.833429   12572 projected.go:192] Error preparing data for projected volume kube-api-access-5f5n9 for pod kube-system/kube-proxy-7g92r: configmap "kube-root-ca.crt" not found
	Aug 19 18:49:18 running-upgrade-409000 kubelet[12572]: E0819 18:49:18.833460   12572 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/projected/769b31ca-b8d6-46ee-9ad5-26d0677df7d6-kube-api-access-5f5n9 podName:769b31ca-b8d6-46ee-9ad5-26d0677df7d6 nodeName:}" failed. No retries permitted until 2024-08-19 18:49:19.333452392 +0000 UTC m=+13.526594730 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-5f5n9" (UniqueName: "kubernetes.io/projected/769b31ca-b8d6-46ee-9ad5-26d0677df7d6-kube-api-access-5f5n9") pod "kube-proxy-7g92r" (UID: "769b31ca-b8d6-46ee-9ad5-26d0677df7d6") : configmap "kube-root-ca.crt" not found
	Aug 19 18:49:19 running-upgrade-409000 kubelet[12572]: I0819 18:49:19.115877   12572 topology_manager.go:200] "Topology Admit Handler"
	Aug 19 18:49:19 running-upgrade-409000 kubelet[12572]: I0819 18:49:19.119196   12572 topology_manager.go:200] "Topology Admit Handler"
	Aug 19 18:49:19 running-upgrade-409000 kubelet[12572]: I0819 18:49:19.232854   12572 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7f6c28cc-f88e-4931-90c6-fc000939aa7c-config-volume\") pod \"coredns-6d4b75cb6d-bm9nn\" (UID: \"7f6c28cc-f88e-4931-90c6-fc000939aa7c\") " pod="kube-system/coredns-6d4b75cb6d-bm9nn"
	Aug 19 18:49:19 running-upgrade-409000 kubelet[12572]: I0819 18:49:19.232881   12572 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zng2l\" (UniqueName: \"kubernetes.io/projected/0b7a3efd-040a-478b-8f59-e738e64268da-kube-api-access-zng2l\") pod \"coredns-6d4b75cb6d-dngnn\" (UID: \"0b7a3efd-040a-478b-8f59-e738e64268da\") " pod="kube-system/coredns-6d4b75cb6d-dngnn"
	Aug 19 18:49:19 running-upgrade-409000 kubelet[12572]: I0819 18:49:19.232930   12572 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0b7a3efd-040a-478b-8f59-e738e64268da-config-volume\") pod \"coredns-6d4b75cb6d-dngnn\" (UID: \"0b7a3efd-040a-478b-8f59-e738e64268da\") " pod="kube-system/coredns-6d4b75cb6d-dngnn"
	Aug 19 18:49:19 running-upgrade-409000 kubelet[12572]: I0819 18:49:19.232944   12572 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gjwj2\" (UniqueName: \"kubernetes.io/projected/7f6c28cc-f88e-4931-90c6-fc000939aa7c-kube-api-access-gjwj2\") pod \"coredns-6d4b75cb6d-bm9nn\" (UID: \"7f6c28cc-f88e-4931-90c6-fc000939aa7c\") " pod="kube-system/coredns-6d4b75cb6d-bm9nn"
	Aug 19 18:49:20 running-upgrade-409000 kubelet[12572]: I0819 18:49:20.076273   12572 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="1f12b226246a305ca1818ce956b6dd8966b5d749ea5b39840c7898942d7b8781"
	Aug 19 18:53:07 running-upgrade-409000 kubelet[12572]: I0819 18:53:07.585481   12572 scope.go:110] "RemoveContainer" containerID="db6dffb8b3eadac4243f8b02f83d3bde65b274d09c6284b189d9205260922842"
	Aug 19 18:53:07 running-upgrade-409000 kubelet[12572]: I0819 18:53:07.596429   12572 scope.go:110] "RemoveContainer" containerID="3d2d44fde489e03f7e08d5d5ee51d112180a86cba0bafde93d02fa5f5bb62f37"
	
	
	==> storage-provisioner [74c6a87ae3c8] <==
	I0819 18:49:20.191213       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0819 18:49:20.228394       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0819 18:49:20.228726       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0819 18:49:20.233890       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0819 18:49:20.235001       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"1308cc2b-9efb-4378-9fa7-e3b1d4eef233", APIVersion:"v1", ResourceVersion:"370", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' running-upgrade-409000_b2184bc1-bafa-49ec-ad41-9d32556cb9b4 became leader
	I0819 18:49:20.235065       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_running-upgrade-409000_b2184bc1-bafa-49ec-ad41-9d32556cb9b4!
	I0819 18:49:20.336421       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_running-upgrade-409000_b2184bc1-bafa-49ec-ad41-9d32556cb9b4!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.APIServer}} -p running-upgrade-409000 -n running-upgrade-409000
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.APIServer}} -p running-upgrade-409000 -n running-upgrade-409000: exit status 2 (15.656925458s)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "running-upgrade-409000" apiserver is not running, skipping kubectl commands (state="Stopped")
helpers_test.go:175: Cleaning up "running-upgrade-409000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p running-upgrade-409000
--- FAIL: TestRunningBinaryUpgrade (613.08s)
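Note: TestRunningBinaryUpgrade failed with the upgraded cluster's apiserver still reporting Stopped. For local triage, the one failing test can be re-run in isolation from a minikube checkout via the standard go test runner; a minimal sketch (the -timeout value is an assumption, and any repo-specific harness flags, e.g. the path to the built minikube binary, are omitted here):

	go test -v -timeout 120m -run TestRunningBinaryUpgrade ./test/integration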

TestKubernetesUpgrade (17.68s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-darwin-arm64 start -p kubernetes-upgrade-246000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:222: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubernetes-upgrade-246000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (10.152310334s)

-- stdout --
	* [kubernetes-upgrade-246000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19423
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19423-17178/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19423-17178/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "kubernetes-upgrade-246000" primary control-plane node in "kubernetes-upgrade-246000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kubernetes-upgrade-246000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0819 11:46:26.461251   19479 out.go:345] Setting OutFile to fd 1 ...
	I0819 11:46:26.461379   19479 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 11:46:26.461382   19479 out.go:358] Setting ErrFile to fd 2...
	I0819 11:46:26.461384   19479 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 11:46:26.461521   19479 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19423-17178/.minikube/bin
	I0819 11:46:26.462633   19479 out.go:352] Setting JSON to false
	I0819 11:46:26.479482   19479 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":8153,"bootTime":1724085033,"procs":508,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0819 11:46:26.479557   19479 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0819 11:46:26.486219   19479 out.go:177] * [kubernetes-upgrade-246000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0819 11:46:26.493222   19479 out.go:177]   - MINIKUBE_LOCATION=19423
	I0819 11:46:26.493273   19479 notify.go:220] Checking for updates...
	I0819 11:46:26.499163   19479 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19423-17178/kubeconfig
	I0819 11:46:26.502200   19479 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0819 11:46:26.505199   19479 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0819 11:46:26.508122   19479 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19423-17178/.minikube
	I0819 11:46:26.511193   19479 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0819 11:46:26.514570   19479 config.go:182] Loaded profile config "multinode-587000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0819 11:46:26.514637   19479 config.go:182] Loaded profile config "running-upgrade-409000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0819 11:46:26.514692   19479 driver.go:394] Setting default libvirt URI to qemu:///system
	I0819 11:46:26.519166   19479 out.go:177] * Using the qemu2 driver based on user configuration
	I0819 11:46:26.526170   19479 start.go:297] selected driver: qemu2
	I0819 11:46:26.526180   19479 start.go:901] validating driver "qemu2" against <nil>
	I0819 11:46:26.526186   19479 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0819 11:46:26.528538   19479 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0819 11:46:26.531097   19479 out.go:177] * Automatically selected the socket_vmnet network
	I0819 11:46:26.534310   19479 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0819 11:46:26.534357   19479 cni.go:84] Creating CNI manager for ""
	I0819 11:46:26.534364   19479 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0819 11:46:26.534386   19479 start.go:340] cluster config:
	{Name:kubernetes-upgrade-246000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-246000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 11:46:26.537974   19479 iso.go:125] acquiring lock: {Name:mk19d2f9dcacbbe3e95275f970a96b2d84f09461 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 11:46:26.542217   19479 out.go:177] * Starting "kubernetes-upgrade-246000" primary control-plane node in "kubernetes-upgrade-246000" cluster
	I0819 11:46:26.550114   19479 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0819 11:46:26.550131   19479 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19423-17178/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0819 11:46:26.550138   19479 cache.go:56] Caching tarball of preloaded images
	I0819 11:46:26.550188   19479 preload.go:172] Found /Users/jenkins/minikube-integration/19423-17178/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0819 11:46:26.550193   19479 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0819 11:46:26.550245   19479 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19423-17178/.minikube/profiles/kubernetes-upgrade-246000/config.json ...
	I0819 11:46:26.550255   19479 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19423-17178/.minikube/profiles/kubernetes-upgrade-246000/config.json: {Name:mk075e136bca8957311b02ffad32d96596de7b17 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 11:46:26.550555   19479 start.go:360] acquireMachinesLock for kubernetes-upgrade-246000: {Name:mk51682daf9d132f21e1aba1b32ed96e7f05425f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 11:46:26.550585   19479 start.go:364] duration metric: took 23.667µs to acquireMachinesLock for "kubernetes-upgrade-246000"
	I0819 11:46:26.550595   19479 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-246000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-246000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0819 11:46:26.550616   19479 start.go:125] createHost starting for "" (driver="qemu2")
	I0819 11:46:26.555152   19479 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0819 11:46:26.580339   19479 start.go:159] libmachine.API.Create for "kubernetes-upgrade-246000" (driver="qemu2")
	I0819 11:46:26.580370   19479 client.go:168] LocalClient.Create starting
	I0819 11:46:26.580449   19479 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19423-17178/.minikube/certs/ca.pem
	I0819 11:46:26.580486   19479 main.go:141] libmachine: Decoding PEM data...
	I0819 11:46:26.580494   19479 main.go:141] libmachine: Parsing certificate...
	I0819 11:46:26.580539   19479 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19423-17178/.minikube/certs/cert.pem
	I0819 11:46:26.580563   19479 main.go:141] libmachine: Decoding PEM data...
	I0819 11:46:26.580572   19479 main.go:141] libmachine: Parsing certificate...
	I0819 11:46:26.580915   19479 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19423-17178/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19423-17178/.minikube/cache/iso/arm64/minikube-v1.33.1-1723740674-19452-arm64.iso...
	I0819 11:46:27.089463   19479 main.go:141] libmachine: Creating SSH key...
	I0819 11:46:27.115945   19479 main.go:141] libmachine: Creating Disk image...
	I0819 11:46:27.115951   19479 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0819 11:46:27.116157   19479 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/kubernetes-upgrade-246000/disk.qcow2.raw /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/kubernetes-upgrade-246000/disk.qcow2
	I0819 11:46:27.132300   19479 main.go:141] libmachine: STDOUT: 
	I0819 11:46:27.132323   19479 main.go:141] libmachine: STDERR: 
	I0819 11:46:27.132374   19479 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/kubernetes-upgrade-246000/disk.qcow2 +20000M
	I0819 11:46:27.140529   19479 main.go:141] libmachine: STDOUT: Image resized.
	
	I0819 11:46:27.140543   19479 main.go:141] libmachine: STDERR: 
	I0819 11:46:27.140562   19479 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/kubernetes-upgrade-246000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/kubernetes-upgrade-246000/disk.qcow2
	I0819 11:46:27.140572   19479 main.go:141] libmachine: Starting QEMU VM...
	I0819 11:46:27.140583   19479 qemu.go:418] Using hvf for hardware acceleration
	I0819 11:46:27.140606   19479 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/kubernetes-upgrade-246000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19423-17178/.minikube/machines/kubernetes-upgrade-246000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/kubernetes-upgrade-246000/qemu.pid -device virtio-net-pci,netdev=net0,mac=7e:1a:2d:da:4a:15 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/kubernetes-upgrade-246000/disk.qcow2
	I0819 11:46:27.142160   19479 main.go:141] libmachine: STDOUT: 
	I0819 11:46:27.142174   19479 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0819 11:46:27.142193   19479 client.go:171] duration metric: took 561.818792ms to LocalClient.Create
	I0819 11:46:29.144398   19479 start.go:128] duration metric: took 2.593766958s to createHost
	I0819 11:46:29.144478   19479 start.go:83] releasing machines lock for "kubernetes-upgrade-246000", held for 2.593896833s
	W0819 11:46:29.144602   19479 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0819 11:46:29.159875   19479 out.go:177] * Deleting "kubernetes-upgrade-246000" in qemu2 ...
	W0819 11:46:29.189935   19479 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0819 11:46:29.189962   19479 start.go:729] Will try again in 5 seconds ...
	I0819 11:46:34.192190   19479 start.go:360] acquireMachinesLock for kubernetes-upgrade-246000: {Name:mk51682daf9d132f21e1aba1b32ed96e7f05425f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 11:46:34.192775   19479 start.go:364] duration metric: took 477.917µs to acquireMachinesLock for "kubernetes-upgrade-246000"
	I0819 11:46:34.192971   19479 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-246000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-246000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0819 11:46:34.193361   19479 start.go:125] createHost starting for "" (driver="qemu2")
	I0819 11:46:34.198120   19479 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0819 11:46:34.248598   19479 start.go:159] libmachine.API.Create for "kubernetes-upgrade-246000" (driver="qemu2")
	I0819 11:46:34.248651   19479 client.go:168] LocalClient.Create starting
	I0819 11:46:34.248775   19479 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19423-17178/.minikube/certs/ca.pem
	I0819 11:46:34.248839   19479 main.go:141] libmachine: Decoding PEM data...
	I0819 11:46:34.248859   19479 main.go:141] libmachine: Parsing certificate...
	I0819 11:46:34.248925   19479 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19423-17178/.minikube/certs/cert.pem
	I0819 11:46:34.248972   19479 main.go:141] libmachine: Decoding PEM data...
	I0819 11:46:34.248982   19479 main.go:141] libmachine: Parsing certificate...
	I0819 11:46:34.249525   19479 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19423-17178/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19423-17178/.minikube/cache/iso/arm64/minikube-v1.33.1-1723740674-19452-arm64.iso...
	I0819 11:46:34.413119   19479 main.go:141] libmachine: Creating SSH key...
	I0819 11:46:34.529317   19479 main.go:141] libmachine: Creating Disk image...
	I0819 11:46:34.529325   19479 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0819 11:46:34.529590   19479 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/kubernetes-upgrade-246000/disk.qcow2.raw /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/kubernetes-upgrade-246000/disk.qcow2
	I0819 11:46:34.538982   19479 main.go:141] libmachine: STDOUT: 
	I0819 11:46:34.539002   19479 main.go:141] libmachine: STDERR: 
	I0819 11:46:34.539057   19479 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/kubernetes-upgrade-246000/disk.qcow2 +20000M
	I0819 11:46:34.547264   19479 main.go:141] libmachine: STDOUT: Image resized.
	
	I0819 11:46:34.547283   19479 main.go:141] libmachine: STDERR: 
	I0819 11:46:34.547297   19479 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/kubernetes-upgrade-246000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/kubernetes-upgrade-246000/disk.qcow2
	I0819 11:46:34.547301   19479 main.go:141] libmachine: Starting QEMU VM...
	I0819 11:46:34.547312   19479 qemu.go:418] Using hvf for hardware acceleration
	I0819 11:46:34.547338   19479 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/kubernetes-upgrade-246000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19423-17178/.minikube/machines/kubernetes-upgrade-246000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/kubernetes-upgrade-246000/qemu.pid -device virtio-net-pci,netdev=net0,mac=22:d7:18:b1:0b:f0 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/kubernetes-upgrade-246000/disk.qcow2
	I0819 11:46:34.549072   19479 main.go:141] libmachine: STDOUT: 
	I0819 11:46:34.549091   19479 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0819 11:46:34.549104   19479 client.go:171] duration metric: took 300.449333ms to LocalClient.Create
	I0819 11:46:36.551218   19479 start.go:128] duration metric: took 2.357836667s to createHost
	I0819 11:46:36.551267   19479 start.go:83] releasing machines lock for "kubernetes-upgrade-246000", held for 2.358430458s
	W0819 11:46:36.551366   19479 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-246000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-246000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0819 11:46:36.559640   19479 out.go:201] 
	W0819 11:46:36.565662   19479 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0819 11:46:36.565669   19479 out.go:270] * 
	* 
	W0819 11:46:36.566392   19479 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0819 11:46:36.575585   19479 out.go:201] 

** /stderr **
version_upgrade_test.go:224: failed to start minikube HEAD with oldest k8s version: out/minikube-darwin-arm64 start -p kubernetes-upgrade-246000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=qemu2 : exit status 80
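Note: both VM create attempts above died at the same step: the socket_vmnet_client wrapper could not connect to /var/run/socket_vmnet before launching qemu-system-aarch64, so no VM ever booted. A minimal host-side triage sketch (hypothetical commands; the restart line assumes a Homebrew-managed socket_vmnet service and would differ for a manual install under /opt/socket_vmnet like the client path shown here):

	ls -l /var/run/socket_vmnet              # the listening socket should exist
	pgrep -fl socket_vmnet                   # the daemon should be running
	sudo brew services restart socket_vmnet  # restart if it is not (needs root)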
version_upgrade_test.go:227: (dbg) Run:  out/minikube-darwin-arm64 stop -p kubernetes-upgrade-246000
version_upgrade_test.go:227: (dbg) Done: out/minikube-darwin-arm64 stop -p kubernetes-upgrade-246000: (2.128731208s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-darwin-arm64 -p kubernetes-upgrade-246000 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p kubernetes-upgrade-246000 status --format={{.Host}}: exit status 7 (63.057416ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 start -p kubernetes-upgrade-246000 --memory=2200 --kubernetes-version=v1.31.0 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubernetes-upgrade-246000 --memory=2200 --kubernetes-version=v1.31.0 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (5.182710208s)

-- stdout --
	* [kubernetes-upgrade-246000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19423
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19423-17178/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19423-17178/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "kubernetes-upgrade-246000" primary control-plane node in "kubernetes-upgrade-246000" cluster
	* Restarting existing qemu2 VM for "kubernetes-upgrade-246000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "kubernetes-upgrade-246000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0819 11:46:38.809514   19507 out.go:345] Setting OutFile to fd 1 ...
	I0819 11:46:38.809670   19507 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 11:46:38.809674   19507 out.go:358] Setting ErrFile to fd 2...
	I0819 11:46:38.809676   19507 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 11:46:38.809805   19507 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19423-17178/.minikube/bin
	I0819 11:46:38.810833   19507 out.go:352] Setting JSON to false
	I0819 11:46:38.827313   19507 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":8165,"bootTime":1724085033,"procs":508,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0819 11:46:38.827382   19507 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0819 11:46:38.832497   19507 out.go:177] * [kubernetes-upgrade-246000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0819 11:46:38.839466   19507 out.go:177]   - MINIKUBE_LOCATION=19423
	I0819 11:46:38.839523   19507 notify.go:220] Checking for updates...
	I0819 11:46:38.846341   19507 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19423-17178/kubeconfig
	I0819 11:46:38.850405   19507 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0819 11:46:38.853444   19507 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0819 11:46:38.856320   19507 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19423-17178/.minikube
	I0819 11:46:38.859526   19507 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0819 11:46:38.862697   19507 config.go:182] Loaded profile config "kubernetes-upgrade-246000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I0819 11:46:38.862946   19507 driver.go:394] Setting default libvirt URI to qemu:///system
	I0819 11:46:38.866391   19507 out.go:177] * Using the qemu2 driver based on existing profile
	I0819 11:46:38.873428   19507 start.go:297] selected driver: qemu2
	I0819 11:46:38.873433   19507 start.go:901] validating driver "qemu2" against &{Name:kubernetes-upgrade-246000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-246000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 11:46:38.873481   19507 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0819 11:46:38.875511   19507 cni.go:84] Creating CNI manager for ""
	I0819 11:46:38.875528   19507 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0819 11:46:38.875557   19507 start.go:340] cluster config:
	{Name:kubernetes-upgrade-246000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:kubernetes-upgrade-246000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 11:46:38.878726   19507 iso.go:125] acquiring lock: {Name:mk19d2f9dcacbbe3e95275f970a96b2d84f09461 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 11:46:38.886461   19507 out.go:177] * Starting "kubernetes-upgrade-246000" primary control-plane node in "kubernetes-upgrade-246000" cluster
	I0819 11:46:38.890403   19507 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0819 11:46:38.890416   19507 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19423-17178/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0819 11:46:38.890420   19507 cache.go:56] Caching tarball of preloaded images
	I0819 11:46:38.890470   19507 preload.go:172] Found /Users/jenkins/minikube-integration/19423-17178/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0819 11:46:38.890475   19507 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0819 11:46:38.890524   19507 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19423-17178/.minikube/profiles/kubernetes-upgrade-246000/config.json ...
	I0819 11:46:38.890925   19507 start.go:360] acquireMachinesLock for kubernetes-upgrade-246000: {Name:mk51682daf9d132f21e1aba1b32ed96e7f05425f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 11:46:38.890951   19507 start.go:364] duration metric: took 20.667µs to acquireMachinesLock for "kubernetes-upgrade-246000"
	I0819 11:46:38.890964   19507 start.go:96] Skipping create...Using existing machine configuration
	I0819 11:46:38.890968   19507 fix.go:54] fixHost starting: 
	I0819 11:46:38.891081   19507 fix.go:112] recreateIfNeeded on kubernetes-upgrade-246000: state=Stopped err=<nil>
	W0819 11:46:38.891091   19507 fix.go:138] unexpected machine state, will restart: <nil>
	I0819 11:46:38.894444   19507 out.go:177] * Restarting existing qemu2 VM for "kubernetes-upgrade-246000" ...
	I0819 11:46:38.902379   19507 qemu.go:418] Using hvf for hardware acceleration
	I0819 11:46:38.902427   19507 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/kubernetes-upgrade-246000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19423-17178/.minikube/machines/kubernetes-upgrade-246000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/kubernetes-upgrade-246000/qemu.pid -device virtio-net-pci,netdev=net0,mac=22:d7:18:b1:0b:f0 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/kubernetes-upgrade-246000/disk.qcow2
	I0819 11:46:38.904280   19507 main.go:141] libmachine: STDOUT: 
	I0819 11:46:38.904304   19507 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0819 11:46:38.904332   19507 fix.go:56] duration metric: took 13.363084ms for fixHost
	I0819 11:46:38.904336   19507 start.go:83] releasing machines lock for "kubernetes-upgrade-246000", held for 13.380708ms
	W0819 11:46:38.904342   19507 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0819 11:46:38.904379   19507 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0819 11:46:38.904383   19507 start.go:729] Will try again in 5 seconds ...
	I0819 11:46:43.906533   19507 start.go:360] acquireMachinesLock for kubernetes-upgrade-246000: {Name:mk51682daf9d132f21e1aba1b32ed96e7f05425f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 11:46:43.907035   19507 start.go:364] duration metric: took 410.166µs to acquireMachinesLock for "kubernetes-upgrade-246000"
	I0819 11:46:43.907191   19507 start.go:96] Skipping create...Using existing machine configuration
	I0819 11:46:43.907236   19507 fix.go:54] fixHost starting: 
	I0819 11:46:43.907961   19507 fix.go:112] recreateIfNeeded on kubernetes-upgrade-246000: state=Stopped err=<nil>
	W0819 11:46:43.907988   19507 fix.go:138] unexpected machine state, will restart: <nil>
	I0819 11:46:43.913432   19507 out.go:177] * Restarting existing qemu2 VM for "kubernetes-upgrade-246000" ...
	I0819 11:46:43.921325   19507 qemu.go:418] Using hvf for hardware acceleration
	I0819 11:46:43.921509   19507 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/kubernetes-upgrade-246000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19423-17178/.minikube/machines/kubernetes-upgrade-246000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/kubernetes-upgrade-246000/qemu.pid -device virtio-net-pci,netdev=net0,mac=22:d7:18:b1:0b:f0 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/kubernetes-upgrade-246000/disk.qcow2
	I0819 11:46:43.928855   19507 main.go:141] libmachine: STDOUT: 
	I0819 11:46:43.928924   19507 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0819 11:46:43.928998   19507 fix.go:56] duration metric: took 21.790042ms for fixHost
	I0819 11:46:43.929013   19507 start.go:83] releasing machines lock for "kubernetes-upgrade-246000", held for 21.954875ms
	W0819 11:46:43.929186   19507 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-246000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0819 11:46:43.936492   19507 out.go:201] 
	W0819 11:46:43.940438   19507 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0819 11:46:43.940459   19507 out.go:270] * 
	W0819 11:46:43.941852   19507 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0819 11:46:43.951350   19507 out.go:201] 

** /stderr **
version_upgrade_test.go:245: failed to upgrade with newest k8s version. args: out/minikube-darwin-arm64 start -p kubernetes-upgrade-246000 --memory=2200 --kubernetes-version=v1.31.0 --alsologtostderr -v=1 --driver=qemu2  : exit status 80
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-246000 version --output=json
version_upgrade_test.go:248: (dbg) Non-zero exit: kubectl --context kubernetes-upgrade-246000 version --output=json: exit status 1 (39.6805ms)

** stderr ** 
	error: context "kubernetes-upgrade-246000" does not exist

** /stderr **
version_upgrade_test.go:250: error running kubectl: exit status 1
panic.go:626: *** TestKubernetesUpgrade FAILED at 2024-08-19 11:46:44.002029 -0700 PDT m=+928.188372835
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p kubernetes-upgrade-246000 -n kubernetes-upgrade-246000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p kubernetes-upgrade-246000 -n kubernetes-upgrade-246000: exit status 7 (33.443666ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "kubernetes-upgrade-246000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "kubernetes-upgrade-246000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p kubernetes-upgrade-246000
--- FAIL: TestKubernetesUpgrade (17.68s)
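
Both start attempts in the failure above die at the same point: the qemu2 driver launches the VM through /opt/socket_vmnet/bin/socket_vmnet_client, and the dial of /var/run/socket_vmnet is refused, which means the socket_vmnet daemon was not running on this agent. The socket can be probed directly before (or between) start attempts; the following is a minimal Go sketch of such a probe, assuming a standalone helper that is not part of the minikube test suite:

    package main

    import (
        "fmt"
        "net"
        "os"
        "time"
    )

    func main() {
        // Socket path taken from the failing qemu2 invocation above.
        const sock = "/var/run/socket_vmnet"
        conn, err := net.DialTimeout("unix", sock, 2*time.Second)
        if err != nil {
            // The same "connection refused" condition that aborts the test.
            fmt.Fprintf(os.Stderr, "socket_vmnet unreachable: %v\n", err)
            os.Exit(1)
        }
        conn.Close()
        fmt.Println("socket_vmnet is accepting connections")
    }

If the probe fails, restarting the socket_vmnet daemon on the build host would likely let the 5-second retry succeed instead of exhausting both attempts and exiting with status 80.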

TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (1.95s)

=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current
* minikube v1.33.1 on darwin (arm64)
- MINIKUBE_LOCATION=19423
- KUBECONFIG=/Users/jenkins/minikube-integration/19423-17178/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current2871753891/001
* Using the hyperkit driver based on user configuration

X Exiting due to DRV_UNSUPPORTED_OS: The driver 'hyperkit' is not supported on darwin/arm64

driver_install_or_update_test.go:209: failed to run minikube. got: exit status 56
--- FAIL: TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (1.95s)

TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (1.67s)

=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current
* minikube v1.33.1 on darwin (arm64)
- MINIKUBE_LOCATION=19423
- KUBECONFIG=/Users/jenkins/minikube-integration/19423-17178/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current3652661904/001
* Using the hyperkit driver based on user configuration

X Exiting due to DRV_UNSUPPORTED_OS: The driver 'hyperkit' is not supported on darwin/arm64

driver_install_or_update_test.go:209: failed to run minikube. got: exit status 56
--- FAIL: TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (1.67s)
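
Both TestHyperkitDriverSkipUpgrade subtests fail identically: hyperkit is an Intel-only hypervisor, so minikube rejects the driver on darwin/arm64 with DRV_UNSUPPORTED_OS and exit status 56. A simplified sketch of this kind of platform guard (an illustration only, not minikube's actual check):

    package main

    import (
        "fmt"
        "os"
        "runtime"
    )

    // hyperkitSupported reports whether the hyperkit driver can run at all;
    // hyperkit is built only for Intel Macs (darwin/amd64).
    func hyperkitSupported() bool {
        return runtime.GOOS == "darwin" && runtime.GOARCH == "amd64"
    }

    func main() {
        if !hyperkitSupported() {
            fmt.Printf("The driver 'hyperkit' is not supported on %s/%s\n",
                runtime.GOOS, runtime.GOARCH)
            os.Exit(56) // the exit code observed in both subtests
        }
    }

On arm64 agents these two subtests can never pass, so skipping them (for example, gated on runtime.GOARCH) would be more informative than counting them as failures.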

TestStoppedBinaryUpgrade/Upgrade (575.5s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.26.0.2364185691 start -p stopped-upgrade-604000 --memory=2200 --vm-driver=qemu2 
version_upgrade_test.go:183: (dbg) Done: /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.26.0.2364185691 start -p stopped-upgrade-604000 --memory=2200 --vm-driver=qemu2 : (40.649509125s)
version_upgrade_test.go:192: (dbg) Run:  /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.26.0.2364185691 -p stopped-upgrade-604000 stop
version_upgrade_test.go:192: (dbg) Done: /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.26.0.2364185691 -p stopped-upgrade-604000 stop: (12.113412083s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-darwin-arm64 start -p stopped-upgrade-604000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:198: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p stopped-upgrade-604000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (8m42.622797625s)

-- stdout --
	* [stopped-upgrade-604000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19423
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19423-17178/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19423-17178/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.31.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.0
	* Using the qemu2 driver based on existing profile
	* Starting "stopped-upgrade-604000" primary control-plane node in "stopped-upgrade-604000" cluster
	* Restarting existing qemu2 VM for "stopped-upgrade-604000" ...
	* Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: storage-provisioner
	
	

-- /stdout --
** stderr ** 
	I0819 11:47:38.062992   19545 out.go:345] Setting OutFile to fd 1 ...
	I0819 11:47:38.063131   19545 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 11:47:38.063135   19545 out.go:358] Setting ErrFile to fd 2...
	I0819 11:47:38.063138   19545 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 11:47:38.063284   19545 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19423-17178/.minikube/bin
	I0819 11:47:38.064378   19545 out.go:352] Setting JSON to false
	I0819 11:47:38.082571   19545 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":8225,"bootTime":1724085033,"procs":507,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0819 11:47:38.082641   19545 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0819 11:47:38.087776   19545 out.go:177] * [stopped-upgrade-604000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0819 11:47:38.095754   19545 out.go:177]   - MINIKUBE_LOCATION=19423
	I0819 11:47:38.095802   19545 notify.go:220] Checking for updates...
	I0819 11:47:38.102620   19545 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19423-17178/kubeconfig
	I0819 11:47:38.105755   19545 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0819 11:47:38.108637   19545 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0819 11:47:38.111690   19545 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19423-17178/.minikube
	I0819 11:47:38.114670   19545 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0819 11:47:38.117876   19545 config.go:182] Loaded profile config "stopped-upgrade-604000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0819 11:47:38.120672   19545 out.go:177] * Kubernetes 1.31.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.0
	I0819 11:47:38.123659   19545 driver.go:394] Setting default libvirt URI to qemu:///system
	I0819 11:47:38.127674   19545 out.go:177] * Using the qemu2 driver based on existing profile
	I0819 11:47:38.134696   19545 start.go:297] selected driver: qemu2
	I0819 11:47:38.134701   19545 start.go:901] validating driver "qemu2" against &{Name:stopped-upgrade-604000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:53361 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-604000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0819 11:47:38.134749   19545 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0819 11:47:38.137219   19545 cni.go:84] Creating CNI manager for ""
	I0819 11:47:38.137238   19545 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0819 11:47:38.137266   19545 start.go:340] cluster config:
	{Name:stopped-upgrade-604000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:53361 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-604000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0819 11:47:38.137318   19545 iso.go:125] acquiring lock: {Name:mk19d2f9dcacbbe3e95275f970a96b2d84f09461 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 11:47:38.145689   19545 out.go:177] * Starting "stopped-upgrade-604000" primary control-plane node in "stopped-upgrade-604000" cluster
	I0819 11:47:38.149714   19545 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0819 11:47:38.149733   19545 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19423-17178/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4
	I0819 11:47:38.149742   19545 cache.go:56] Caching tarball of preloaded images
	I0819 11:47:38.149807   19545 preload.go:172] Found /Users/jenkins/minikube-integration/19423-17178/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0819 11:47:38.149813   19545 cache.go:59] Finished verifying existence of preloaded tar for v1.24.1 on docker
	I0819 11:47:38.149876   19545 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19423-17178/.minikube/profiles/stopped-upgrade-604000/config.json ...
	I0819 11:47:38.150314   19545 start.go:360] acquireMachinesLock for stopped-upgrade-604000: {Name:mk51682daf9d132f21e1aba1b32ed96e7f05425f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 11:47:38.150344   19545 start.go:364] duration metric: took 24.417µs to acquireMachinesLock for "stopped-upgrade-604000"
	I0819 11:47:38.150354   19545 start.go:96] Skipping create...Using existing machine configuration
	I0819 11:47:38.150359   19545 fix.go:54] fixHost starting: 
	I0819 11:47:38.150486   19545 fix.go:112] recreateIfNeeded on stopped-upgrade-604000: state=Stopped err=<nil>
	W0819 11:47:38.150494   19545 fix.go:138] unexpected machine state, will restart: <nil>
	I0819 11:47:38.158675   19545 out.go:177] * Restarting existing qemu2 VM for "stopped-upgrade-604000" ...
	I0819 11:47:38.162644   19545 qemu.go:418] Using hvf for hardware acceleration
	I0819 11:47:38.162714   19545 main.go:141] libmachine: executing: qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/9.0.2/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/stopped-upgrade-604000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19423-17178/.minikube/machines/stopped-upgrade-604000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/stopped-upgrade-604000/qemu.pid -nic user,model=virtio,hostfwd=tcp::53326-:22,hostfwd=tcp::53327-:2376,hostname=stopped-upgrade-604000 -daemonize /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/stopped-upgrade-604000/disk.qcow2
	I0819 11:47:38.209512   19545 main.go:141] libmachine: STDOUT: 
	I0819 11:47:38.209563   19545 main.go:141] libmachine: STDERR: 
	I0819 11:47:38.209569   19545 main.go:141] libmachine: Waiting for VM to start (ssh -p 53326 docker@127.0.0.1)...
	I0819 11:47:59.141607   19545 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19423-17178/.minikube/profiles/stopped-upgrade-604000/config.json ...
	I0819 11:47:59.142471   19545 machine.go:93] provisionDockerMachine start ...
	I0819 11:47:59.142646   19545 main.go:141] libmachine: Using SSH client type: native
	I0819 11:47:59.143262   19545 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1045345a0] 0x104536e00 <nil>  [] 0s} localhost 53326 <nil> <nil>}
	I0819 11:47:59.143277   19545 main.go:141] libmachine: About to run SSH command:
	hostname
	I0819 11:47:59.219092   19545 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0819 11:47:59.219127   19545 buildroot.go:166] provisioning hostname "stopped-upgrade-604000"
	I0819 11:47:59.219254   19545 main.go:141] libmachine: Using SSH client type: native
	I0819 11:47:59.219496   19545 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1045345a0] 0x104536e00 <nil>  [] 0s} localhost 53326 <nil> <nil>}
	I0819 11:47:59.219509   19545 main.go:141] libmachine: About to run SSH command:
	sudo hostname stopped-upgrade-604000 && echo "stopped-upgrade-604000" | sudo tee /etc/hostname
	I0819 11:47:59.285807   19545 main.go:141] libmachine: SSH cmd err, output: <nil>: stopped-upgrade-604000
	
	I0819 11:47:59.285861   19545 main.go:141] libmachine: Using SSH client type: native
	I0819 11:47:59.285987   19545 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1045345a0] 0x104536e00 <nil>  [] 0s} localhost 53326 <nil> <nil>}
	I0819 11:47:59.285999   19545 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sstopped-upgrade-604000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 stopped-upgrade-604000/g' /etc/hosts;
				else 
					echo '127.0.1.1 stopped-upgrade-604000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0819 11:47:59.341515   19545 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0819 11:47:59.341526   19545 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19423-17178/.minikube CaCertPath:/Users/jenkins/minikube-integration/19423-17178/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19423-17178/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19423-17178/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19423-17178/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19423-17178/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19423-17178/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19423-17178/.minikube}
	I0819 11:47:59.341537   19545 buildroot.go:174] setting up certificates
	I0819 11:47:59.341541   19545 provision.go:84] configureAuth start
	I0819 11:47:59.341550   19545 provision.go:143] copyHostCerts
	I0819 11:47:59.341612   19545 exec_runner.go:144] found /Users/jenkins/minikube-integration/19423-17178/.minikube/ca.pem, removing ...
	I0819 11:47:59.341618   19545 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19423-17178/.minikube/ca.pem
	I0819 11:47:59.341717   19545 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19423-17178/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19423-17178/.minikube/ca.pem (1082 bytes)
	I0819 11:47:59.341903   19545 exec_runner.go:144] found /Users/jenkins/minikube-integration/19423-17178/.minikube/cert.pem, removing ...
	I0819 11:47:59.341907   19545 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19423-17178/.minikube/cert.pem
	I0819 11:47:59.341953   19545 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19423-17178/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19423-17178/.minikube/cert.pem (1123 bytes)
	I0819 11:47:59.342056   19545 exec_runner.go:144] found /Users/jenkins/minikube-integration/19423-17178/.minikube/key.pem, removing ...
	I0819 11:47:59.342059   19545 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19423-17178/.minikube/key.pem
	I0819 11:47:59.342099   19545 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19423-17178/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19423-17178/.minikube/key.pem (1679 bytes)
	I0819 11:47:59.342188   19545 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19423-17178/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19423-17178/.minikube/certs/ca-key.pem org=jenkins.stopped-upgrade-604000 san=[127.0.0.1 localhost minikube stopped-upgrade-604000]
	I0819 11:47:59.387432   19545 provision.go:177] copyRemoteCerts
	I0819 11:47:59.387472   19545 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0819 11:47:59.387481   19545 sshutil.go:53] new ssh client: &{IP:localhost Port:53326 SSHKeyPath:/Users/jenkins/minikube-integration/19423-17178/.minikube/machines/stopped-upgrade-604000/id_rsa Username:docker}
	I0819 11:47:59.418246   19545 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-17178/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0819 11:47:59.424690   19545 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0819 11:47:59.431180   19545 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0819 11:47:59.438295   19545 provision.go:87] duration metric: took 96.744084ms to configureAuth
	I0819 11:47:59.438304   19545 buildroot.go:189] setting minikube options for container-runtime
	I0819 11:47:59.438418   19545 config.go:182] Loaded profile config "stopped-upgrade-604000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0819 11:47:59.438456   19545 main.go:141] libmachine: Using SSH client type: native
	I0819 11:47:59.438541   19545 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1045345a0] 0x104536e00 <nil>  [] 0s} localhost 53326 <nil> <nil>}
	I0819 11:47:59.438545   19545 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0819 11:47:59.491709   19545 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0819 11:47:59.491717   19545 buildroot.go:70] root file system type: tmpfs
	I0819 11:47:59.491764   19545 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0819 11:47:59.491812   19545 main.go:141] libmachine: Using SSH client type: native
	I0819 11:47:59.491929   19545 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1045345a0] 0x104536e00 <nil>  [] 0s} localhost 53326 <nil> <nil>}
	I0819 11:47:59.491962   19545 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0819 11:47:59.551032   19545 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0819 11:47:59.551080   19545 main.go:141] libmachine: Using SSH client type: native
	I0819 11:47:59.551194   19545 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1045345a0] 0x104536e00 <nil>  [] 0s} localhost 53326 <nil> <nil>}
	I0819 11:47:59.551206   19545 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0819 11:47:59.915884   19545 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0819 11:47:59.915898   19545 machine.go:96] duration metric: took 773.418916ms to provisionDockerMachine
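
The "sudo diff -u ... || { sudo mv ...; sudo systemctl ...; }" command above is an update-if-changed idiom: the freshly rendered unit is written to docker.service.new, and only when it differs from the installed unit (or, as here, no unit exists yet) is it moved into place and followed by daemon-reload, enable, and restart. A minimal Go sketch of the same idiom, using a hypothetical helper rather than minikube's actual provisioner:

    package main

    import (
        "bytes"
        "os"
        "os/exec"
    )

    // updateUnit installs want at path only when it differs from the current
    // contents, then reloads systemd and restarts the service.
    func updateUnit(path, service string, want []byte) error {
        have, _ := os.ReadFile(path) // a missing unit reads as empty, forcing an update
        if bytes.Equal(have, want) {
            return nil // identical: skip daemon-reload and restart entirely
        }
        if err := os.WriteFile(path+".new", want, 0o644); err != nil {
            return err
        }
        if err := os.Rename(path+".new", path); err != nil {
            return err
        }
        if err := exec.Command("systemctl", "daemon-reload").Run(); err != nil {
            return err
        }
        return exec.Command("systemctl", "restart", service).Run()
    }

    func main() {
        _ = updateUnit("/lib/systemd/system/docker.service", "docker", []byte("[Unit]\n"))
    }

Skipping the restart when nothing changed keeps re-provisioning of an already-configured machine cheap, which matters on a retry-heavy CI agent.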
	I0819 11:47:59.915905   19545 start.go:293] postStartSetup for "stopped-upgrade-604000" (driver="qemu2")
	I0819 11:47:59.915911   19545 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0819 11:47:59.915981   19545 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0819 11:47:59.915993   19545 sshutil.go:53] new ssh client: &{IP:localhost Port:53326 SSHKeyPath:/Users/jenkins/minikube-integration/19423-17178/.minikube/machines/stopped-upgrade-604000/id_rsa Username:docker}
	I0819 11:47:59.947472   19545 ssh_runner.go:195] Run: cat /etc/os-release
	I0819 11:47:59.948887   19545 info.go:137] Remote host: Buildroot 2021.02.12
	I0819 11:47:59.948897   19545 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19423-17178/.minikube/addons for local assets ...
	I0819 11:47:59.948980   19545 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19423-17178/.minikube/files for local assets ...
	I0819 11:47:59.949072   19545 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19423-17178/.minikube/files/etc/ssl/certs/176542.pem -> 176542.pem in /etc/ssl/certs
	I0819 11:47:59.949164   19545 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0819 11:47:59.951752   19545 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-17178/.minikube/files/etc/ssl/certs/176542.pem --> /etc/ssl/certs/176542.pem (1708 bytes)
	I0819 11:47:59.958919   19545 start.go:296] duration metric: took 43.009459ms for postStartSetup
	I0819 11:47:59.958933   19545 fix.go:56] duration metric: took 21.808678458s for fixHost
	I0819 11:47:59.958967   19545 main.go:141] libmachine: Using SSH client type: native
	I0819 11:47:59.959073   19545 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1045345a0] 0x104536e00 <nil>  [] 0s} localhost 53326 <nil> <nil>}
	I0819 11:47:59.959077   19545 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0819 11:48:00.011252   19545 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724093280.007042630
	
	I0819 11:48:00.011261   19545 fix.go:216] guest clock: 1724093280.007042630
	I0819 11:48:00.011265   19545 fix.go:229] Guest: 2024-08-19 11:48:00.00704263 -0700 PDT Remote: 2024-08-19 11:47:59.958935 -0700 PDT m=+21.922470459 (delta=48.10763ms)
	I0819 11:48:00.011276   19545 fix.go:200] guest clock delta is within tolerance: 48.10763ms
	I0819 11:48:00.011279   19545 start.go:83] releasing machines lock for "stopped-upgrade-604000", held for 21.86103475s
	I0819 11:48:00.011346   19545 ssh_runner.go:195] Run: cat /version.json
	I0819 11:48:00.011350   19545 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0819 11:48:00.011358   19545 sshutil.go:53] new ssh client: &{IP:localhost Port:53326 SSHKeyPath:/Users/jenkins/minikube-integration/19423-17178/.minikube/machines/stopped-upgrade-604000/id_rsa Username:docker}
	I0819 11:48:00.011375   19545 sshutil.go:53] new ssh client: &{IP:localhost Port:53326 SSHKeyPath:/Users/jenkins/minikube-integration/19423-17178/.minikube/machines/stopped-upgrade-604000/id_rsa Username:docker}
	W0819 11:48:00.012058   19545 sshutil.go:64] dial failure (will retry): dial tcp [::1]:53326: connect: connection refused
	I0819 11:48:00.012074   19545 retry.go:31] will retry after 309.091232ms: dial tcp [::1]:53326: connect: connection refused
	W0819 11:48:00.357567   19545 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0819 11:48:00.357658   19545 ssh_runner.go:195] Run: systemctl --version
	I0819 11:48:00.360480   19545 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0819 11:48:00.363143   19545 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0819 11:48:00.363201   19545 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0819 11:48:00.366987   19545 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0819 11:48:00.372559   19545 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0819 11:48:00.372570   19545 start.go:495] detecting cgroup driver to use...
	I0819 11:48:00.372648   19545 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0819 11:48:00.380366   19545 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.7"|' /etc/containerd/config.toml"
	I0819 11:48:00.383643   19545 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0819 11:48:00.386965   19545 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0819 11:48:00.386989   19545 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0819 11:48:00.390116   19545 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0819 11:48:00.392968   19545 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0819 11:48:00.395671   19545 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0819 11:48:00.398897   19545 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0819 11:48:00.402208   19545 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0819 11:48:00.405282   19545 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0819 11:48:00.407983   19545 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0819 11:48:00.411233   19545 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0819 11:48:00.414195   19545 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0819 11:48:00.416935   19545 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 11:48:00.496189   19545 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0819 11:48:00.506938   19545 start.go:495] detecting cgroup driver to use...
	I0819 11:48:00.507000   19545 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0819 11:48:00.512318   19545 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0819 11:48:00.516956   19545 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0819 11:48:00.525482   19545 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0819 11:48:00.530181   19545 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0819 11:48:00.534637   19545 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0819 11:48:00.594497   19545 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0819 11:48:00.598996   19545 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0819 11:48:00.604453   19545 ssh_runner.go:195] Run: which cri-dockerd
	I0819 11:48:00.605734   19545 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0819 11:48:00.608280   19545 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0819 11:48:00.613276   19545 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0819 11:48:00.693522   19545 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0819 11:48:00.770034   19545 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0819 11:48:00.770109   19545 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0819 11:48:00.775723   19545 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 11:48:00.853763   19545 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0819 11:48:02.005741   19545 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.151967333s)
	I0819 11:48:02.005799   19545 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0819 11:48:02.010635   19545 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0819 11:48:02.015151   19545 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0819 11:48:02.099678   19545 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0819 11:48:02.177760   19545 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 11:48:02.255104   19545 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0819 11:48:02.261254   19545 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0819 11:48:02.265437   19545 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 11:48:02.341549   19545 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0819 11:48:02.379962   19545 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0819 11:48:02.380045   19545 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0819 11:48:02.383672   19545 start.go:563] Will wait 60s for crictl version
	I0819 11:48:02.383729   19545 ssh_runner.go:195] Run: which crictl
	I0819 11:48:02.385336   19545 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0819 11:48:02.400804   19545 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.16
	RuntimeApiVersion:  1.41.0
	I0819 11:48:02.400867   19545 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0819 11:48:02.417626   19545 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0819 11:48:02.437780   19545 out.go:235] * Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	I0819 11:48:02.437844   19545 ssh_runner.go:195] Run: grep 10.0.2.2	host.minikube.internal$ /etc/hosts
	I0819 11:48:02.439098   19545 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "10.0.2.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0819 11:48:02.443122   19545 kubeadm.go:883] updating cluster {Name:stopped-upgrade-604000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:53361 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-604000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s} ...
	I0819 11:48:02.443165   19545 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0819 11:48:02.443207   19545 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0819 11:48:02.453809   19545 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0819 11:48:02.453817   19545 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0819 11:48:02.453870   19545 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0819 11:48:02.456782   19545 ssh_runner.go:195] Run: which lz4
	I0819 11:48:02.457948   19545 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0819 11:48:02.459168   19545 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0819 11:48:02.459181   19545 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-17178/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (359514331 bytes)
	I0819 11:48:03.419834   19545 docker.go:649] duration metric: took 961.927042ms to copy over tarball
	I0819 11:48:03.419895   19545 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0819 11:48:04.580733   19545 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.160824542s)
	I0819 11:48:04.580748   19545 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0819 11:48:04.596182   19545 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0819 11:48:04.599517   19545 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2523 bytes)
	I0819 11:48:04.604627   19545 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 11:48:04.673355   19545 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0819 11:48:06.190127   19545 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.516762042s)
	I0819 11:48:06.190206   19545 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0819 11:48:06.207854   19545 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	<none>:<none>
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0819 11:48:06.207864   19545 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0819 11:48:06.207869   19545 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.1 registry.k8s.io/kube-controller-manager:v1.24.1 registry.k8s.io/kube-scheduler:v1.24.1 registry.k8s.io/kube-proxy:v1.24.1 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0819 11:48:06.211713   19545 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0819 11:48:06.213598   19545 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.24.1
	I0819 11:48:06.215492   19545 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0819 11:48:06.215623   19545 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0819 11:48:06.217385   19545 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.1
	I0819 11:48:06.217466   19545 image.go:135] retrieving image: registry.k8s.io/pause:3.7
	I0819 11:48:06.218922   19545 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.24.1
	I0819 11:48:06.218970   19545 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0819 11:48:06.219968   19545 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0819 11:48:06.220012   19545 image.go:178] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0819 11:48:06.221179   19545 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0819 11:48:06.221210   19545 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.1
	I0819 11:48:06.222312   19545 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0819 11:48:06.222355   19545 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.24.1
	I0819 11:48:06.223200   19545 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0819 11:48:06.223845   19545 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.1
	I0819 11:48:06.670944   19545 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I0819 11:48:06.671444   19545 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.1
	I0819 11:48:06.676091   19545 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	W0819 11:48:06.684378   19545 image.go:283] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0819 11:48:06.684538   19545 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0819 11:48:06.693131   19545 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "a9a710bb96df080e6b9c720eb85dc5b832ff84abf77263548d74fedec6466a5a" in container runtime
	I0819 11:48:06.693165   19545 docker.go:337] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0819 11:48:06.693216   19545 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.5.3-0
	I0819 11:48:06.697082   19545 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.1" needs transfer: "registry.k8s.io/kube-proxy:v1.24.1" does not exist at hash "fcbd620bbac080b658602c597709b377cb2b3fec134a097a27f94cba9b2ed2fa" in container runtime
	I0819 11:48:06.697098   19545 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.24.1
	I0819 11:48:06.697139   19545 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.24.1
	I0819 11:48:06.701481   19545 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.1
	I0819 11:48:06.705315   19545 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550" in container runtime
	I0819 11:48:06.705339   19545 docker.go:337] Removing image: registry.k8s.io/pause:3.7
	I0819 11:48:06.705388   19545 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.7
	I0819 11:48:06.713147   19545 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.1
	I0819 11:48:06.722529   19545 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "6af7f860a8197bfa3fdb7dec2061aa33870253e87a1e91c492d55b8a4fd38d14" in container runtime
	I0819 11:48:06.722554   19545 docker.go:337] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0819 11:48:06.722611   19545 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0819 11:48:06.724685   19545 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.1
	I0819 11:48:06.725980   19545 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19423-17178/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1
	I0819 11:48:06.726002   19545 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19423-17178/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I0819 11:48:06.726103   19545 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0
	I0819 11:48:06.737264   19545 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19423-17178/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0819 11:48:06.737396   19545 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I0819 11:48:06.737409   19545 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.1" does not exist at hash "7c5896a75862a8bc252122185a929cec1393db2c525f6440137d4fbf46bbf6f9" in container runtime
	I0819 11:48:06.737425   19545 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.24.1
	I0819 11:48:06.737465   19545 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.24.1
	I0819 11:48:06.746613   19545 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.1" does not exist at hash "000c19baf6bba51ff7ae5449f4c7a16d9190cef0263f58070dbf62cea9c4982f" in container runtime
	I0819 11:48:06.746634   19545 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.24.1
	I0819 11:48:06.746691   19545 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.24.1
	I0819 11:48:06.751121   19545 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19423-17178/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0819 11:48:06.751244   19545 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I0819 11:48:06.753145   19545 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.1" does not exist at hash "f61bbe9259d7caa580deb6c8e4bfd1780c7b5887efe3aa3adc7cc74f68a27c1b" in container runtime
	I0819 11:48:06.753151   19545 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.5.3-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.5.3-0': No such file or directory
	I0819 11:48:06.753166   19545 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0819 11:48:06.753173   19545 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-17178/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 --> /var/lib/minikube/images/etcd_3.5.3-0 (81117184 bytes)
	I0819 11:48:06.753181   19545 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.7: stat -c "%s %y" /var/lib/minikube/images/pause_3.7: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.7': No such file or directory
	I0819 11:48:06.753193   19545 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-17178/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 --> /var/lib/minikube/images/pause_3.7 (268288 bytes)
	I0819 11:48:06.753205   19545 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.24.1
	I0819 11:48:06.772340   19545 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19423-17178/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1
	I0819 11:48:06.774673   19545 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19423-17178/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1
	I0819 11:48:06.774700   19545 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.8.6: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.8.6': No such file or directory
	I0819 11:48:06.774715   19545 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-17178/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 --> /var/lib/minikube/images/coredns_v1.8.6 (12318720 bytes)
	I0819 11:48:06.787266   19545 docker.go:304] Loading image: /var/lib/minikube/images/pause_3.7
	I0819 11:48:06.787282   19545 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.7 | docker load"
	I0819 11:48:06.789611   19545 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19423-17178/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1
	I0819 11:48:06.860968   19545 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19423-17178/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 from cache
	W0819 11:48:06.863217   19545 image.go:283] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0819 11:48:06.863314   19545 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0819 11:48:06.869527   19545 docker.go:304] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0819 11:48:06.869538   19545 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.8.6 | docker load"
	I0819 11:48:06.900719   19545 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I0819 11:48:06.900744   19545 docker.go:337] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0819 11:48:06.900812   19545 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0819 11:48:06.963600   19545 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19423-17178/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I0819 11:48:06.966581   19545 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19423-17178/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0819 11:48:06.966702   19545 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0819 11:48:06.980011   19545 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I0819 11:48:06.980043   19545 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-17178/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I0819 11:48:07.046480   19545 docker.go:304] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0819 11:48:07.046498   19545 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load"
	I0819 11:48:07.394228   19545 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19423-17178/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0819 11:48:07.394252   19545 docker.go:304] Loading image: /var/lib/minikube/images/etcd_3.5.3-0
	I0819 11:48:07.394260   19545 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/etcd_3.5.3-0 | docker load"
	I0819 11:48:07.534750   19545 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19423-17178/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 from cache
	I0819 11:48:07.534790   19545 cache_images.go:92] duration metric: took 1.326920583s to LoadCachedImages
	W0819 11:48:07.534839   19545 out.go:270] X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/19423-17178/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/19423-17178/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1: no such file or directory
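
Each cached image above goes through the same pipeline: inspect its ID in the guest's Docker daemon, remove the copy whose hash (or architecture) does not match, stat the staging path under /var/lib/minikube/images, scp the arm64 tarball in if absent, and stream it into the daemon with `cat | docker load`. A condensed Go sketch of the inspect-then-load steps, under the same caveat that the real commands run over SSH rather than locally:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // imageID mirrors `docker image inspect --format {{.Id}}`; an empty
    // string means the image is absent from the container runtime.
    func imageID(ref string) string {
    	out, err := exec.Command("docker", "image", "inspect",
    		"--format", "{{.Id}}", ref).Output()
    	if err != nil {
    		return ""
    	}
    	return strings.TrimSpace(string(out))
    }

    // loadImage mirrors the `sudo cat <tar> | docker load` pipeline from
    // the log, which streams the tarball to the daemon without giving the
    // docker CLI direct read access to the file.
    func loadImage(tarPath string) error {
    	cmd := exec.Command("/bin/bash", "-c",
    		fmt.Sprintf("sudo cat %s | docker load", tarPath))
    	if out, err := cmd.CombinedOutput(); err != nil {
    		return fmt.Errorf("docker load: %v: %s", err, out)
    	}
    	return nil
    }

    func main() {
    	if imageID("registry.k8s.io/pause:3.7") == "" {
    		// Image missing (or removed after a hash mismatch): reload it.
    		if err := loadImage("/var/lib/minikube/images/pause_3.7"); err != nil {
    			fmt.Println(err)
    		}
    	}
    }
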
	I0819 11:48:07.534845   19545 kubeadm.go:934] updating node { 10.0.2.15 8443 v1.24.1 docker true true} ...
	I0819 11:48:07.534900   19545 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=stopped-upgrade-604000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.0.2.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-604000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0819 11:48:07.534964   19545 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0819 11:48:07.548514   19545 cni.go:84] Creating CNI manager for ""
	I0819 11:48:07.548528   19545 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0819 11:48:07.548536   19545 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0819 11:48:07.548544   19545 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.0.2.15 APIServerPort:8443 KubernetesVersion:v1.24.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:stopped-upgrade-604000 NodeName:stopped-upgrade-604000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.0.2.15"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.0.2.15 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0819 11:48:07.548606   19545 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.0.2.15
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "stopped-upgrade-604000"
	  kubeletExtraArgs:
	    node-ip: 10.0.2.15
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.0.2.15"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0819 11:48:07.548662   19545 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.1
	I0819 11:48:07.551611   19545 binaries.go:44] Found k8s binaries, skipping transfer
	I0819 11:48:07.551640   19545 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0819 11:48:07.554748   19545 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I0819 11:48:07.559909   19545 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0819 11:48:07.564981   19545 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2096 bytes)
	I0819 11:48:07.570045   19545 ssh_runner.go:195] Run: grep 10.0.2.15	control-plane.minikube.internal$ /etc/hosts
	I0819 11:48:07.571382   19545 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "10.0.2.15	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
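
The bash one-liner above makes the hosts entry idempotent: strip any existing control-plane.minikube.internal line, append the current IP mapping, and install the temp file back over /etc/hosts with sudo. The same pattern in Go, sketched against an arbitrary hostsPath so it can be tried on a scratch file; note the real flow writes to /tmp/h.$$ first and copies it into place with sudo cp rather than writing directly:

    package main

    import (
    	"fmt"
    	"os"
    	"strings"
    )

    // ensureHostsEntry drops any line already tab-suffixed with host and
    // appends a fresh "ip<TAB>host" mapping, so repeated runs converge on
    // exactly one entry.
    func ensureHostsEntry(hostsPath, ip, host string) error {
    	data, err := os.ReadFile(hostsPath)
    	if err != nil {
    		return err
    	}
    	var kept []string
    	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
    		if !strings.HasSuffix(line, "\t"+host) {
    			kept = append(kept, line)
    		}
    	}
    	kept = append(kept, ip+"\t"+host)
    	return os.WriteFile(hostsPath, []byte(strings.Join(kept, "\n")+"\n"), 0644)
    }

    func main() {
    	// Try it against a scratch copy, not the real /etc/hosts.
    	if err := ensureHostsEntry("hosts.test", "10.0.2.15", "control-plane.minikube.internal"); err != nil {
    		fmt.Println(err)
    	}
    }
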
	I0819 11:48:07.575272   19545 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 11:48:07.652911   19545 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0819 11:48:07.662383   19545 certs.go:68] Setting up /Users/jenkins/minikube-integration/19423-17178/.minikube/profiles/stopped-upgrade-604000 for IP: 10.0.2.15
	I0819 11:48:07.662393   19545 certs.go:194] generating shared ca certs ...
	I0819 11:48:07.662402   19545 certs.go:226] acquiring lock for ca certs: {Name:mk011f5d2dbb88087ec73da4d5406de1c263092b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 11:48:07.662565   19545 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19423-17178/.minikube/ca.key
	I0819 11:48:07.662609   19545 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19423-17178/.minikube/proxy-client-ca.key
	I0819 11:48:07.662614   19545 certs.go:256] generating profile certs ...
	I0819 11:48:07.662677   19545 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19423-17178/.minikube/profiles/stopped-upgrade-604000/client.key
	I0819 11:48:07.662697   19545 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19423-17178/.minikube/profiles/stopped-upgrade-604000/apiserver.key.287f64a6
	I0819 11:48:07.662705   19545 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19423-17178/.minikube/profiles/stopped-upgrade-604000/apiserver.crt.287f64a6 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.0.2.15]
	I0819 11:48:07.743846   19545 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19423-17178/.minikube/profiles/stopped-upgrade-604000/apiserver.crt.287f64a6 ...
	I0819 11:48:07.743862   19545 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19423-17178/.minikube/profiles/stopped-upgrade-604000/apiserver.crt.287f64a6: {Name:mkce586ba565d84314129b208c6d671e64385521 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 11:48:07.744186   19545 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19423-17178/.minikube/profiles/stopped-upgrade-604000/apiserver.key.287f64a6 ...
	I0819 11:48:07.744195   19545 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19423-17178/.minikube/profiles/stopped-upgrade-604000/apiserver.key.287f64a6: {Name:mkde12f695304baaf9217221c44d62f8633d153d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 11:48:07.744333   19545 certs.go:381] copying /Users/jenkins/minikube-integration/19423-17178/.minikube/profiles/stopped-upgrade-604000/apiserver.crt.287f64a6 -> /Users/jenkins/minikube-integration/19423-17178/.minikube/profiles/stopped-upgrade-604000/apiserver.crt
	I0819 11:48:07.746444   19545 certs.go:385] copying /Users/jenkins/minikube-integration/19423-17178/.minikube/profiles/stopped-upgrade-604000/apiserver.key.287f64a6 -> /Users/jenkins/minikube-integration/19423-17178/.minikube/profiles/stopped-upgrade-604000/apiserver.key
	I0819 11:48:07.746603   19545 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19423-17178/.minikube/profiles/stopped-upgrade-604000/proxy-client.key
	I0819 11:48:07.746733   19545 certs.go:484] found cert: /Users/jenkins/minikube-integration/19423-17178/.minikube/certs/17654.pem (1338 bytes)
	W0819 11:48:07.746755   19545 certs.go:480] ignoring /Users/jenkins/minikube-integration/19423-17178/.minikube/certs/17654_empty.pem, impossibly tiny 0 bytes
	I0819 11:48:07.746760   19545 certs.go:484] found cert: /Users/jenkins/minikube-integration/19423-17178/.minikube/certs/ca-key.pem (1675 bytes)
	I0819 11:48:07.746798   19545 certs.go:484] found cert: /Users/jenkins/minikube-integration/19423-17178/.minikube/certs/ca.pem (1082 bytes)
	I0819 11:48:07.746817   19545 certs.go:484] found cert: /Users/jenkins/minikube-integration/19423-17178/.minikube/certs/cert.pem (1123 bytes)
	I0819 11:48:07.746835   19545 certs.go:484] found cert: /Users/jenkins/minikube-integration/19423-17178/.minikube/certs/key.pem (1679 bytes)
	I0819 11:48:07.746873   19545 certs.go:484] found cert: /Users/jenkins/minikube-integration/19423-17178/.minikube/files/etc/ssl/certs/176542.pem (1708 bytes)
	I0819 11:48:07.747219   19545 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-17178/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0819 11:48:07.754279   19545 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-17178/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0819 11:48:07.761161   19545 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-17178/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0819 11:48:07.767798   19545 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-17178/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0819 11:48:07.774561   19545 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-17178/.minikube/profiles/stopped-upgrade-604000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0819 11:48:07.781585   19545 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-17178/.minikube/profiles/stopped-upgrade-604000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0819 11:48:07.788399   19545 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-17178/.minikube/profiles/stopped-upgrade-604000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0819 11:48:07.795352   19545 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-17178/.minikube/profiles/stopped-upgrade-604000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0819 11:48:07.802817   19545 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-17178/.minikube/files/etc/ssl/certs/176542.pem --> /usr/share/ca-certificates/176542.pem (1708 bytes)
	I0819 11:48:07.809955   19545 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-17178/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0819 11:48:07.816444   19545 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-17178/.minikube/certs/17654.pem --> /usr/share/ca-certificates/17654.pem (1338 bytes)
	I0819 11:48:07.823341   19545 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0819 11:48:07.828679   19545 ssh_runner.go:195] Run: openssl version
	I0819 11:48:07.830545   19545 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/176542.pem && ln -fs /usr/share/ca-certificates/176542.pem /etc/ssl/certs/176542.pem"
	I0819 11:48:07.833689   19545 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/176542.pem
	I0819 11:48:07.835150   19545 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 19 18:32 /usr/share/ca-certificates/176542.pem
	I0819 11:48:07.835173   19545 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/176542.pem
	I0819 11:48:07.837077   19545 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/176542.pem /etc/ssl/certs/3ec20f2e.0"
	I0819 11:48:07.840052   19545 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0819 11:48:07.843408   19545 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0819 11:48:07.844889   19545 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 19 18:44 /usr/share/ca-certificates/minikubeCA.pem
	I0819 11:48:07.844907   19545 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0819 11:48:07.846599   19545 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0819 11:48:07.849916   19545 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/17654.pem && ln -fs /usr/share/ca-certificates/17654.pem /etc/ssl/certs/17654.pem"
	I0819 11:48:07.852907   19545 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/17654.pem
	I0819 11:48:07.854362   19545 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 19 18:32 /usr/share/ca-certificates/17654.pem
	I0819 11:48:07.854388   19545 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/17654.pem
	I0819 11:48:07.856379   19545 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/17654.pem /etc/ssl/certs/51391683.0"
	I0819 11:48:07.859770   19545 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0819 11:48:07.861382   19545 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0819 11:48:07.863515   19545 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0819 11:48:07.865588   19545 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0819 11:48:07.867651   19545 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0819 11:48:07.869564   19545 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0819 11:48:07.871334   19545 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
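
The run of `openssl x509 -noout -checkend 86400` calls above asks, for each control-plane certificate, whether it expires within the next 24 hours; a failing exit status is what triggers regeneration. The equivalent test in Go using crypto/x509, as a sketch rather than minikube's actual helper (the path in main is one of the files checked above):

    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    	"time"
    )

    // expiresWithin reports whether the PEM certificate at path expires
    // inside the given window -- the same question `openssl x509 -checkend`
    // answers via its exit status.
    func expiresWithin(path string, window time.Duration) (bool, error) {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return false, err
    	}
    	block, _ := pem.Decode(data)
    	if block == nil {
    		return false, fmt.Errorf("%s: no PEM block", path)
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		return false, err
    	}
    	return time.Now().Add(window).After(cert.NotAfter), nil
    }

    func main() {
    	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
    	fmt.Println(soon, err)
    }
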
	I0819 11:48:07.873316   19545 kubeadm.go:392] StartCluster: {Name:stopped-upgrade-604000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:53361 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-604000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0819 11:48:07.873384   19545 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0819 11:48:07.883296   19545 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0819 11:48:07.886585   19545 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0819 11:48:07.886593   19545 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0819 11:48:07.886616   19545 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0819 11:48:07.889301   19545 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0819 11:48:07.889596   19545 kubeconfig.go:47] verify endpoint returned: get endpoint: "stopped-upgrade-604000" does not appear in /Users/jenkins/minikube-integration/19423-17178/kubeconfig
	I0819 11:48:07.889688   19545 kubeconfig.go:62] /Users/jenkins/minikube-integration/19423-17178/kubeconfig needs updating (will repair): [kubeconfig missing "stopped-upgrade-604000" cluster setting kubeconfig missing "stopped-upgrade-604000" context setting]
	I0819 11:48:07.889877   19545 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19423-17178/kubeconfig: {Name:mkcd8a4d29cc5f324e197a69fe511a87d17c54d6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 11:48:07.890305   19545 kapi.go:59] client config for stopped-upgrade-604000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19423-17178/.minikube/profiles/stopped-upgrade-604000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19423-17178/.minikube/profiles/stopped-upgrade-604000/client.key", CAFile:"/Users/jenkins/minikube-integration/19423-17178/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x105aed990), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0819 11:48:07.890629   19545 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0819 11:48:07.893159   19545 kubeadm.go:640] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -11,7 +11,7 @@
	       - signing
	       - authentication
	 nodeRegistration:
	-  criSocket: /var/run/cri-dockerd.sock
	+  criSocket: unix:///var/run/cri-dockerd.sock
	   name: "stopped-upgrade-604000"
	   kubeletExtraArgs:
	     node-ip: 10.0.2.15
	@@ -49,7 +49,9 @@
	 authentication:
	   x509:
	     clientCAFile: /var/lib/minikube/certs/ca.crt
	-cgroupDriver: systemd
	+cgroupDriver: cgroupfs
	+hairpinMode: hairpin-veth
	+runtimeRequestTimeout: 15m
	 clusterDomain: "cluster.local"
	 # disable disk resource management by default
	 imageGCHighThresholdPercent: 100
	
	-- /stdout --
	I0819 11:48:07.893168   19545 kubeadm.go:1160] stopping kube-system containers ...
	I0819 11:48:07.893205   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0819 11:48:07.903756   19545 docker.go:483] Stopping containers: [04973e14da79 07703ddc91e4 7a7ed811dead e935629bad41 e5fb176acee3 e9101e64955c 16596966724a bb9919797493]
	I0819 11:48:07.903818   19545 ssh_runner.go:195] Run: docker stop 04973e14da79 07703ddc91e4 7a7ed811dead e935629bad41 e5fb176acee3 e9101e64955c 16596966724a bb9919797493
	I0819 11:48:07.914827   19545 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0819 11:48:07.920653   19545 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0819 11:48:07.923663   19545 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0819 11:48:07.923669   19545 kubeadm.go:157] found existing configuration files:
	
	I0819 11:48:07.923691   19545 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:53361 /etc/kubernetes/admin.conf
	I0819 11:48:07.926769   19545 kubeadm.go:163] "https://control-plane.minikube.internal:53361" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:53361 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0819 11:48:07.926792   19545 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0819 11:48:07.929728   19545 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:53361 /etc/kubernetes/kubelet.conf
	I0819 11:48:07.932150   19545 kubeadm.go:163] "https://control-plane.minikube.internal:53361" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:53361 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0819 11:48:07.932173   19545 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0819 11:48:07.935225   19545 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:53361 /etc/kubernetes/controller-manager.conf
	I0819 11:48:07.938132   19545 kubeadm.go:163] "https://control-plane.minikube.internal:53361" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:53361 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0819 11:48:07.938154   19545 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0819 11:48:07.940618   19545 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:53361 /etc/kubernetes/scheduler.conf
	I0819 11:48:07.943489   19545 kubeadm.go:163] "https://control-plane.minikube.internal:53361" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:53361 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0819 11:48:07.943513   19545 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0819 11:48:07.946511   19545 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0819 11:48:07.949257   19545 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0819 11:48:07.969901   19545 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0819 11:48:08.438752   19545 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0819 11:48:08.572862   19545 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0819 11:48:08.602887   19545 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
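
Because existing configuration files were found, the restart replays individual kubeadm init phases (certs, kubeconfig, kubelet-start, control-plane, etcd) instead of a full init, exactly as the five commands above show. A sketch of driving that phase sequence, mirroring the PATH-prefixed invocations in the log; stopping at the first failing phase is an assumption of the sketch, not confirmed behavior:

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // runPhases replays the kubeadm init phases in the order the log shows.
    func runPhases(version, config string, phases []string) error {
    	for _, p := range phases {
    		cmd := fmt.Sprintf(
    			`sudo env PATH="/var/lib/minikube/binaries/%s:$PATH" kubeadm init phase %s --config %s`,
    			version, p, config)
    		if out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput(); err != nil {
    			return fmt.Errorf("phase %q: %v: %s", p, err, out)
    		}
    	}
    	return nil
    }

    func main() {
    	err := runPhases("v1.24.1", "/var/tmp/minikube/kubeadm.yaml",
    		[]string{"certs all", "kubeconfig all", "kubelet-start", "control-plane all", "etcd local"})
    	fmt.Println(err)
    }
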
	I0819 11:48:08.631752   19545 api_server.go:52] waiting for apiserver process to appear ...
	I0819 11:48:08.631845   19545 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 11:48:09.133158   19545 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 11:48:09.633911   19545 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 11:48:09.639216   19545 api_server.go:72] duration metric: took 1.007472458s to wait for apiserver process to appear ...
	I0819 11:48:09.639229   19545 api_server.go:88] waiting for apiserver healthz status ...
	I0819 11:48:09.639238   19545 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:48:14.639783   19545 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:48:14.639842   19545 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:48:19.641288   19545 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:48:19.641334   19545 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:48:24.641757   19545 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:48:24.641809   19545 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:48:29.642354   19545 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:48:29.642390   19545 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:48:34.642898   19545 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:48:34.642933   19545 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:48:39.643658   19545 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:48:39.643698   19545 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:48:44.644955   19545 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:48:44.645007   19545 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:48:49.646661   19545 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:48:49.646728   19545 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:48:54.647475   19545 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:48:54.647558   19545 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:48:59.648908   19545 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:48:59.648933   19545 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:49:04.649447   19545 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:49:04.649482   19545 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:49:09.651761   19545 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
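
After roughly a dozen consecutive healthz timeouts, the flow switches to the fallback diagnostics that follow: harvesting logs from every kube-system container. The polling loop itself is a plain HTTP GET with a short client timeout; a minimal sketch under stated assumptions (the 5-second timeout is inferred from the log's spacing, and InsecureSkipVerify stands in for the real client config at kapi.go:59, which pins the cluster CA):

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"net/http"
    	"time"
    )

    // pollHealthz retries GET <url> until it answers 200 or attempts run
    // out, matching the repeated "Checking apiserver healthz" lines above.
    func pollHealthz(url string, attempts int) bool {
    	client := &http.Client{
    		Timeout: 5 * time.Second, // produces the Client.Timeout errors seen in the log
    		Transport: &http.Transport{
    			TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // sketch only
    		},
    	}
    	for i := 0; i < attempts; i++ {
    		resp, err := client.Get(url)
    		if err != nil {
    			continue
    		}
    		resp.Body.Close()
    		if resp.StatusCode == http.StatusOK {
    			return true
    		}
    	}
    	return false
    }

    func main() {
    	fmt.Println(pollHealthz("https://10.0.2.15:8443/healthz", 12))
    }
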
	I0819 11:49:09.652203   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 11:49:09.685001   19545 logs.go:276] 2 containers: [af251a7e4dc2 04973e14da79]
	I0819 11:49:09.685137   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 11:49:09.704680   19545 logs.go:276] 2 containers: [605f9b171d46 07703ddc91e4]
	I0819 11:49:09.704778   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 11:49:09.718731   19545 logs.go:276] 1 containers: [088a7d76a3fd]
	I0819 11:49:09.718829   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 11:49:09.731460   19545 logs.go:276] 2 containers: [bdb8b0d63638 7a7ed811dead]
	I0819 11:49:09.731534   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 11:49:09.742526   19545 logs.go:276] 1 containers: [03dd95625e48]
	I0819 11:49:09.742597   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 11:49:09.754124   19545 logs.go:276] 2 containers: [d9195d59990e e935629bad41]
	I0819 11:49:09.754203   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 11:49:09.764968   19545 logs.go:276] 0 containers: []
	W0819 11:49:09.764984   19545 logs.go:278] No container was found matching "kindnet"
	I0819 11:49:09.765039   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 11:49:09.776441   19545 logs.go:276] 2 containers: [fd0be420d5f9 02af235bd51f]
	I0819 11:49:09.776462   19545 logs.go:123] Gathering logs for etcd [07703ddc91e4] ...
	I0819 11:49:09.776468   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 07703ddc91e4"
	I0819 11:49:09.792476   19545 logs.go:123] Gathering logs for kube-scheduler [7a7ed811dead] ...
	I0819 11:49:09.792485   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a7ed811dead"
	I0819 11:49:09.807756   19545 logs.go:123] Gathering logs for kube-proxy [03dd95625e48] ...
	I0819 11:49:09.807768   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03dd95625e48"
	I0819 11:49:09.819835   19545 logs.go:123] Gathering logs for coredns [088a7d76a3fd] ...
	I0819 11:49:09.819847   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 088a7d76a3fd"
	I0819 11:49:09.831004   19545 logs.go:123] Gathering logs for kube-controller-manager [e935629bad41] ...
	I0819 11:49:09.831022   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e935629bad41"
	I0819 11:49:09.846370   19545 logs.go:123] Gathering logs for Docker ...
	I0819 11:49:09.846383   19545 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 11:49:09.872374   19545 logs.go:123] Gathering logs for kubelet ...
	I0819 11:49:09.872383   19545 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 11:49:09.909495   19545 logs.go:123] Gathering logs for describe nodes ...
	I0819 11:49:09.909507   19545 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 11:49:10.016164   19545 logs.go:123] Gathering logs for kube-apiserver [04973e14da79] ...
	I0819 11:49:10.016179   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 04973e14da79"
	I0819 11:49:10.046241   19545 logs.go:123] Gathering logs for container status ...
	I0819 11:49:10.046251   19545 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 11:49:10.060208   19545 logs.go:123] Gathering logs for dmesg ...
	I0819 11:49:10.060222   19545 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 11:49:10.064474   19545 logs.go:123] Gathering logs for kube-scheduler [bdb8b0d63638] ...
	I0819 11:49:10.064483   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bdb8b0d63638"
	I0819 11:49:10.076080   19545 logs.go:123] Gathering logs for kube-controller-manager [d9195d59990e] ...
	I0819 11:49:10.076092   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9195d59990e"
	I0819 11:49:10.094365   19545 logs.go:123] Gathering logs for storage-provisioner [02af235bd51f] ...
	I0819 11:49:10.094379   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 02af235bd51f"
	I0819 11:49:10.106499   19545 logs.go:123] Gathering logs for kube-apiserver [af251a7e4dc2] ...
	I0819 11:49:10.106511   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af251a7e4dc2"
	I0819 11:49:10.120419   19545 logs.go:123] Gathering logs for etcd [605f9b171d46] ...
	I0819 11:49:10.120432   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 605f9b171d46"
	I0819 11:49:10.134552   19545 logs.go:123] Gathering logs for storage-provisioner [fd0be420d5f9] ...
	I0819 11:49:10.134563   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd0be420d5f9"
	I0819 11:49:12.647593   19545 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:49:17.649450   19545 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:49:17.649531   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 11:49:17.660632   19545 logs.go:276] 2 containers: [af251a7e4dc2 04973e14da79]
	I0819 11:49:17.660704   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 11:49:17.671847   19545 logs.go:276] 2 containers: [605f9b171d46 07703ddc91e4]
	I0819 11:49:17.671926   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 11:49:17.682541   19545 logs.go:276] 1 containers: [088a7d76a3fd]
	I0819 11:49:17.682612   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 11:49:17.693650   19545 logs.go:276] 2 containers: [bdb8b0d63638 7a7ed811dead]
	I0819 11:49:17.693717   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 11:49:17.704207   19545 logs.go:276] 1 containers: [03dd95625e48]
	I0819 11:49:17.704276   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 11:49:17.714708   19545 logs.go:276] 2 containers: [d9195d59990e e935629bad41]
	I0819 11:49:17.714778   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 11:49:17.729684   19545 logs.go:276] 0 containers: []
	W0819 11:49:17.729695   19545 logs.go:278] No container was found matching "kindnet"
	I0819 11:49:17.729753   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 11:49:17.740971   19545 logs.go:276] 2 containers: [fd0be420d5f9 02af235bd51f]
	I0819 11:49:17.740990   19545 logs.go:123] Gathering logs for kube-apiserver [af251a7e4dc2] ...
	I0819 11:49:17.740995   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af251a7e4dc2"
	I0819 11:49:17.755518   19545 logs.go:123] Gathering logs for etcd [605f9b171d46] ...
	I0819 11:49:17.755531   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 605f9b171d46"
	I0819 11:49:17.771109   19545 logs.go:123] Gathering logs for coredns [088a7d76a3fd] ...
	I0819 11:49:17.771119   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 088a7d76a3fd"
	I0819 11:49:17.783085   19545 logs.go:123] Gathering logs for kube-scheduler [bdb8b0d63638] ...
	I0819 11:49:17.783096   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bdb8b0d63638"
	I0819 11:49:17.801763   19545 logs.go:123] Gathering logs for kube-proxy [03dd95625e48] ...
	I0819 11:49:17.801774   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03dd95625e48"
	I0819 11:49:17.813969   19545 logs.go:123] Gathering logs for kube-controller-manager [d9195d59990e] ...
	I0819 11:49:17.813979   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9195d59990e"
	I0819 11:49:17.831394   19545 logs.go:123] Gathering logs for storage-provisioner [02af235bd51f] ...
	I0819 11:49:17.831406   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 02af235bd51f"
	I0819 11:49:17.848914   19545 logs.go:123] Gathering logs for dmesg ...
	I0819 11:49:17.848927   19545 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 11:49:17.853561   19545 logs.go:123] Gathering logs for kube-apiserver [04973e14da79] ...
	I0819 11:49:17.853567   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 04973e14da79"
	I0819 11:49:17.877751   19545 logs.go:123] Gathering logs for etcd [07703ddc91e4] ...
	I0819 11:49:17.877770   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 07703ddc91e4"
	I0819 11:49:17.892655   19545 logs.go:123] Gathering logs for kube-scheduler [7a7ed811dead] ...
	I0819 11:49:17.892667   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a7ed811dead"
	I0819 11:49:17.907730   19545 logs.go:123] Gathering logs for storage-provisioner [fd0be420d5f9] ...
	I0819 11:49:17.907740   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd0be420d5f9"
	I0819 11:49:17.919730   19545 logs.go:123] Gathering logs for Docker ...
	I0819 11:49:17.919742   19545 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 11:49:17.945243   19545 logs.go:123] Gathering logs for container status ...
	I0819 11:49:17.945253   19545 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 11:49:17.958414   19545 logs.go:123] Gathering logs for kubelet ...
	I0819 11:49:17.958426   19545 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 11:49:17.998859   19545 logs.go:123] Gathering logs for describe nodes ...
	I0819 11:49:17.998873   19545 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 11:49:18.037593   19545 logs.go:123] Gathering logs for kube-controller-manager [e935629bad41] ...
	I0819 11:49:18.037606   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e935629bad41"
	I0819 11:49:20.558348   19545 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:49:25.502307   19545 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:49:25.502567   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 11:49:25.529192   19545 logs.go:276] 2 containers: [af251a7e4dc2 04973e14da79]
	I0819 11:49:25.529309   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 11:49:25.545605   19545 logs.go:276] 2 containers: [605f9b171d46 07703ddc91e4]
	I0819 11:49:25.545688   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 11:49:25.558625   19545 logs.go:276] 1 containers: [088a7d76a3fd]
	I0819 11:49:25.558699   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 11:49:25.570555   19545 logs.go:276] 2 containers: [bdb8b0d63638 7a7ed811dead]
	I0819 11:49:25.570628   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 11:49:25.581183   19545 logs.go:276] 1 containers: [03dd95625e48]
	I0819 11:49:25.581255   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 11:49:25.592349   19545 logs.go:276] 2 containers: [d9195d59990e e935629bad41]
	I0819 11:49:25.592424   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 11:49:25.602844   19545 logs.go:276] 0 containers: []
	W0819 11:49:25.602858   19545 logs.go:278] No container was found matching "kindnet"
	I0819 11:49:25.602912   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 11:49:25.613531   19545 logs.go:276] 2 containers: [fd0be420d5f9 02af235bd51f]
	I0819 11:49:25.613552   19545 logs.go:123] Gathering logs for container status ...
	I0819 11:49:25.613558   19545 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 11:49:25.625699   19545 logs.go:123] Gathering logs for dmesg ...
	I0819 11:49:25.625710   19545 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 11:49:25.629900   19545 logs.go:123] Gathering logs for storage-provisioner [02af235bd51f] ...
	I0819 11:49:25.629907   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 02af235bd51f"
	I0819 11:49:25.641314   19545 logs.go:123] Gathering logs for kube-apiserver [af251a7e4dc2] ...
	I0819 11:49:25.641325   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af251a7e4dc2"
	I0819 11:49:25.655496   19545 logs.go:123] Gathering logs for etcd [07703ddc91e4] ...
	I0819 11:49:25.655507   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 07703ddc91e4"
	I0819 11:49:25.670268   19545 logs.go:123] Gathering logs for kube-proxy [03dd95625e48] ...
	I0819 11:49:25.670280   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03dd95625e48"
	I0819 11:49:25.682846   19545 logs.go:123] Gathering logs for kubelet ...
	I0819 11:49:25.682857   19545 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 11:49:25.722454   19545 logs.go:123] Gathering logs for describe nodes ...
	I0819 11:49:25.722468   19545 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 11:49:25.757058   19545 logs.go:123] Gathering logs for storage-provisioner [fd0be420d5f9] ...
	I0819 11:49:25.757068   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd0be420d5f9"
	I0819 11:49:25.769043   19545 logs.go:123] Gathering logs for Docker ...
	I0819 11:49:25.769054   19545 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 11:49:25.792909   19545 logs.go:123] Gathering logs for coredns [088a7d76a3fd] ...
	I0819 11:49:25.792919   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 088a7d76a3fd"
	I0819 11:49:25.804593   19545 logs.go:123] Gathering logs for kube-controller-manager [d9195d59990e] ...
	I0819 11:49:25.804604   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9195d59990e"
	I0819 11:49:25.822621   19545 logs.go:123] Gathering logs for kube-scheduler [bdb8b0d63638] ...
	I0819 11:49:25.822635   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bdb8b0d63638"
	I0819 11:49:25.833993   19545 logs.go:123] Gathering logs for kube-scheduler [7a7ed811dead] ...
	I0819 11:49:25.834004   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a7ed811dead"
	I0819 11:49:25.849875   19545 logs.go:123] Gathering logs for kube-controller-manager [e935629bad41] ...
	I0819 11:49:25.849888   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e935629bad41"
	I0819 11:49:25.864054   19545 logs.go:123] Gathering logs for kube-apiserver [04973e14da79] ...
	I0819 11:49:25.864067   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 04973e14da79"
	I0819 11:49:25.889325   19545 logs.go:123] Gathering logs for etcd [605f9b171d46] ...
	I0819 11:49:25.889344   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 605f9b171d46"
	I0819 11:49:28.406079   19545 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:49:33.408288   19545 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:49:33.408410   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 11:49:33.419490   19545 logs.go:276] 2 containers: [af251a7e4dc2 04973e14da79]
	I0819 11:49:33.419565   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 11:49:33.430351   19545 logs.go:276] 2 containers: [605f9b171d46 07703ddc91e4]
	I0819 11:49:33.430417   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 11:49:33.440406   19545 logs.go:276] 1 containers: [088a7d76a3fd]
	I0819 11:49:33.440473   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 11:49:33.451152   19545 logs.go:276] 2 containers: [bdb8b0d63638 7a7ed811dead]
	I0819 11:49:33.451219   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 11:49:33.461756   19545 logs.go:276] 1 containers: [03dd95625e48]
	I0819 11:49:33.461826   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 11:49:33.476343   19545 logs.go:276] 2 containers: [d9195d59990e e935629bad41]
	I0819 11:49:33.476415   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 11:49:33.488645   19545 logs.go:276] 0 containers: []
	W0819 11:49:33.488661   19545 logs.go:278] No container was found matching "kindnet"
	I0819 11:49:33.488720   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 11:49:33.499162   19545 logs.go:276] 2 containers: [fd0be420d5f9 02af235bd51f]
	I0819 11:49:33.499179   19545 logs.go:123] Gathering logs for etcd [605f9b171d46] ...
	I0819 11:49:33.499186   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 605f9b171d46"
	I0819 11:49:33.513267   19545 logs.go:123] Gathering logs for coredns [088a7d76a3fd] ...
	I0819 11:49:33.513277   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 088a7d76a3fd"
	I0819 11:49:33.524112   19545 logs.go:123] Gathering logs for storage-provisioner [fd0be420d5f9] ...
	I0819 11:49:33.524123   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd0be420d5f9"
	I0819 11:49:33.535558   19545 logs.go:123] Gathering logs for dmesg ...
	I0819 11:49:33.535568   19545 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 11:49:33.540095   19545 logs.go:123] Gathering logs for describe nodes ...
	I0819 11:49:33.540103   19545 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 11:49:33.576521   19545 logs.go:123] Gathering logs for kube-scheduler [bdb8b0d63638] ...
	I0819 11:49:33.576536   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bdb8b0d63638"
	I0819 11:49:33.588643   19545 logs.go:123] Gathering logs for kube-proxy [03dd95625e48] ...
	I0819 11:49:33.588657   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03dd95625e48"
	I0819 11:49:33.600406   19545 logs.go:123] Gathering logs for kube-controller-manager [d9195d59990e] ...
	I0819 11:49:33.600416   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9195d59990e"
	I0819 11:49:33.618036   19545 logs.go:123] Gathering logs for container status ...
	I0819 11:49:33.618046   19545 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 11:49:33.631191   19545 logs.go:123] Gathering logs for kubelet ...
	I0819 11:49:33.631203   19545 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 11:49:33.668497   19545 logs.go:123] Gathering logs for kube-apiserver [af251a7e4dc2] ...
	I0819 11:49:33.668509   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af251a7e4dc2"
	I0819 11:49:33.683443   19545 logs.go:123] Gathering logs for kube-apiserver [04973e14da79] ...
	I0819 11:49:33.683468   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 04973e14da79"
	I0819 11:49:33.708300   19545 logs.go:123] Gathering logs for kube-controller-manager [e935629bad41] ...
	I0819 11:49:33.708317   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e935629bad41"
	I0819 11:49:33.723524   19545 logs.go:123] Gathering logs for storage-provisioner [02af235bd51f] ...
	I0819 11:49:33.723534   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 02af235bd51f"
	I0819 11:49:33.735023   19545 logs.go:123] Gathering logs for etcd [07703ddc91e4] ...
	I0819 11:49:33.735033   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 07703ddc91e4"
	I0819 11:49:33.749648   19545 logs.go:123] Gathering logs for kube-scheduler [7a7ed811dead] ...
	I0819 11:49:33.749659   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a7ed811dead"
	I0819 11:49:33.764359   19545 logs.go:123] Gathering logs for Docker ...
	I0819 11:49:33.764367   19545 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 11:49:36.288507   19545 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:49:41.290659   19545 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:49:41.290911   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 11:49:41.305794   19545 logs.go:276] 2 containers: [af251a7e4dc2 04973e14da79]
	I0819 11:49:41.305870   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 11:49:41.321163   19545 logs.go:276] 2 containers: [605f9b171d46 07703ddc91e4]
	I0819 11:49:41.321225   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 11:49:41.331353   19545 logs.go:276] 1 containers: [088a7d76a3fd]
	I0819 11:49:41.331422   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 11:49:41.341886   19545 logs.go:276] 2 containers: [bdb8b0d63638 7a7ed811dead]
	I0819 11:49:41.341955   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 11:49:41.351957   19545 logs.go:276] 1 containers: [03dd95625e48]
	I0819 11:49:41.352024   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 11:49:41.362176   19545 logs.go:276] 2 containers: [d9195d59990e e935629bad41]
	I0819 11:49:41.362243   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 11:49:41.381150   19545 logs.go:276] 0 containers: []
	W0819 11:49:41.381163   19545 logs.go:278] No container was found matching "kindnet"
	I0819 11:49:41.381220   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 11:49:41.393402   19545 logs.go:276] 2 containers: [fd0be420d5f9 02af235bd51f]
	I0819 11:49:41.393421   19545 logs.go:123] Gathering logs for kube-apiserver [af251a7e4dc2] ...
	I0819 11:49:41.393426   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af251a7e4dc2"
	I0819 11:49:41.407362   19545 logs.go:123] Gathering logs for kube-apiserver [04973e14da79] ...
	I0819 11:49:41.407374   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 04973e14da79"
	I0819 11:49:41.434731   19545 logs.go:123] Gathering logs for storage-provisioner [fd0be420d5f9] ...
	I0819 11:49:41.434744   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd0be420d5f9"
	I0819 11:49:41.446360   19545 logs.go:123] Gathering logs for Docker ...
	I0819 11:49:41.446378   19545 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 11:49:41.471609   19545 logs.go:123] Gathering logs for container status ...
	I0819 11:49:41.471618   19545 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 11:49:41.484691   19545 logs.go:123] Gathering logs for kubelet ...
	I0819 11:49:41.484704   19545 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 11:49:41.523307   19545 logs.go:123] Gathering logs for kube-proxy [03dd95625e48] ...
	I0819 11:49:41.523316   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03dd95625e48"
	I0819 11:49:41.539354   19545 logs.go:123] Gathering logs for storage-provisioner [02af235bd51f] ...
	I0819 11:49:41.539366   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 02af235bd51f"
	I0819 11:49:41.551116   19545 logs.go:123] Gathering logs for etcd [07703ddc91e4] ...
	I0819 11:49:41.551130   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 07703ddc91e4"
	I0819 11:49:41.566079   19545 logs.go:123] Gathering logs for kube-scheduler [bdb8b0d63638] ...
	I0819 11:49:41.566089   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bdb8b0d63638"
	I0819 11:49:41.577202   19545 logs.go:123] Gathering logs for kube-scheduler [7a7ed811dead] ...
	I0819 11:49:41.577213   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a7ed811dead"
	I0819 11:49:41.592527   19545 logs.go:123] Gathering logs for kube-controller-manager [d9195d59990e] ...
	I0819 11:49:41.592538   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9195d59990e"
	I0819 11:49:41.609842   19545 logs.go:123] Gathering logs for kube-controller-manager [e935629bad41] ...
	I0819 11:49:41.609856   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e935629bad41"
	I0819 11:49:41.624620   19545 logs.go:123] Gathering logs for dmesg ...
	I0819 11:49:41.624630   19545 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 11:49:41.628879   19545 logs.go:123] Gathering logs for etcd [605f9b171d46] ...
	I0819 11:49:41.628886   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 605f9b171d46"
	I0819 11:49:41.642903   19545 logs.go:123] Gathering logs for coredns [088a7d76a3fd] ...
	I0819 11:49:41.642913   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 088a7d76a3fd"
	I0819 11:49:41.653996   19545 logs.go:123] Gathering logs for describe nodes ...
	I0819 11:49:41.654008   19545 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 11:49:44.196826   19545 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:49:49.197686   19545 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:49:49.198052   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 11:49:49.230475   19545 logs.go:276] 2 containers: [af251a7e4dc2 04973e14da79]
	I0819 11:49:49.230598   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 11:49:49.252413   19545 logs.go:276] 2 containers: [605f9b171d46 07703ddc91e4]
	I0819 11:49:49.252490   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 11:49:49.266009   19545 logs.go:276] 1 containers: [088a7d76a3fd]
	I0819 11:49:49.266090   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 11:49:49.277877   19545 logs.go:276] 2 containers: [bdb8b0d63638 7a7ed811dead]
	I0819 11:49:49.277948   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 11:49:49.290538   19545 logs.go:276] 1 containers: [03dd95625e48]
	I0819 11:49:49.290606   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 11:49:49.301292   19545 logs.go:276] 2 containers: [d9195d59990e e935629bad41]
	I0819 11:49:49.301358   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 11:49:49.311620   19545 logs.go:276] 0 containers: []
	W0819 11:49:49.311630   19545 logs.go:278] No container was found matching "kindnet"
	I0819 11:49:49.311680   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 11:49:49.326326   19545 logs.go:276] 2 containers: [fd0be420d5f9 02af235bd51f]
	I0819 11:49:49.326346   19545 logs.go:123] Gathering logs for describe nodes ...
	I0819 11:49:49.326352   19545 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 11:49:49.363242   19545 logs.go:123] Gathering logs for kube-apiserver [04973e14da79] ...
	I0819 11:49:49.363256   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 04973e14da79"
	I0819 11:49:49.388070   19545 logs.go:123] Gathering logs for kube-scheduler [7a7ed811dead] ...
	I0819 11:49:49.388084   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a7ed811dead"
	I0819 11:49:49.404241   19545 logs.go:123] Gathering logs for kube-controller-manager [e935629bad41] ...
	I0819 11:49:49.404251   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e935629bad41"
	I0819 11:49:49.418795   19545 logs.go:123] Gathering logs for container status ...
	I0819 11:49:49.418806   19545 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 11:49:49.431178   19545 logs.go:123] Gathering logs for etcd [605f9b171d46] ...
	I0819 11:49:49.431189   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 605f9b171d46"
	I0819 11:49:49.445883   19545 logs.go:123] Gathering logs for dmesg ...
	I0819 11:49:49.445896   19545 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 11:49:49.450251   19545 logs.go:123] Gathering logs for kube-apiserver [af251a7e4dc2] ...
	I0819 11:49:49.450262   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af251a7e4dc2"
	I0819 11:49:49.464557   19545 logs.go:123] Gathering logs for etcd [07703ddc91e4] ...
	I0819 11:49:49.464569   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 07703ddc91e4"
	I0819 11:49:49.478962   19545 logs.go:123] Gathering logs for coredns [088a7d76a3fd] ...
	I0819 11:49:49.478972   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 088a7d76a3fd"
	I0819 11:49:49.490838   19545 logs.go:123] Gathering logs for kube-scheduler [bdb8b0d63638] ...
	I0819 11:49:49.490849   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bdb8b0d63638"
	I0819 11:49:49.503565   19545 logs.go:123] Gathering logs for kube-proxy [03dd95625e48] ...
	I0819 11:49:49.503575   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03dd95625e48"
	I0819 11:49:49.515218   19545 logs.go:123] Gathering logs for kube-controller-manager [d9195d59990e] ...
	I0819 11:49:49.515229   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9195d59990e"
	I0819 11:49:49.538923   19545 logs.go:123] Gathering logs for storage-provisioner [fd0be420d5f9] ...
	I0819 11:49:49.538937   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd0be420d5f9"
	I0819 11:49:49.550683   19545 logs.go:123] Gathering logs for storage-provisioner [02af235bd51f] ...
	I0819 11:49:49.550693   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 02af235bd51f"
	I0819 11:49:49.562295   19545 logs.go:123] Gathering logs for kubelet ...
	I0819 11:49:49.562307   19545 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 11:49:49.600427   19545 logs.go:123] Gathering logs for Docker ...
	I0819 11:49:49.600440   19545 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 11:49:52.126880   19545 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:49:57.129337   19545 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:49:57.129680   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 11:49:57.165775   19545 logs.go:276] 2 containers: [af251a7e4dc2 04973e14da79]
	I0819 11:49:57.165901   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 11:49:57.184911   19545 logs.go:276] 2 containers: [605f9b171d46 07703ddc91e4]
	I0819 11:49:57.185007   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 11:49:57.201071   19545 logs.go:276] 1 containers: [088a7d76a3fd]
	I0819 11:49:57.201146   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 11:49:57.213783   19545 logs.go:276] 2 containers: [bdb8b0d63638 7a7ed811dead]
	I0819 11:49:57.213849   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 11:49:57.225267   19545 logs.go:276] 1 containers: [03dd95625e48]
	I0819 11:49:57.225334   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 11:49:57.236589   19545 logs.go:276] 2 containers: [d9195d59990e e935629bad41]
	I0819 11:49:57.236659   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 11:49:57.252615   19545 logs.go:276] 0 containers: []
	W0819 11:49:57.252631   19545 logs.go:278] No container was found matching "kindnet"
	I0819 11:49:57.252689   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 11:49:57.271552   19545 logs.go:276] 2 containers: [fd0be420d5f9 02af235bd51f]
	I0819 11:49:57.271571   19545 logs.go:123] Gathering logs for coredns [088a7d76a3fd] ...
	I0819 11:49:57.271578   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 088a7d76a3fd"
	I0819 11:49:57.284519   19545 logs.go:123] Gathering logs for etcd [07703ddc91e4] ...
	I0819 11:49:57.284531   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 07703ddc91e4"
	I0819 11:49:57.299228   19545 logs.go:123] Gathering logs for kube-apiserver [04973e14da79] ...
	I0819 11:49:57.299239   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 04973e14da79"
	I0819 11:49:57.323890   19545 logs.go:123] Gathering logs for kube-apiserver [af251a7e4dc2] ...
	I0819 11:49:57.323901   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af251a7e4dc2"
	I0819 11:49:57.338083   19545 logs.go:123] Gathering logs for kube-proxy [03dd95625e48] ...
	I0819 11:49:57.338093   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03dd95625e48"
	I0819 11:49:57.354020   19545 logs.go:123] Gathering logs for storage-provisioner [02af235bd51f] ...
	I0819 11:49:57.354032   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 02af235bd51f"
	I0819 11:49:57.365482   19545 logs.go:123] Gathering logs for describe nodes ...
	I0819 11:49:57.365493   19545 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 11:49:57.400867   19545 logs.go:123] Gathering logs for dmesg ...
	I0819 11:49:57.400880   19545 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 11:49:57.405080   19545 logs.go:123] Gathering logs for etcd [605f9b171d46] ...
	I0819 11:49:57.405089   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 605f9b171d46"
	I0819 11:49:57.419198   19545 logs.go:123] Gathering logs for kube-scheduler [bdb8b0d63638] ...
	I0819 11:49:57.419210   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bdb8b0d63638"
	I0819 11:49:57.431397   19545 logs.go:123] Gathering logs for kube-scheduler [7a7ed811dead] ...
	I0819 11:49:57.431408   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a7ed811dead"
	I0819 11:49:57.447172   19545 logs.go:123] Gathering logs for kube-controller-manager [d9195d59990e] ...
	I0819 11:49:57.447184   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9195d59990e"
	I0819 11:49:57.467015   19545 logs.go:123] Gathering logs for kube-controller-manager [e935629bad41] ...
	I0819 11:49:57.467026   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e935629bad41"
	I0819 11:49:57.481414   19545 logs.go:123] Gathering logs for storage-provisioner [fd0be420d5f9] ...
	I0819 11:49:57.481424   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd0be420d5f9"
	I0819 11:49:57.493287   19545 logs.go:123] Gathering logs for kubelet ...
	I0819 11:49:57.493298   19545 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 11:49:57.529801   19545 logs.go:123] Gathering logs for container status ...
	I0819 11:49:57.529813   19545 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 11:49:57.542088   19545 logs.go:123] Gathering logs for Docker ...
	I0819 11:49:57.542100   19545 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 11:50:00.067795   19545 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:50:05.070204   19545 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:50:05.070753   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 11:50:05.106629   19545 logs.go:276] 2 containers: [af251a7e4dc2 04973e14da79]
	I0819 11:50:05.106769   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 11:50:05.128110   19545 logs.go:276] 2 containers: [605f9b171d46 07703ddc91e4]
	I0819 11:50:05.128206   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 11:50:05.142966   19545 logs.go:276] 1 containers: [088a7d76a3fd]
	I0819 11:50:05.143042   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 11:50:05.155610   19545 logs.go:276] 2 containers: [bdb8b0d63638 7a7ed811dead]
	I0819 11:50:05.155682   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 11:50:05.166637   19545 logs.go:276] 1 containers: [03dd95625e48]
	I0819 11:50:05.166703   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 11:50:05.179388   19545 logs.go:276] 2 containers: [d9195d59990e e935629bad41]
	I0819 11:50:05.179455   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 11:50:05.189952   19545 logs.go:276] 0 containers: []
	W0819 11:50:05.189970   19545 logs.go:278] No container was found matching "kindnet"
	I0819 11:50:05.190032   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 11:50:05.200535   19545 logs.go:276] 2 containers: [fd0be420d5f9 02af235bd51f]
	I0819 11:50:05.200553   19545 logs.go:123] Gathering logs for kubelet ...
	I0819 11:50:05.200559   19545 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 11:50:05.237966   19545 logs.go:123] Gathering logs for describe nodes ...
	I0819 11:50:05.237978   19545 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 11:50:05.272764   19545 logs.go:123] Gathering logs for etcd [605f9b171d46] ...
	I0819 11:50:05.272776   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 605f9b171d46"
	I0819 11:50:05.296154   19545 logs.go:123] Gathering logs for etcd [07703ddc91e4] ...
	I0819 11:50:05.296167   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 07703ddc91e4"
	I0819 11:50:05.313304   19545 logs.go:123] Gathering logs for coredns [088a7d76a3fd] ...
	I0819 11:50:05.313315   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 088a7d76a3fd"
	I0819 11:50:05.326953   19545 logs.go:123] Gathering logs for kube-scheduler [bdb8b0d63638] ...
	I0819 11:50:05.326965   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bdb8b0d63638"
	I0819 11:50:05.339271   19545 logs.go:123] Gathering logs for kube-controller-manager [e935629bad41] ...
	I0819 11:50:05.339285   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e935629bad41"
	I0819 11:50:05.353561   19545 logs.go:123] Gathering logs for Docker ...
	I0819 11:50:05.353571   19545 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 11:50:05.378654   19545 logs.go:123] Gathering logs for dmesg ...
	I0819 11:50:05.378664   19545 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 11:50:05.383361   19545 logs.go:123] Gathering logs for kube-apiserver [af251a7e4dc2] ...
	I0819 11:50:05.383369   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af251a7e4dc2"
	I0819 11:50:05.398390   19545 logs.go:123] Gathering logs for kube-apiserver [04973e14da79] ...
	I0819 11:50:05.398400   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 04973e14da79"
	I0819 11:50:05.423452   19545 logs.go:123] Gathering logs for container status ...
	I0819 11:50:05.423464   19545 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 11:50:05.435079   19545 logs.go:123] Gathering logs for kube-proxy [03dd95625e48] ...
	I0819 11:50:05.435090   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03dd95625e48"
	I0819 11:50:05.448483   19545 logs.go:123] Gathering logs for storage-provisioner [fd0be420d5f9] ...
	I0819 11:50:05.448496   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd0be420d5f9"
	I0819 11:50:05.460866   19545 logs.go:123] Gathering logs for kube-scheduler [7a7ed811dead] ...
	I0819 11:50:05.460880   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a7ed811dead"
	I0819 11:50:05.482202   19545 logs.go:123] Gathering logs for kube-controller-manager [d9195d59990e] ...
	I0819 11:50:05.482213   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9195d59990e"
	I0819 11:50:05.500169   19545 logs.go:123] Gathering logs for storage-provisioner [02af235bd51f] ...
	I0819 11:50:05.500180   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 02af235bd51f"
	I0819 11:50:08.013095   19545 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:50:13.015528   19545 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:50:13.015687   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 11:50:13.030173   19545 logs.go:276] 2 containers: [af251a7e4dc2 04973e14da79]
	I0819 11:50:13.030259   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 11:50:13.041712   19545 logs.go:276] 2 containers: [605f9b171d46 07703ddc91e4]
	I0819 11:50:13.041790   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 11:50:13.058435   19545 logs.go:276] 1 containers: [088a7d76a3fd]
	I0819 11:50:13.058504   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 11:50:13.072830   19545 logs.go:276] 2 containers: [bdb8b0d63638 7a7ed811dead]
	I0819 11:50:13.072898   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 11:50:13.083631   19545 logs.go:276] 1 containers: [03dd95625e48]
	I0819 11:50:13.083692   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 11:50:13.095341   19545 logs.go:276] 2 containers: [d9195d59990e e935629bad41]
	I0819 11:50:13.095408   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 11:50:13.105616   19545 logs.go:276] 0 containers: []
	W0819 11:50:13.105626   19545 logs.go:278] No container was found matching "kindnet"
	I0819 11:50:13.105678   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 11:50:13.120672   19545 logs.go:276] 2 containers: [fd0be420d5f9 02af235bd51f]
	I0819 11:50:13.120691   19545 logs.go:123] Gathering logs for kube-proxy [03dd95625e48] ...
	I0819 11:50:13.120697   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03dd95625e48"
	I0819 11:50:13.152922   19545 logs.go:123] Gathering logs for kube-controller-manager [e935629bad41] ...
	I0819 11:50:13.152935   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e935629bad41"
	I0819 11:50:13.166829   19545 logs.go:123] Gathering logs for etcd [605f9b171d46] ...
	I0819 11:50:13.166839   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 605f9b171d46"
	I0819 11:50:13.180803   19545 logs.go:123] Gathering logs for coredns [088a7d76a3fd] ...
	I0819 11:50:13.180816   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 088a7d76a3fd"
	I0819 11:50:13.191760   19545 logs.go:123] Gathering logs for kube-apiserver [04973e14da79] ...
	I0819 11:50:13.191770   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 04973e14da79"
	I0819 11:50:13.217351   19545 logs.go:123] Gathering logs for kube-scheduler [bdb8b0d63638] ...
	I0819 11:50:13.217362   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bdb8b0d63638"
	I0819 11:50:13.228821   19545 logs.go:123] Gathering logs for Docker ...
	I0819 11:50:13.228831   19545 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 11:50:13.253510   19545 logs.go:123] Gathering logs for container status ...
	I0819 11:50:13.253522   19545 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 11:50:13.265678   19545 logs.go:123] Gathering logs for dmesg ...
	I0819 11:50:13.265692   19545 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 11:50:13.269944   19545 logs.go:123] Gathering logs for describe nodes ...
	I0819 11:50:13.269951   19545 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 11:50:13.307594   19545 logs.go:123] Gathering logs for etcd [07703ddc91e4] ...
	I0819 11:50:13.307607   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 07703ddc91e4"
	I0819 11:50:13.322672   19545 logs.go:123] Gathering logs for kube-controller-manager [d9195d59990e] ...
	I0819 11:50:13.322681   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9195d59990e"
	I0819 11:50:13.339661   19545 logs.go:123] Gathering logs for storage-provisioner [02af235bd51f] ...
	I0819 11:50:13.339672   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 02af235bd51f"
	I0819 11:50:13.351173   19545 logs.go:123] Gathering logs for kubelet ...
	I0819 11:50:13.351185   19545 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 11:50:13.390464   19545 logs.go:123] Gathering logs for kube-apiserver [af251a7e4dc2] ...
	I0819 11:50:13.390479   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af251a7e4dc2"
	I0819 11:50:13.409607   19545 logs.go:123] Gathering logs for kube-scheduler [7a7ed811dead] ...
	I0819 11:50:13.409620   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a7ed811dead"
	I0819 11:50:13.424144   19545 logs.go:123] Gathering logs for storage-provisioner [fd0be420d5f9] ...
	I0819 11:50:13.424155   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd0be420d5f9"
	I0819 11:50:15.937644   19545 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:50:20.940144   19545 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:50:20.940387   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 11:50:20.960853   19545 logs.go:276] 2 containers: [af251a7e4dc2 04973e14da79]
	I0819 11:50:20.960950   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 11:50:20.975719   19545 logs.go:276] 2 containers: [605f9b171d46 07703ddc91e4]
	I0819 11:50:20.975798   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 11:50:20.989198   19545 logs.go:276] 1 containers: [088a7d76a3fd]
	I0819 11:50:20.989290   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 11:50:20.999987   19545 logs.go:276] 2 containers: [bdb8b0d63638 7a7ed811dead]
	I0819 11:50:21.000059   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 11:50:21.010595   19545 logs.go:276] 1 containers: [03dd95625e48]
	I0819 11:50:21.010670   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 11:50:21.021208   19545 logs.go:276] 2 containers: [d9195d59990e e935629bad41]
	I0819 11:50:21.021281   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 11:50:21.031490   19545 logs.go:276] 0 containers: []
	W0819 11:50:21.031501   19545 logs.go:278] No container was found matching "kindnet"
	I0819 11:50:21.031553   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 11:50:21.041818   19545 logs.go:276] 2 containers: [fd0be420d5f9 02af235bd51f]
	I0819 11:50:21.041834   19545 logs.go:123] Gathering logs for kube-scheduler [7a7ed811dead] ...
	I0819 11:50:21.041842   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a7ed811dead"
	I0819 11:50:21.061098   19545 logs.go:123] Gathering logs for kube-apiserver [04973e14da79] ...
	I0819 11:50:21.061108   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 04973e14da79"
	I0819 11:50:21.086203   19545 logs.go:123] Gathering logs for coredns [088a7d76a3fd] ...
	I0819 11:50:21.086215   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 088a7d76a3fd"
	I0819 11:50:21.097658   19545 logs.go:123] Gathering logs for storage-provisioner [fd0be420d5f9] ...
	I0819 11:50:21.097669   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd0be420d5f9"
	I0819 11:50:21.109302   19545 logs.go:123] Gathering logs for Docker ...
	I0819 11:50:21.109312   19545 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 11:50:21.134378   19545 logs.go:123] Gathering logs for etcd [605f9b171d46] ...
	I0819 11:50:21.134388   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 605f9b171d46"
	I0819 11:50:21.148681   19545 logs.go:123] Gathering logs for kube-controller-manager [d9195d59990e] ...
	I0819 11:50:21.148694   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9195d59990e"
	I0819 11:50:21.165948   19545 logs.go:123] Gathering logs for etcd [07703ddc91e4] ...
	I0819 11:50:21.165960   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 07703ddc91e4"
	I0819 11:50:21.180056   19545 logs.go:123] Gathering logs for kube-scheduler [bdb8b0d63638] ...
	I0819 11:50:21.180066   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bdb8b0d63638"
	I0819 11:50:21.194903   19545 logs.go:123] Gathering logs for kube-proxy [03dd95625e48] ...
	I0819 11:50:21.194912   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03dd95625e48"
	I0819 11:50:21.206299   19545 logs.go:123] Gathering logs for kube-controller-manager [e935629bad41] ...
	I0819 11:50:21.206310   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e935629bad41"
	I0819 11:50:21.224380   19545 logs.go:123] Gathering logs for storage-provisioner [02af235bd51f] ...
	I0819 11:50:21.224391   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 02af235bd51f"
	I0819 11:50:21.235132   19545 logs.go:123] Gathering logs for container status ...
	I0819 11:50:21.235144   19545 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 11:50:21.247176   19545 logs.go:123] Gathering logs for kubelet ...
	I0819 11:50:21.247186   19545 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 11:50:21.286735   19545 logs.go:123] Gathering logs for dmesg ...
	I0819 11:50:21.286746   19545 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 11:50:21.291466   19545 logs.go:123] Gathering logs for describe nodes ...
	I0819 11:50:21.291474   19545 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 11:50:21.327117   19545 logs.go:123] Gathering logs for kube-apiserver [af251a7e4dc2] ...
	I0819 11:50:21.327128   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af251a7e4dc2"
	I0819 11:50:23.846649   19545 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:50:28.848793   19545 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:50:28.848962   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 11:50:28.861414   19545 logs.go:276] 2 containers: [af251a7e4dc2 04973e14da79]
	I0819 11:50:28.861495   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 11:50:28.872180   19545 logs.go:276] 2 containers: [605f9b171d46 07703ddc91e4]
	I0819 11:50:28.872253   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 11:50:28.882932   19545 logs.go:276] 1 containers: [088a7d76a3fd]
	I0819 11:50:28.883006   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 11:50:28.893275   19545 logs.go:276] 2 containers: [bdb8b0d63638 7a7ed811dead]
	I0819 11:50:28.893346   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 11:50:28.909992   19545 logs.go:276] 1 containers: [03dd95625e48]
	I0819 11:50:28.910058   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 11:50:28.920466   19545 logs.go:276] 2 containers: [d9195d59990e e935629bad41]
	I0819 11:50:28.920537   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 11:50:28.930782   19545 logs.go:276] 0 containers: []
	W0819 11:50:28.930794   19545 logs.go:278] No container was found matching "kindnet"
	I0819 11:50:28.930850   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 11:50:28.941366   19545 logs.go:276] 2 containers: [fd0be420d5f9 02af235bd51f]
	I0819 11:50:28.941385   19545 logs.go:123] Gathering logs for kubelet ...
	I0819 11:50:28.941392   19545 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 11:50:28.980123   19545 logs.go:123] Gathering logs for kube-apiserver [04973e14da79] ...
	I0819 11:50:28.980134   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 04973e14da79"
	I0819 11:50:29.004762   19545 logs.go:123] Gathering logs for coredns [088a7d76a3fd] ...
	I0819 11:50:29.004773   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 088a7d76a3fd"
	I0819 11:50:29.018944   19545 logs.go:123] Gathering logs for kube-proxy [03dd95625e48] ...
	I0819 11:50:29.018956   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03dd95625e48"
	I0819 11:50:29.034683   19545 logs.go:123] Gathering logs for kube-controller-manager [d9195d59990e] ...
	I0819 11:50:29.034696   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9195d59990e"
	I0819 11:50:29.051807   19545 logs.go:123] Gathering logs for kube-scheduler [7a7ed811dead] ...
	I0819 11:50:29.051820   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a7ed811dead"
	I0819 11:50:29.071119   19545 logs.go:123] Gathering logs for kube-controller-manager [e935629bad41] ...
	I0819 11:50:29.071130   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e935629bad41"
	I0819 11:50:29.086581   19545 logs.go:123] Gathering logs for storage-provisioner [02af235bd51f] ...
	I0819 11:50:29.086592   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 02af235bd51f"
	I0819 11:50:29.097887   19545 logs.go:123] Gathering logs for dmesg ...
	I0819 11:50:29.097900   19545 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 11:50:29.102413   19545 logs.go:123] Gathering logs for describe nodes ...
	I0819 11:50:29.102422   19545 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 11:50:29.139393   19545 logs.go:123] Gathering logs for etcd [07703ddc91e4] ...
	I0819 11:50:29.139407   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 07703ddc91e4"
	I0819 11:50:29.153896   19545 logs.go:123] Gathering logs for kube-scheduler [bdb8b0d63638] ...
	I0819 11:50:29.153908   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bdb8b0d63638"
	I0819 11:50:29.165482   19545 logs.go:123] Gathering logs for storage-provisioner [fd0be420d5f9] ...
	I0819 11:50:29.165493   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd0be420d5f9"
	I0819 11:50:29.177050   19545 logs.go:123] Gathering logs for kube-apiserver [af251a7e4dc2] ...
	I0819 11:50:29.177061   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af251a7e4dc2"
	I0819 11:50:29.192416   19545 logs.go:123] Gathering logs for etcd [605f9b171d46] ...
	I0819 11:50:29.192428   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 605f9b171d46"
	I0819 11:50:29.206169   19545 logs.go:123] Gathering logs for Docker ...
	I0819 11:50:29.206181   19545 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 11:50:29.231557   19545 logs.go:123] Gathering logs for container status ...
	I0819 11:50:29.231564   19545 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 11:50:31.745011   19545 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:50:36.747196   19545 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:50:36.747418   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 11:50:36.762778   19545 logs.go:276] 2 containers: [af251a7e4dc2 04973e14da79]
	I0819 11:50:36.762869   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 11:50:36.774524   19545 logs.go:276] 2 containers: [605f9b171d46 07703ddc91e4]
	I0819 11:50:36.774601   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 11:50:36.785521   19545 logs.go:276] 1 containers: [088a7d76a3fd]
	I0819 11:50:36.785592   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 11:50:36.796134   19545 logs.go:276] 2 containers: [bdb8b0d63638 7a7ed811dead]
	I0819 11:50:36.796199   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 11:50:36.810410   19545 logs.go:276] 1 containers: [03dd95625e48]
	I0819 11:50:36.810468   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 11:50:36.820784   19545 logs.go:276] 2 containers: [d9195d59990e e935629bad41]
	I0819 11:50:36.820852   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 11:50:36.832473   19545 logs.go:276] 0 containers: []
	W0819 11:50:36.832486   19545 logs.go:278] No container was found matching "kindnet"
	I0819 11:50:36.832548   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 11:50:36.843231   19545 logs.go:276] 2 containers: [fd0be420d5f9 02af235bd51f]
	I0819 11:50:36.843249   19545 logs.go:123] Gathering logs for describe nodes ...
	I0819 11:50:36.843256   19545 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 11:50:36.878362   19545 logs.go:123] Gathering logs for kube-controller-manager [e935629bad41] ...
	I0819 11:50:36.878374   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e935629bad41"
	I0819 11:50:36.893178   19545 logs.go:123] Gathering logs for dmesg ...
	I0819 11:50:36.893188   19545 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 11:50:36.897367   19545 logs.go:123] Gathering logs for Docker ...
	I0819 11:50:36.897373   19545 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 11:50:36.921277   19545 logs.go:123] Gathering logs for container status ...
	I0819 11:50:36.921286   19545 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 11:50:36.932528   19545 logs.go:123] Gathering logs for kube-apiserver [af251a7e4dc2] ...
	I0819 11:50:36.932542   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af251a7e4dc2"
	I0819 11:50:36.946251   19545 logs.go:123] Gathering logs for kube-apiserver [04973e14da79] ...
	I0819 11:50:36.946263   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 04973e14da79"
	I0819 11:50:36.973051   19545 logs.go:123] Gathering logs for etcd [605f9b171d46] ...
	I0819 11:50:36.973062   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 605f9b171d46"
	I0819 11:50:36.986555   19545 logs.go:123] Gathering logs for etcd [07703ddc91e4] ...
	I0819 11:50:36.986565   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 07703ddc91e4"
	I0819 11:50:37.001081   19545 logs.go:123] Gathering logs for kube-scheduler [bdb8b0d63638] ...
	I0819 11:50:37.001092   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bdb8b0d63638"
	I0819 11:50:37.012851   19545 logs.go:123] Gathering logs for kube-proxy [03dd95625e48] ...
	I0819 11:50:37.012865   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03dd95625e48"
	I0819 11:50:37.024451   19545 logs.go:123] Gathering logs for kubelet ...
	I0819 11:50:37.024464   19545 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 11:50:37.063183   19545 logs.go:123] Gathering logs for coredns [088a7d76a3fd] ...
	I0819 11:50:37.063192   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 088a7d76a3fd"
	I0819 11:50:37.074493   19545 logs.go:123] Gathering logs for kube-scheduler [7a7ed811dead] ...
	I0819 11:50:37.074507   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a7ed811dead"
	I0819 11:50:37.093094   19545 logs.go:123] Gathering logs for kube-controller-manager [d9195d59990e] ...
	I0819 11:50:37.093104   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9195d59990e"
	I0819 11:50:37.110971   19545 logs.go:123] Gathering logs for storage-provisioner [fd0be420d5f9] ...
	I0819 11:50:37.110984   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd0be420d5f9"
	I0819 11:50:37.127087   19545 logs.go:123] Gathering logs for storage-provisioner [02af235bd51f] ...
	I0819 11:50:37.127102   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 02af235bd51f"
	I0819 11:50:39.647147   19545 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:50:44.649380   19545 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:50:44.649533   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 11:50:44.661423   19545 logs.go:276] 2 containers: [af251a7e4dc2 04973e14da79]
	I0819 11:50:44.661494   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 11:50:44.672099   19545 logs.go:276] 2 containers: [605f9b171d46 07703ddc91e4]
	I0819 11:50:44.672173   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 11:50:44.682939   19545 logs.go:276] 1 containers: [088a7d76a3fd]
	I0819 11:50:44.683009   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 11:50:44.697111   19545 logs.go:276] 2 containers: [bdb8b0d63638 7a7ed811dead]
	I0819 11:50:44.697182   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 11:50:44.715128   19545 logs.go:276] 1 containers: [03dd95625e48]
	I0819 11:50:44.715192   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 11:50:44.725845   19545 logs.go:276] 2 containers: [d9195d59990e e935629bad41]
	I0819 11:50:44.725915   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 11:50:44.736172   19545 logs.go:276] 0 containers: []
	W0819 11:50:44.736183   19545 logs.go:278] No container was found matching "kindnet"
	I0819 11:50:44.736235   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 11:50:44.746764   19545 logs.go:276] 2 containers: [fd0be420d5f9 02af235bd51f]
	I0819 11:50:44.746783   19545 logs.go:123] Gathering logs for describe nodes ...
	I0819 11:50:44.746793   19545 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 11:50:44.786548   19545 logs.go:123] Gathering logs for kube-apiserver [04973e14da79] ...
	I0819 11:50:44.786563   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 04973e14da79"
	I0819 11:50:44.810834   19545 logs.go:123] Gathering logs for Docker ...
	I0819 11:50:44.810845   19545 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 11:50:44.835905   19545 logs.go:123] Gathering logs for container status ...
	I0819 11:50:44.835915   19545 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
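The "container status" command above is written to be runtime-agnostic: the backquoted `which crictl || echo crictl` expands to the crictl path when the binary exists, and otherwise to the bare word crictl, whose failure as a command triggers the `|| sudo docker ps -a` fallback. Approximately the same logic, unrolled for readability:

```bash
# Unrolled, approximately equivalent form of the fallback in the log (the
# original also falls through to docker if crictl exists but errors out).
if command -v crictl >/dev/null 2>&1; then
  sudo "$(command -v crictl)" ps -a   # CRI-aware listing when crictl exists
else
  sudo docker ps -a                   # otherwise fall back to the Docker CLI
fi
```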
	I0819 11:50:44.847317   19545 logs.go:123] Gathering logs for kubelet ...
	I0819 11:50:44.847336   19545 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
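Host-level daemons are read from journald rather than from the container runtime: the "Docker" gathering step merges the docker and cri-docker units, and the kubelet step above does the same for the kubelet unit, each capped at its last 400 lines. The same collection as a small loop (`--no-pager` is an addition so the sketch behaves in an interactive shell):

```bash
# Collect the three systemd units minikube inspects; -n 400 caps each unit
# at its most recent 400 journal lines, as in the log.
for unit in docker cri-docker kubelet; do
  echo "==> ${unit} <=="
  sudo journalctl -u "${unit}" -n 400 --no-pager
done
```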
	I0819 11:50:44.886311   19545 logs.go:123] Gathering logs for kube-apiserver [af251a7e4dc2] ...
	I0819 11:50:44.886322   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af251a7e4dc2"
	I0819 11:50:44.900597   19545 logs.go:123] Gathering logs for kube-scheduler [bdb8b0d63638] ...
	I0819 11:50:44.900611   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bdb8b0d63638"
	I0819 11:50:44.911966   19545 logs.go:123] Gathering logs for kube-proxy [03dd95625e48] ...
	I0819 11:50:44.911981   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03dd95625e48"
	I0819 11:50:44.923284   19545 logs.go:123] Gathering logs for kube-controller-manager [d9195d59990e] ...
	I0819 11:50:44.923295   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9195d59990e"
	I0819 11:50:44.941062   19545 logs.go:123] Gathering logs for storage-provisioner [fd0be420d5f9] ...
	I0819 11:50:44.941073   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd0be420d5f9"
	I0819 11:50:44.952631   19545 logs.go:123] Gathering logs for storage-provisioner [02af235bd51f] ...
	I0819 11:50:44.952642   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 02af235bd51f"
	I0819 11:50:44.966495   19545 logs.go:123] Gathering logs for dmesg ...
	I0819 11:50:44.966510   19545 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
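The dmesg pass keeps only actionable kernel messages: -P disables the pager, -H selects human-readable timestamps, -L=never strips colour escape codes that would garble a captured log, and --level warn,err,crit,alert,emerg drops info and debug noise before tail caps the volume. The roughly 5 ms turnaround on this step each cycle shows the ring buffer is quiet.

```bash
# The exact severity filter from the log, reusable as a one-liner.
sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
```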
	I0819 11:50:44.971150   19545 logs.go:123] Gathering logs for etcd [605f9b171d46] ...
	I0819 11:50:44.971156   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 605f9b171d46"
	I0819 11:50:44.988992   19545 logs.go:123] Gathering logs for coredns [088a7d76a3fd] ...
	I0819 11:50:44.989004   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 088a7d76a3fd"
	I0819 11:50:45.000476   19545 logs.go:123] Gathering logs for kube-scheduler [7a7ed811dead] ...
	I0819 11:50:45.000488   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a7ed811dead"
	I0819 11:50:45.015518   19545 logs.go:123] Gathering logs for kube-controller-manager [e935629bad41] ...
	I0819 11:50:45.015531   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e935629bad41"
	I0819 11:50:45.029725   19545 logs.go:123] Gathering logs for etcd [07703ddc91e4] ...
	I0819 11:50:45.029735   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 07703ddc91e4"
	I0819 11:50:47.546129   19545 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:50:52.546939   19545 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:50:52.547339   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 11:50:52.581493   19545 logs.go:276] 2 containers: [af251a7e4dc2 04973e14da79]
	I0819 11:50:52.581636   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 11:50:52.603106   19545 logs.go:276] 2 containers: [605f9b171d46 07703ddc91e4]
	I0819 11:50:52.603205   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 11:50:52.617202   19545 logs.go:276] 1 containers: [088a7d76a3fd]
	I0819 11:50:52.617283   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 11:50:52.633533   19545 logs.go:276] 2 containers: [bdb8b0d63638 7a7ed811dead]
	I0819 11:50:52.633607   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 11:50:52.644015   19545 logs.go:276] 1 containers: [03dd95625e48]
	I0819 11:50:52.644084   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 11:50:52.661692   19545 logs.go:276] 2 containers: [d9195d59990e e935629bad41]
	I0819 11:50:52.661758   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 11:50:52.671865   19545 logs.go:276] 0 containers: []
	W0819 11:50:52.671877   19545 logs.go:278] No container was found matching "kindnet"
	I0819 11:50:52.671936   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 11:50:52.687441   19545 logs.go:276] 2 containers: [fd0be420d5f9 02af235bd51f]
	I0819 11:50:52.687460   19545 logs.go:123] Gathering logs for kubelet ...
	I0819 11:50:52.687465   19545 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 11:50:52.726011   19545 logs.go:123] Gathering logs for etcd [605f9b171d46] ...
	I0819 11:50:52.726023   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 605f9b171d46"
	I0819 11:50:52.740339   19545 logs.go:123] Gathering logs for etcd [07703ddc91e4] ...
	I0819 11:50:52.740350   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 07703ddc91e4"
	I0819 11:50:52.754912   19545 logs.go:123] Gathering logs for kube-scheduler [7a7ed811dead] ...
	I0819 11:50:52.754926   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a7ed811dead"
	I0819 11:50:52.769868   19545 logs.go:123] Gathering logs for kube-controller-manager [e935629bad41] ...
	I0819 11:50:52.769879   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e935629bad41"
	I0819 11:50:52.784014   19545 logs.go:123] Gathering logs for kube-apiserver [af251a7e4dc2] ...
	I0819 11:50:52.784025   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af251a7e4dc2"
	I0819 11:50:52.797777   19545 logs.go:123] Gathering logs for coredns [088a7d76a3fd] ...
	I0819 11:50:52.797787   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 088a7d76a3fd"
	I0819 11:50:52.809503   19545 logs.go:123] Gathering logs for kube-proxy [03dd95625e48] ...
	I0819 11:50:52.809516   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03dd95625e48"
	I0819 11:50:52.820649   19545 logs.go:123] Gathering logs for kube-controller-manager [d9195d59990e] ...
	I0819 11:50:52.820660   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9195d59990e"
	I0819 11:50:52.843366   19545 logs.go:123] Gathering logs for container status ...
	I0819 11:50:52.843377   19545 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 11:50:52.857129   19545 logs.go:123] Gathering logs for kube-apiserver [04973e14da79] ...
	I0819 11:50:52.857139   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 04973e14da79"
	I0819 11:50:52.882269   19545 logs.go:123] Gathering logs for describe nodes ...
	I0819 11:50:52.882279   19545 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 11:50:52.918943   19545 logs.go:123] Gathering logs for kube-scheduler [bdb8b0d63638] ...
	I0819 11:50:52.918954   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bdb8b0d63638"
	I0819 11:50:52.931476   19545 logs.go:123] Gathering logs for storage-provisioner [fd0be420d5f9] ...
	I0819 11:50:52.931491   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd0be420d5f9"
	I0819 11:50:52.948517   19545 logs.go:123] Gathering logs for storage-provisioner [02af235bd51f] ...
	I0819 11:50:52.948529   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 02af235bd51f"
	I0819 11:50:52.965098   19545 logs.go:123] Gathering logs for Docker ...
	I0819 11:50:52.965108   19545 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 11:50:52.989390   19545 logs.go:123] Gathering logs for dmesg ...
	I0819 11:50:52.989397   19545 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 11:50:55.495805   19545 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:51:00.496796   19545 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:51:00.497210   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 11:51:00.527346   19545 logs.go:276] 2 containers: [af251a7e4dc2 04973e14da79]
	I0819 11:51:00.527472   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 11:51:00.545573   19545 logs.go:276] 2 containers: [605f9b171d46 07703ddc91e4]
	I0819 11:51:00.545669   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 11:51:00.559942   19545 logs.go:276] 1 containers: [088a7d76a3fd]
	I0819 11:51:00.560013   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 11:51:00.572195   19545 logs.go:276] 2 containers: [bdb8b0d63638 7a7ed811dead]
	I0819 11:51:00.572278   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 11:51:00.582795   19545 logs.go:276] 1 containers: [03dd95625e48]
	I0819 11:51:00.582868   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 11:51:00.595966   19545 logs.go:276] 2 containers: [d9195d59990e e935629bad41]
	I0819 11:51:00.596037   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 11:51:00.606372   19545 logs.go:276] 0 containers: []
	W0819 11:51:00.606384   19545 logs.go:278] No container was found matching "kindnet"
	I0819 11:51:00.606444   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 11:51:00.617056   19545 logs.go:276] 2 containers: [fd0be420d5f9 02af235bd51f]
	I0819 11:51:00.617073   19545 logs.go:123] Gathering logs for kube-scheduler [7a7ed811dead] ...
	I0819 11:51:00.617078   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a7ed811dead"
	I0819 11:51:00.632499   19545 logs.go:123] Gathering logs for storage-provisioner [fd0be420d5f9] ...
	I0819 11:51:00.632511   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd0be420d5f9"
	I0819 11:51:00.645264   19545 logs.go:123] Gathering logs for storage-provisioner [02af235bd51f] ...
	I0819 11:51:00.645273   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 02af235bd51f"
	I0819 11:51:00.657277   19545 logs.go:123] Gathering logs for container status ...
	I0819 11:51:00.657289   19545 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 11:51:00.669617   19545 logs.go:123] Gathering logs for kube-apiserver [04973e14da79] ...
	I0819 11:51:00.669629   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 04973e14da79"
	I0819 11:51:00.694511   19545 logs.go:123] Gathering logs for etcd [605f9b171d46] ...
	I0819 11:51:00.694526   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 605f9b171d46"
	I0819 11:51:00.714435   19545 logs.go:123] Gathering logs for kube-scheduler [bdb8b0d63638] ...
	I0819 11:51:00.714444   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bdb8b0d63638"
	I0819 11:51:00.727376   19545 logs.go:123] Gathering logs for kube-controller-manager [d9195d59990e] ...
	I0819 11:51:00.727388   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9195d59990e"
	I0819 11:51:00.745520   19545 logs.go:123] Gathering logs for kube-controller-manager [e935629bad41] ...
	I0819 11:51:00.745533   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e935629bad41"
	I0819 11:51:00.759335   19545 logs.go:123] Gathering logs for kubelet ...
	I0819 11:51:00.759344   19545 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 11:51:00.796601   19545 logs.go:123] Gathering logs for describe nodes ...
	I0819 11:51:00.796610   19545 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 11:51:00.831765   19545 logs.go:123] Gathering logs for etcd [07703ddc91e4] ...
	I0819 11:51:00.831777   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 07703ddc91e4"
	I0819 11:51:00.846510   19545 logs.go:123] Gathering logs for coredns [088a7d76a3fd] ...
	I0819 11:51:00.846521   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 088a7d76a3fd"
	I0819 11:51:00.858412   19545 logs.go:123] Gathering logs for Docker ...
	I0819 11:51:00.858423   19545 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 11:51:00.881739   19545 logs.go:123] Gathering logs for dmesg ...
	I0819 11:51:00.881747   19545 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 11:51:00.886160   19545 logs.go:123] Gathering logs for kube-apiserver [af251a7e4dc2] ...
	I0819 11:51:00.886167   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af251a7e4dc2"
	I0819 11:51:00.904241   19545 logs.go:123] Gathering logs for kube-proxy [03dd95625e48] ...
	I0819 11:51:00.904251   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03dd95625e48"
	I0819 11:51:03.418043   19545 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:51:08.420356   19545 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:51:08.420737   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 11:51:08.454745   19545 logs.go:276] 2 containers: [af251a7e4dc2 04973e14da79]
	I0819 11:51:08.454881   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 11:51:08.472640   19545 logs.go:276] 2 containers: [605f9b171d46 07703ddc91e4]
	I0819 11:51:08.472722   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 11:51:08.486957   19545 logs.go:276] 1 containers: [088a7d76a3fd]
	I0819 11:51:08.487030   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 11:51:08.499468   19545 logs.go:276] 2 containers: [bdb8b0d63638 7a7ed811dead]
	I0819 11:51:08.499532   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 11:51:08.510149   19545 logs.go:276] 1 containers: [03dd95625e48]
	I0819 11:51:08.510215   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 11:51:08.525288   19545 logs.go:276] 2 containers: [d9195d59990e e935629bad41]
	I0819 11:51:08.525350   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 11:51:08.542279   19545 logs.go:276] 0 containers: []
	W0819 11:51:08.542295   19545 logs.go:278] No container was found matching "kindnet"
	I0819 11:51:08.542353   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 11:51:08.552957   19545 logs.go:276] 2 containers: [fd0be420d5f9 02af235bd51f]
	I0819 11:51:08.552974   19545 logs.go:123] Gathering logs for kubelet ...
	I0819 11:51:08.552979   19545 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 11:51:08.593552   19545 logs.go:123] Gathering logs for storage-provisioner [fd0be420d5f9] ...
	I0819 11:51:08.593567   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd0be420d5f9"
	I0819 11:51:08.605242   19545 logs.go:123] Gathering logs for kube-controller-manager [d9195d59990e] ...
	I0819 11:51:08.605254   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9195d59990e"
	I0819 11:51:08.624269   19545 logs.go:123] Gathering logs for container status ...
	I0819 11:51:08.624279   19545 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 11:51:08.636072   19545 logs.go:123] Gathering logs for kube-apiserver [04973e14da79] ...
	I0819 11:51:08.636086   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 04973e14da79"
	I0819 11:51:08.660220   19545 logs.go:123] Gathering logs for coredns [088a7d76a3fd] ...
	I0819 11:51:08.660232   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 088a7d76a3fd"
	I0819 11:51:08.671064   19545 logs.go:123] Gathering logs for kube-scheduler [bdb8b0d63638] ...
	I0819 11:51:08.671076   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bdb8b0d63638"
	I0819 11:51:08.684018   19545 logs.go:123] Gathering logs for etcd [07703ddc91e4] ...
	I0819 11:51:08.684031   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 07703ddc91e4"
	I0819 11:51:08.701029   19545 logs.go:123] Gathering logs for kube-scheduler [7a7ed811dead] ...
	I0819 11:51:08.701047   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a7ed811dead"
	I0819 11:51:08.717842   19545 logs.go:123] Gathering logs for kube-controller-manager [e935629bad41] ...
	I0819 11:51:08.717855   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e935629bad41"
	I0819 11:51:08.732445   19545 logs.go:123] Gathering logs for dmesg ...
	I0819 11:51:08.732455   19545 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 11:51:08.736425   19545 logs.go:123] Gathering logs for describe nodes ...
	I0819 11:51:08.736431   19545 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 11:51:08.781263   19545 logs.go:123] Gathering logs for etcd [605f9b171d46] ...
	I0819 11:51:08.781277   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 605f9b171d46"
	I0819 11:51:08.799490   19545 logs.go:123] Gathering logs for Docker ...
	I0819 11:51:08.799503   19545 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 11:51:08.825972   19545 logs.go:123] Gathering logs for kube-apiserver [af251a7e4dc2] ...
	I0819 11:51:08.825985   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af251a7e4dc2"
	I0819 11:51:08.841287   19545 logs.go:123] Gathering logs for kube-proxy [03dd95625e48] ...
	I0819 11:51:08.841302   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03dd95625e48"
	I0819 11:51:08.853776   19545 logs.go:123] Gathering logs for storage-provisioner [02af235bd51f] ...
	I0819 11:51:08.853786   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 02af235bd51f"
	I0819 11:51:11.365700   19545 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:51:16.368181   19545 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:51:16.368597   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 11:51:16.408193   19545 logs.go:276] 2 containers: [af251a7e4dc2 04973e14da79]
	I0819 11:51:16.408335   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 11:51:16.434724   19545 logs.go:276] 2 containers: [605f9b171d46 07703ddc91e4]
	I0819 11:51:16.434841   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 11:51:16.450189   19545 logs.go:276] 1 containers: [088a7d76a3fd]
	I0819 11:51:16.450258   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 11:51:16.464441   19545 logs.go:276] 2 containers: [bdb8b0d63638 7a7ed811dead]
	I0819 11:51:16.464516   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 11:51:16.474848   19545 logs.go:276] 1 containers: [03dd95625e48]
	I0819 11:51:16.474914   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 11:51:16.485901   19545 logs.go:276] 2 containers: [d9195d59990e e935629bad41]
	I0819 11:51:16.485972   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 11:51:16.496461   19545 logs.go:276] 0 containers: []
	W0819 11:51:16.496474   19545 logs.go:278] No container was found matching "kindnet"
	I0819 11:51:16.496530   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 11:51:16.508160   19545 logs.go:276] 2 containers: [fd0be420d5f9 02af235bd51f]
	I0819 11:51:16.508178   19545 logs.go:123] Gathering logs for Docker ...
	I0819 11:51:16.508184   19545 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 11:51:16.531730   19545 logs.go:123] Gathering logs for container status ...
	I0819 11:51:16.531740   19545 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 11:51:16.543408   19545 logs.go:123] Gathering logs for kube-apiserver [af251a7e4dc2] ...
	I0819 11:51:16.543418   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af251a7e4dc2"
	I0819 11:51:16.557235   19545 logs.go:123] Gathering logs for kube-scheduler [bdb8b0d63638] ...
	I0819 11:51:16.557246   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bdb8b0d63638"
	I0819 11:51:16.573582   19545 logs.go:123] Gathering logs for kube-scheduler [7a7ed811dead] ...
	I0819 11:51:16.573593   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a7ed811dead"
	I0819 11:51:16.589556   19545 logs.go:123] Gathering logs for storage-provisioner [02af235bd51f] ...
	I0819 11:51:16.589571   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 02af235bd51f"
	I0819 11:51:16.602233   19545 logs.go:123] Gathering logs for kubelet ...
	I0819 11:51:16.602244   19545 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 11:51:16.640470   19545 logs.go:123] Gathering logs for etcd [07703ddc91e4] ...
	I0819 11:51:16.640482   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 07703ddc91e4"
	I0819 11:51:16.655082   19545 logs.go:123] Gathering logs for kube-controller-manager [d9195d59990e] ...
	I0819 11:51:16.655094   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9195d59990e"
	I0819 11:51:16.673773   19545 logs.go:123] Gathering logs for dmesg ...
	I0819 11:51:16.673783   19545 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 11:51:16.678583   19545 logs.go:123] Gathering logs for describe nodes ...
	I0819 11:51:16.678589   19545 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 11:51:16.715001   19545 logs.go:123] Gathering logs for kube-apiserver [04973e14da79] ...
	I0819 11:51:16.715011   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 04973e14da79"
	I0819 11:51:16.741865   19545 logs.go:123] Gathering logs for coredns [088a7d76a3fd] ...
	I0819 11:51:16.741878   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 088a7d76a3fd"
	I0819 11:51:16.754530   19545 logs.go:123] Gathering logs for etcd [605f9b171d46] ...
	I0819 11:51:16.754546   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 605f9b171d46"
	I0819 11:51:16.769966   19545 logs.go:123] Gathering logs for kube-proxy [03dd95625e48] ...
	I0819 11:51:16.769978   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03dd95625e48"
	I0819 11:51:16.783012   19545 logs.go:123] Gathering logs for kube-controller-manager [e935629bad41] ...
	I0819 11:51:16.783023   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e935629bad41"
	I0819 11:51:16.798315   19545 logs.go:123] Gathering logs for storage-provisioner [fd0be420d5f9] ...
	I0819 11:51:16.798332   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd0be420d5f9"
	I0819 11:51:19.318885   19545 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:51:24.321152   19545 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:51:24.321346   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 11:51:24.341525   19545 logs.go:276] 2 containers: [af251a7e4dc2 04973e14da79]
	I0819 11:51:24.341627   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 11:51:24.357058   19545 logs.go:276] 2 containers: [605f9b171d46 07703ddc91e4]
	I0819 11:51:24.357135   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 11:51:24.369525   19545 logs.go:276] 1 containers: [088a7d76a3fd]
	I0819 11:51:24.369590   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 11:51:24.380966   19545 logs.go:276] 2 containers: [bdb8b0d63638 7a7ed811dead]
	I0819 11:51:24.381036   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 11:51:24.391395   19545 logs.go:276] 1 containers: [03dd95625e48]
	I0819 11:51:24.391461   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 11:51:24.402432   19545 logs.go:276] 2 containers: [d9195d59990e e935629bad41]
	I0819 11:51:24.402494   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 11:51:24.412407   19545 logs.go:276] 0 containers: []
	W0819 11:51:24.412419   19545 logs.go:278] No container was found matching "kindnet"
	I0819 11:51:24.412483   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 11:51:24.422783   19545 logs.go:276] 2 containers: [fd0be420d5f9 02af235bd51f]
	I0819 11:51:24.422799   19545 logs.go:123] Gathering logs for storage-provisioner [fd0be420d5f9] ...
	I0819 11:51:24.422805   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd0be420d5f9"
	I0819 11:51:24.435060   19545 logs.go:123] Gathering logs for coredns [088a7d76a3fd] ...
	I0819 11:51:24.435071   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 088a7d76a3fd"
	I0819 11:51:24.446641   19545 logs.go:123] Gathering logs for describe nodes ...
	I0819 11:51:24.446653   19545 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 11:51:24.483128   19545 logs.go:123] Gathering logs for etcd [07703ddc91e4] ...
	I0819 11:51:24.483140   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 07703ddc91e4"
	I0819 11:51:24.497835   19545 logs.go:123] Gathering logs for kube-scheduler [7a7ed811dead] ...
	I0819 11:51:24.497844   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a7ed811dead"
	I0819 11:51:24.512641   19545 logs.go:123] Gathering logs for kube-controller-manager [d9195d59990e] ...
	I0819 11:51:24.512651   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9195d59990e"
	I0819 11:51:24.530703   19545 logs.go:123] Gathering logs for kube-controller-manager [e935629bad41] ...
	I0819 11:51:24.530714   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e935629bad41"
	I0819 11:51:24.545265   19545 logs.go:123] Gathering logs for dmesg ...
	I0819 11:51:24.545277   19545 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 11:51:24.549994   19545 logs.go:123] Gathering logs for kube-proxy [03dd95625e48] ...
	I0819 11:51:24.550004   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03dd95625e48"
	I0819 11:51:24.566514   19545 logs.go:123] Gathering logs for storage-provisioner [02af235bd51f] ...
	I0819 11:51:24.566527   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 02af235bd51f"
	I0819 11:51:24.578686   19545 logs.go:123] Gathering logs for kube-apiserver [af251a7e4dc2] ...
	I0819 11:51:24.578699   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af251a7e4dc2"
	I0819 11:51:24.597525   19545 logs.go:123] Gathering logs for kube-apiserver [04973e14da79] ...
	I0819 11:51:24.597539   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 04973e14da79"
	I0819 11:51:24.624731   19545 logs.go:123] Gathering logs for etcd [605f9b171d46] ...
	I0819 11:51:24.624745   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 605f9b171d46"
	I0819 11:51:24.639946   19545 logs.go:123] Gathering logs for kube-scheduler [bdb8b0d63638] ...
	I0819 11:51:24.639954   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bdb8b0d63638"
	I0819 11:51:24.652826   19545 logs.go:123] Gathering logs for Docker ...
	I0819 11:51:24.652837   19545 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 11:51:24.678227   19545 logs.go:123] Gathering logs for container status ...
	I0819 11:51:24.678245   19545 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 11:51:24.691393   19545 logs.go:123] Gathering logs for kubelet ...
	I0819 11:51:24.691410   19545 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 11:51:27.234801   19545 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:51:32.237354   19545 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:51:32.237611   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 11:51:32.261161   19545 logs.go:276] 2 containers: [af251a7e4dc2 04973e14da79]
	I0819 11:51:32.261256   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 11:51:32.276380   19545 logs.go:276] 2 containers: [605f9b171d46 07703ddc91e4]
	I0819 11:51:32.276467   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 11:51:32.288439   19545 logs.go:276] 1 containers: [088a7d76a3fd]
	I0819 11:51:32.288514   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 11:51:32.299172   19545 logs.go:276] 2 containers: [bdb8b0d63638 7a7ed811dead]
	I0819 11:51:32.299252   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 11:51:32.309569   19545 logs.go:276] 1 containers: [03dd95625e48]
	I0819 11:51:32.309636   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 11:51:32.319832   19545 logs.go:276] 2 containers: [d9195d59990e e935629bad41]
	I0819 11:51:32.319904   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 11:51:32.329711   19545 logs.go:276] 0 containers: []
	W0819 11:51:32.329722   19545 logs.go:278] No container was found matching "kindnet"
	I0819 11:51:32.329781   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 11:51:32.340057   19545 logs.go:276] 2 containers: [fd0be420d5f9 02af235bd51f]
	I0819 11:51:32.340076   19545 logs.go:123] Gathering logs for describe nodes ...
	I0819 11:51:32.340082   19545 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 11:51:32.374724   19545 logs.go:123] Gathering logs for etcd [605f9b171d46] ...
	I0819 11:51:32.374735   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 605f9b171d46"
	I0819 11:51:32.388640   19545 logs.go:123] Gathering logs for kubelet ...
	I0819 11:51:32.388655   19545 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 11:51:32.426779   19545 logs.go:123] Gathering logs for dmesg ...
	I0819 11:51:32.426795   19545 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 11:51:32.431318   19545 logs.go:123] Gathering logs for kube-scheduler [bdb8b0d63638] ...
	I0819 11:51:32.431331   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bdb8b0d63638"
	I0819 11:51:32.444323   19545 logs.go:123] Gathering logs for storage-provisioner [fd0be420d5f9] ...
	I0819 11:51:32.444334   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd0be420d5f9"
	I0819 11:51:32.457021   19545 logs.go:123] Gathering logs for Docker ...
	I0819 11:51:32.457036   19545 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 11:51:32.481030   19545 logs.go:123] Gathering logs for kube-apiserver [af251a7e4dc2] ...
	I0819 11:51:32.481049   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af251a7e4dc2"
	I0819 11:51:32.495838   19545 logs.go:123] Gathering logs for etcd [07703ddc91e4] ...
	I0819 11:51:32.495857   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 07703ddc91e4"
	I0819 11:51:32.511696   19545 logs.go:123] Gathering logs for coredns [088a7d76a3fd] ...
	I0819 11:51:32.511705   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 088a7d76a3fd"
	I0819 11:51:32.523983   19545 logs.go:123] Gathering logs for kube-scheduler [7a7ed811dead] ...
	I0819 11:51:32.523996   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a7ed811dead"
	I0819 11:51:32.544629   19545 logs.go:123] Gathering logs for storage-provisioner [02af235bd51f] ...
	I0819 11:51:32.544644   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 02af235bd51f"
	I0819 11:51:32.557296   19545 logs.go:123] Gathering logs for container status ...
	I0819 11:51:32.557307   19545 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 11:51:32.569969   19545 logs.go:123] Gathering logs for kube-apiserver [04973e14da79] ...
	I0819 11:51:32.569982   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 04973e14da79"
	I0819 11:51:32.600740   19545 logs.go:123] Gathering logs for kube-proxy [03dd95625e48] ...
	I0819 11:51:32.600759   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03dd95625e48"
	I0819 11:51:32.613685   19545 logs.go:123] Gathering logs for kube-controller-manager [d9195d59990e] ...
	I0819 11:51:32.613697   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9195d59990e"
	I0819 11:51:32.638663   19545 logs.go:123] Gathering logs for kube-controller-manager [e935629bad41] ...
	I0819 11:51:32.638673   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e935629bad41"
	I0819 11:51:35.156569   19545 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:51:40.158901   19545 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:51:40.159278   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 11:51:40.192851   19545 logs.go:276] 2 containers: [af251a7e4dc2 04973e14da79]
	I0819 11:51:40.192977   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 11:51:40.213641   19545 logs.go:276] 2 containers: [605f9b171d46 07703ddc91e4]
	I0819 11:51:40.213756   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 11:51:40.228046   19545 logs.go:276] 1 containers: [088a7d76a3fd]
	I0819 11:51:40.228123   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 11:51:40.244136   19545 logs.go:276] 2 containers: [bdb8b0d63638 7a7ed811dead]
	I0819 11:51:40.244208   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 11:51:40.254451   19545 logs.go:276] 1 containers: [03dd95625e48]
	I0819 11:51:40.254516   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 11:51:40.265858   19545 logs.go:276] 2 containers: [d9195d59990e e935629bad41]
	I0819 11:51:40.265939   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 11:51:40.276530   19545 logs.go:276] 0 containers: []
	W0819 11:51:40.276541   19545 logs.go:278] No container was found matching "kindnet"
	I0819 11:51:40.276597   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 11:51:40.287813   19545 logs.go:276] 2 containers: [fd0be420d5f9 02af235bd51f]
	I0819 11:51:40.287831   19545 logs.go:123] Gathering logs for kube-apiserver [af251a7e4dc2] ...
	I0819 11:51:40.287837   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af251a7e4dc2"
	I0819 11:51:40.304249   19545 logs.go:123] Gathering logs for coredns [088a7d76a3fd] ...
	I0819 11:51:40.304265   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 088a7d76a3fd"
	I0819 11:51:40.316519   19545 logs.go:123] Gathering logs for container status ...
	I0819 11:51:40.316532   19545 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 11:51:40.329225   19545 logs.go:123] Gathering logs for describe nodes ...
	I0819 11:51:40.329243   19545 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 11:51:40.368344   19545 logs.go:123] Gathering logs for kube-apiserver [04973e14da79] ...
	I0819 11:51:40.368354   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 04973e14da79"
	I0819 11:51:40.394984   19545 logs.go:123] Gathering logs for kube-proxy [03dd95625e48] ...
	I0819 11:51:40.394999   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03dd95625e48"
	I0819 11:51:40.407734   19545 logs.go:123] Gathering logs for kubelet ...
	I0819 11:51:40.407746   19545 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 11:51:40.448054   19545 logs.go:123] Gathering logs for etcd [605f9b171d46] ...
	I0819 11:51:40.448067   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 605f9b171d46"
	I0819 11:51:40.463126   19545 logs.go:123] Gathering logs for etcd [07703ddc91e4] ...
	I0819 11:51:40.463138   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 07703ddc91e4"
	I0819 11:51:40.479116   19545 logs.go:123] Gathering logs for kube-scheduler [7a7ed811dead] ...
	I0819 11:51:40.479163   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a7ed811dead"
	I0819 11:51:40.494883   19545 logs.go:123] Gathering logs for kube-controller-manager [d9195d59990e] ...
	I0819 11:51:40.494896   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9195d59990e"
	I0819 11:51:40.513813   19545 logs.go:123] Gathering logs for kube-controller-manager [e935629bad41] ...
	I0819 11:51:40.513826   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e935629bad41"
	I0819 11:51:40.538142   19545 logs.go:123] Gathering logs for Docker ...
	I0819 11:51:40.538155   19545 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 11:51:40.562049   19545 logs.go:123] Gathering logs for dmesg ...
	I0819 11:51:40.562057   19545 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 11:51:40.566975   19545 logs.go:123] Gathering logs for kube-scheduler [bdb8b0d63638] ...
	I0819 11:51:40.566987   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bdb8b0d63638"
	I0819 11:51:40.580241   19545 logs.go:123] Gathering logs for storage-provisioner [fd0be420d5f9] ...
	I0819 11:51:40.580253   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd0be420d5f9"
	I0819 11:51:40.594806   19545 logs.go:123] Gathering logs for storage-provisioner [02af235bd51f] ...
	I0819 11:51:40.594819   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 02af235bd51f"
	I0819 11:51:43.113070   19545 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:51:48.115293   19545 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:51:48.115656   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 11:51:48.147180   19545 logs.go:276] 2 containers: [af251a7e4dc2 04973e14da79]
	I0819 11:51:48.147317   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 11:51:48.166509   19545 logs.go:276] 2 containers: [605f9b171d46 07703ddc91e4]
	I0819 11:51:48.166602   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 11:51:48.182045   19545 logs.go:276] 1 containers: [088a7d76a3fd]
	I0819 11:51:48.182122   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 11:51:48.195305   19545 logs.go:276] 2 containers: [bdb8b0d63638 7a7ed811dead]
	I0819 11:51:48.195375   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 11:51:48.207739   19545 logs.go:276] 1 containers: [03dd95625e48]
	I0819 11:51:48.207812   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 11:51:48.219438   19545 logs.go:276] 2 containers: [d9195d59990e e935629bad41]
	I0819 11:51:48.219509   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 11:51:48.239916   19545 logs.go:276] 0 containers: []
	W0819 11:51:48.239928   19545 logs.go:278] No container was found matching "kindnet"
	I0819 11:51:48.239986   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 11:51:48.255584   19545 logs.go:276] 2 containers: [fd0be420d5f9 02af235bd51f]
	I0819 11:51:48.255603   19545 logs.go:123] Gathering logs for coredns [088a7d76a3fd] ...
	I0819 11:51:48.255609   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 088a7d76a3fd"
	I0819 11:51:48.268302   19545 logs.go:123] Gathering logs for kube-controller-manager [e935629bad41] ...
	I0819 11:51:48.268314   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e935629bad41"
	I0819 11:51:48.284169   19545 logs.go:123] Gathering logs for Docker ...
	I0819 11:51:48.284183   19545 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 11:51:48.308736   19545 logs.go:123] Gathering logs for kube-apiserver [04973e14da79] ...
	I0819 11:51:48.308751   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 04973e14da79"
	I0819 11:51:48.338381   19545 logs.go:123] Gathering logs for etcd [07703ddc91e4] ...
	I0819 11:51:48.338393   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 07703ddc91e4"
	I0819 11:51:48.357898   19545 logs.go:123] Gathering logs for etcd [605f9b171d46] ...
	I0819 11:51:48.357910   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 605f9b171d46"
	I0819 11:51:48.373147   19545 logs.go:123] Gathering logs for storage-provisioner [02af235bd51f] ...
	I0819 11:51:48.373158   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 02af235bd51f"
	I0819 11:51:48.385571   19545 logs.go:123] Gathering logs for kubelet ...
	I0819 11:51:48.385582   19545 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 11:51:48.424817   19545 logs.go:123] Gathering logs for kube-apiserver [af251a7e4dc2] ...
	I0819 11:51:48.424829   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af251a7e4dc2"
	I0819 11:51:48.447028   19545 logs.go:123] Gathering logs for kube-scheduler [7a7ed811dead] ...
	I0819 11:51:48.447040   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a7ed811dead"
	I0819 11:51:48.465875   19545 logs.go:123] Gathering logs for kube-proxy [03dd95625e48] ...
	I0819 11:51:48.465887   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03dd95625e48"
	I0819 11:51:48.478502   19545 logs.go:123] Gathering logs for kube-controller-manager [d9195d59990e] ...
	I0819 11:51:48.478515   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9195d59990e"
	I0819 11:51:48.496436   19545 logs.go:123] Gathering logs for dmesg ...
	I0819 11:51:48.496445   19545 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 11:51:48.501305   19545 logs.go:123] Gathering logs for describe nodes ...
	I0819 11:51:48.501316   19545 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 11:51:48.539190   19545 logs.go:123] Gathering logs for container status ...
	I0819 11:51:48.539202   19545 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 11:51:48.551438   19545 logs.go:123] Gathering logs for kube-scheduler [bdb8b0d63638] ...
	I0819 11:51:48.551452   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bdb8b0d63638"
	I0819 11:51:48.563799   19545 logs.go:123] Gathering logs for storage-provisioner [fd0be420d5f9] ...
	I0819 11:51:48.563810   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd0be420d5f9"
	I0819 11:51:51.076119   19545 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:51:56.078489   19545 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:51:56.078656   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 11:51:56.112919   19545 logs.go:276] 2 containers: [af251a7e4dc2 04973e14da79]
	I0819 11:51:56.113004   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 11:51:56.129090   19545 logs.go:276] 2 containers: [605f9b171d46 07703ddc91e4]
	I0819 11:51:56.129159   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 11:51:56.148985   19545 logs.go:276] 1 containers: [088a7d76a3fd]
	I0819 11:51:56.149053   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 11:51:56.161392   19545 logs.go:276] 2 containers: [bdb8b0d63638 7a7ed811dead]
	I0819 11:51:56.161492   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 11:51:56.183322   19545 logs.go:276] 1 containers: [03dd95625e48]
	I0819 11:51:56.183386   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 11:51:56.199604   19545 logs.go:276] 2 containers: [d9195d59990e e935629bad41]
	I0819 11:51:56.199669   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 11:51:56.211044   19545 logs.go:276] 0 containers: []
	W0819 11:51:56.211054   19545 logs.go:278] No container was found matching "kindnet"
	I0819 11:51:56.211112   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 11:51:56.223509   19545 logs.go:276] 2 containers: [fd0be420d5f9 02af235bd51f]
	I0819 11:51:56.223528   19545 logs.go:123] Gathering logs for describe nodes ...
	I0819 11:51:56.223534   19545 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 11:51:56.261640   19545 logs.go:123] Gathering logs for etcd [605f9b171d46] ...
	I0819 11:51:56.261653   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 605f9b171d46"
	I0819 11:51:56.276884   19545 logs.go:123] Gathering logs for kube-scheduler [bdb8b0d63638] ...
	I0819 11:51:56.276896   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bdb8b0d63638"
	I0819 11:51:56.289298   19545 logs.go:123] Gathering logs for storage-provisioner [fd0be420d5f9] ...
	I0819 11:51:56.289310   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd0be420d5f9"
	I0819 11:51:56.303493   19545 logs.go:123] Gathering logs for storage-provisioner [02af235bd51f] ...
	I0819 11:51:56.303501   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 02af235bd51f"
	I0819 11:51:56.316128   19545 logs.go:123] Gathering logs for container status ...
	I0819 11:51:56.316144   19545 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 11:51:56.330564   19545 logs.go:123] Gathering logs for kubelet ...
	I0819 11:51:56.330576   19545 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 11:51:56.371520   19545 logs.go:123] Gathering logs for etcd [07703ddc91e4] ...
	I0819 11:51:56.371537   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 07703ddc91e4"
	I0819 11:51:56.388278   19545 logs.go:123] Gathering logs for kube-proxy [03dd95625e48] ...
	I0819 11:51:56.388292   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03dd95625e48"
	I0819 11:51:56.400386   19545 logs.go:123] Gathering logs for kube-controller-manager [d9195d59990e] ...
	I0819 11:51:56.400400   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9195d59990e"
	I0819 11:51:56.418680   19545 logs.go:123] Gathering logs for kube-controller-manager [e935629bad41] ...
	I0819 11:51:56.418698   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e935629bad41"
	I0819 11:51:56.434925   19545 logs.go:123] Gathering logs for dmesg ...
	I0819 11:51:56.434943   19545 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 11:51:56.439573   19545 logs.go:123] Gathering logs for kube-apiserver [af251a7e4dc2] ...
	I0819 11:51:56.439582   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af251a7e4dc2"
	I0819 11:51:56.454354   19545 logs.go:123] Gathering logs for kube-apiserver [04973e14da79] ...
	I0819 11:51:56.454366   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 04973e14da79"
	I0819 11:51:56.480747   19545 logs.go:123] Gathering logs for coredns [088a7d76a3fd] ...
	I0819 11:51:56.480766   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 088a7d76a3fd"
	I0819 11:51:56.492211   19545 logs.go:123] Gathering logs for kube-scheduler [7a7ed811dead] ...
	I0819 11:51:56.492223   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a7ed811dead"
	I0819 11:51:56.507653   19545 logs.go:123] Gathering logs for Docker ...
	I0819 11:51:56.507664   19545 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 11:51:59.031486   19545 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:52:04.033212   19545 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:52:04.033325   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 11:52:04.048797   19545 logs.go:276] 2 containers: [af251a7e4dc2 04973e14da79]
	I0819 11:52:04.048871   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 11:52:04.060627   19545 logs.go:276] 2 containers: [605f9b171d46 07703ddc91e4]
	I0819 11:52:04.060701   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 11:52:04.073013   19545 logs.go:276] 1 containers: [088a7d76a3fd]
	I0819 11:52:04.073091   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 11:52:04.084720   19545 logs.go:276] 2 containers: [bdb8b0d63638 7a7ed811dead]
	I0819 11:52:04.084790   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 11:52:04.095825   19545 logs.go:276] 1 containers: [03dd95625e48]
	I0819 11:52:04.095896   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 11:52:04.111442   19545 logs.go:276] 2 containers: [d9195d59990e e935629bad41]
	I0819 11:52:04.111512   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 11:52:04.122987   19545 logs.go:276] 0 containers: []
	W0819 11:52:04.122999   19545 logs.go:278] No container was found matching "kindnet"
	I0819 11:52:04.123058   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 11:52:04.134055   19545 logs.go:276] 2 containers: [fd0be420d5f9 02af235bd51f]
	I0819 11:52:04.134072   19545 logs.go:123] Gathering logs for dmesg ...
	I0819 11:52:04.134078   19545 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 11:52:04.138716   19545 logs.go:123] Gathering logs for etcd [605f9b171d46] ...
	I0819 11:52:04.138721   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 605f9b171d46"
	I0819 11:52:04.154361   19545 logs.go:123] Gathering logs for etcd [07703ddc91e4] ...
	I0819 11:52:04.154369   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 07703ddc91e4"
	I0819 11:52:04.169271   19545 logs.go:123] Gathering logs for coredns [088a7d76a3fd] ...
	I0819 11:52:04.169282   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 088a7d76a3fd"
	I0819 11:52:04.183237   19545 logs.go:123] Gathering logs for kube-scheduler [7a7ed811dead] ...
	I0819 11:52:04.183250   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a7ed811dead"
	I0819 11:52:04.199112   19545 logs.go:123] Gathering logs for kube-proxy [03dd95625e48] ...
	I0819 11:52:04.199129   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03dd95625e48"
	I0819 11:52:04.212532   19545 logs.go:123] Gathering logs for Docker ...
	I0819 11:52:04.212544   19545 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 11:52:04.235239   19545 logs.go:123] Gathering logs for container status ...
	I0819 11:52:04.235248   19545 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 11:52:04.248487   19545 logs.go:123] Gathering logs for kubelet ...
	I0819 11:52:04.248503   19545 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 11:52:04.289896   19545 logs.go:123] Gathering logs for kube-apiserver [04973e14da79] ...
	I0819 11:52:04.289909   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 04973e14da79"
	I0819 11:52:04.323526   19545 logs.go:123] Gathering logs for kube-controller-manager [e935629bad41] ...
	I0819 11:52:04.323544   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e935629bad41"
	I0819 11:52:04.338910   19545 logs.go:123] Gathering logs for storage-provisioner [02af235bd51f] ...
	I0819 11:52:04.338924   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 02af235bd51f"
	I0819 11:52:04.350403   19545 logs.go:123] Gathering logs for describe nodes ...
	I0819 11:52:04.350415   19545 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 11:52:04.384901   19545 logs.go:123] Gathering logs for kube-apiserver [af251a7e4dc2] ...
	I0819 11:52:04.384911   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af251a7e4dc2"
	I0819 11:52:04.400534   19545 logs.go:123] Gathering logs for kube-scheduler [bdb8b0d63638] ...
	I0819 11:52:04.400544   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bdb8b0d63638"
	I0819 11:52:04.413023   19545 logs.go:123] Gathering logs for kube-controller-manager [d9195d59990e] ...
	I0819 11:52:04.413033   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9195d59990e"
	I0819 11:52:04.430539   19545 logs.go:123] Gathering logs for storage-provisioner [fd0be420d5f9] ...
	I0819 11:52:04.430550   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd0be420d5f9"
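The cycle above is minikube's diagnostic sweep: for each control-plane component it lists matching containers by name filter, then tails each container's last 400 log lines. A minimal Go sketch of that pattern (illustrative only, not the actual logs.go implementation):

```go
// Sketch of the gather loop seen above: list k8s_<component> containers,
// then tail each one's last 400 log lines via `docker logs`.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	components := []string{"kube-apiserver", "etcd", "coredns",
		"kube-scheduler", "kube-proxy", "kube-controller-manager",
		"kindnet", "storage-provisioner"}
	for _, c := range components {
		out, err := exec.Command("docker", "ps", "-a",
			"--filter", "name=k8s_"+c, "--format", "{{.ID}}").Output()
		if err != nil {
			continue
		}
		ids := strings.Fields(string(out))
		fmt.Printf("%s: %d containers: %v\n", c, len(ids), ids)
		for _, id := range ids {
			// docker logs --tail 400 <id>, as run via ssh_runner above.
			logs, _ := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
			_ = logs // minikube folds these into the diagnostic output
		}
	}
}
```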
	I0819 11:52:06.942291   19545 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:52:11.944681   19545 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:52:11.944714   19545 kubeadm.go:597] duration metric: took 4m4.119914s to restartPrimaryControlPlane
	W0819 11:52:11.944746   19545 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	! Unable to restart control-plane node(s), will reset cluster: <no value>
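Each "Checking apiserver healthz ..." / "stopped: ..." pair in this transcript is a single probe of https://10.0.2.15:8443/healthz that hit the roughly 5-second client timeout. A hedged Go sketch of such a poll loop (endpoint and timeout taken from the log; the retry count and TLS handling are illustrative, not minikube's api_server.go):

```go
// Illustrative healthz poll: GET the endpoint with a short client timeout,
// retrying until the API server answers or attempts run out.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second, // matches the ~5s gap between probe lines
		Transport: &http.Transport{
			// Sketch only: minikube verifies against its own CA instead.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	for i := 0; i < 10; i++ {
		resp, err := client.Get("https://10.0.2.15:8443/healthz")
		if err != nil {
			// e.g. "Client.Timeout exceeded while awaiting headers"
			fmt.Println("stopped:", err)
			time.Sleep(2 * time.Second)
			continue
		}
		status := resp.StatusCode
		resp.Body.Close()
		if status == http.StatusOK {
			fmt.Println("apiserver healthy")
			return
		}
	}
}
```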
	I0819 11:52:11.944760   19545 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0819 11:52:12.957291   19545 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force": (1.012541916s)
	I0819 11:52:12.957349   19545 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 11:52:12.962734   19545 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0819 11:52:12.965726   19545 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0819 11:52:12.968455   19545 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0819 11:52:12.968461   19545 kubeadm.go:157] found existing configuration files:
	
	I0819 11:52:12.968487   19545 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:53361 /etc/kubernetes/admin.conf
	I0819 11:52:12.971045   19545 kubeadm.go:163] "https://control-plane.minikube.internal:53361" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:53361 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0819 11:52:12.971073   19545 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0819 11:52:12.973646   19545 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:53361 /etc/kubernetes/kubelet.conf
	I0819 11:52:12.976909   19545 kubeadm.go:163] "https://control-plane.minikube.internal:53361" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:53361 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0819 11:52:12.976937   19545 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0819 11:52:12.980527   19545 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:53361 /etc/kubernetes/controller-manager.conf
	I0819 11:52:12.983635   19545 kubeadm.go:163] "https://control-plane.minikube.internal:53361" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:53361 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0819 11:52:12.983660   19545 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0819 11:52:12.986261   19545 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:53361 /etc/kubernetes/scheduler.conf
	I0819 11:52:12.989136   19545 kubeadm.go:163] "https://control-plane.minikube.internal:53361" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:53361 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0819 11:52:12.989160   19545 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
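The grep/rm sequence above removes any kubeconfig under /etc/kubernetes that does not reference the expected control-plane URL; here every grep exits with status 2 because kubeadm reset had already removed the files. An equivalent sketch in Go (a hypothetical helper, not minikube's kubeadm.go):

```go
// Stale-config cleanup sketch: a kubeconfig that is missing, or that does
// not mention the expected control-plane URL, is deleted so that the
// following `kubeadm init` regenerates it.
package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	const wantURL = "https://control-plane.minikube.internal:53361"
	files := []string{"admin.conf", "kubelet.conf",
		"controller-manager.conf", "scheduler.conf"}
	for _, f := range files {
		path := "/etc/kubernetes/" + f
		data, err := os.ReadFile(path)
		if err != nil || !strings.Contains(string(data), wantURL) {
			fmt.Printf("%q may not be in %s - will remove\n", wantURL, path)
			os.Remove(path) // error ignored, mirroring `sudo rm -f`
		}
	}
}
```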
	I0819 11:52:12.992542   19545 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0819 11:52:13.011535   19545 kubeadm.go:310] [init] Using Kubernetes version: v1.24.1
	I0819 11:52:13.011565   19545 kubeadm.go:310] [preflight] Running pre-flight checks
	I0819 11:52:13.060079   19545 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0819 11:52:13.060193   19545 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0819 11:52:13.060251   19545 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0819 11:52:13.109044   19545 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0819 11:52:13.117205   19545 out.go:235]   - Generating certificates and keys ...
	I0819 11:52:13.117239   19545 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0819 11:52:13.117272   19545 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0819 11:52:13.117309   19545 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0819 11:52:13.117336   19545 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0819 11:52:13.117375   19545 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0819 11:52:13.117401   19545 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0819 11:52:13.117436   19545 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0819 11:52:13.117467   19545 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0819 11:52:13.117502   19545 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0819 11:52:13.117542   19545 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0819 11:52:13.117566   19545 kubeadm.go:310] [certs] Using the existing "sa" key
	I0819 11:52:13.117598   19545 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0819 11:52:13.167194   19545 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0819 11:52:13.247252   19545 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0819 11:52:13.304243   19545 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0819 11:52:13.371017   19545 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0819 11:52:13.399052   19545 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0819 11:52:13.399468   19545 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0819 11:52:13.399496   19545 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0819 11:52:13.486404   19545 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0819 11:52:13.493583   19545 out.go:235]   - Booting up control plane ...
	I0819 11:52:13.493639   19545 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0819 11:52:13.493684   19545 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0819 11:52:13.493719   19545 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0819 11:52:13.493757   19545 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0819 11:52:13.493841   19545 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0819 11:52:17.991648   19545 kubeadm.go:310] [apiclient] All control plane components are healthy after 4.501221 seconds
	I0819 11:52:17.991712   19545 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0819 11:52:17.996796   19545 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0819 11:52:18.521560   19545 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0819 11:52:18.521816   19545 kubeadm.go:310] [mark-control-plane] Marking the node stopped-upgrade-604000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0819 11:52:19.027734   19545 kubeadm.go:310] [bootstrap-token] Using token: l3au5v.8norsn0i1fxpzhal
	I0819 11:52:19.033450   19545 out.go:235]   - Configuring RBAC rules ...
	I0819 11:52:19.033518   19545 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0819 11:52:19.033564   19545 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0819 11:52:19.035471   19545 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0819 11:52:19.037169   19545 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0819 11:52:19.038146   19545 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0819 11:52:19.039036   19545 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0819 11:52:19.042528   19545 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0819 11:52:19.214216   19545 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0819 11:52:19.431859   19545 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0819 11:52:19.432499   19545 kubeadm.go:310] 
	I0819 11:52:19.432534   19545 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0819 11:52:19.432536   19545 kubeadm.go:310] 
	I0819 11:52:19.432585   19545 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0819 11:52:19.432591   19545 kubeadm.go:310] 
	I0819 11:52:19.432604   19545 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0819 11:52:19.432663   19545 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0819 11:52:19.432697   19545 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0819 11:52:19.432704   19545 kubeadm.go:310] 
	I0819 11:52:19.432731   19545 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0819 11:52:19.432734   19545 kubeadm.go:310] 
	I0819 11:52:19.432766   19545 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0819 11:52:19.432768   19545 kubeadm.go:310] 
	I0819 11:52:19.432797   19545 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0819 11:52:19.432839   19545 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0819 11:52:19.432881   19545 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0819 11:52:19.432887   19545 kubeadm.go:310] 
	I0819 11:52:19.432944   19545 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0819 11:52:19.432985   19545 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0819 11:52:19.432990   19545 kubeadm.go:310] 
	I0819 11:52:19.433031   19545 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token l3au5v.8norsn0i1fxpzhal \
	I0819 11:52:19.433082   19545 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:de6b00bb5195e85ea9c628edaf5f990495686095194c39efa1f1f29b580598ae \
	I0819 11:52:19.433104   19545 kubeadm.go:310] 	--control-plane 
	I0819 11:52:19.433110   19545 kubeadm.go:310] 
	I0819 11:52:19.433156   19545 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0819 11:52:19.433160   19545 kubeadm.go:310] 
	I0819 11:52:19.433205   19545 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token l3au5v.8norsn0i1fxpzhal \
	I0819 11:52:19.433272   19545 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:de6b00bb5195e85ea9c628edaf5f990495686095194c39efa1f1f29b580598ae 
	I0819 11:52:19.433365   19545 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
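The --discovery-token-ca-cert-hash in the join commands above is the SHA-256 digest of the cluster CA certificate's DER-encoded Subject Public Key Info. A self-contained Go sketch that computes the same kind of value (the CA path is the kubeadm default, an assumption not shown in this log):

```go
// Compute a kubeadm-style discovery hash: sha256 over the DER-encoded
// SubjectPublicKeyInfo of the cluster CA certificate.
package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	pemBytes, err := os.ReadFile("/etc/kubernetes/pki/ca.crt") // assumed path
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(pemBytes)
	if block == nil {
		panic("no PEM block in ca.crt")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
	fmt.Printf("sha256:%x\n", sum) // compare with the hash in the join command
}
```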
	I0819 11:52:19.433466   19545 cni.go:84] Creating CNI manager for ""
	I0819 11:52:19.433475   19545 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0819 11:52:19.440406   19545 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0819 11:52:19.444564   19545 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0819 11:52:19.447882   19545 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
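The 496-byte payload copied to /etc/cni/net.d/1-k8s.conflist is minikube's bridge CNI configuration. Its exact contents are not in the log, so the sketch below writes an illustrative bridge-plugin conflist with assumed field values, not minikube's actual file:

```go
// Hypothetical bridge CNI conflist in the shape minikube installs; the
// field values are stock bridge/host-local plugin settings, chosen for
// illustration only.
package main

import "os"

const conflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isGateway": true,
      "ipMasq": true,
      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}`

func main() {
	// mkdir -p /etc/cni/net.d, then write the conflist, as in the log above.
	if err := os.MkdirAll("/etc/cni/net.d", 0o755); err != nil {
		panic(err)
	}
	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0o644); err != nil {
		panic(err)
	}
}
```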
	I0819 11:52:19.452644   19545 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0819 11:52:19.452704   19545 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 11:52:19.452714   19545 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes stopped-upgrade-604000 minikube.k8s.io/updated_at=2024_08_19T11_52_19_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=61bea45f08282fbfcbc51a63f2fbf5fa5e7e26a8 minikube.k8s.io/name=stopped-upgrade-604000 minikube.k8s.io/primary=true
	I0819 11:52:19.455959   19545 ops.go:34] apiserver oom_adj: -16
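The oom_adj probe above reads /proc/<apiserver-pid>/oom_adj; the reported -16 means the kernel's OOM killer strongly deprioritizes the API server. A minimal sketch of the same check (using plain pgrep flags rather than the exact pattern in the log):

```go
// Read the API server's OOM adjustment, mirroring
// `cat /proc/$(pgrep kube-apiserver)/oom_adj` from the transcript.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("pgrep", "-xn", "kube-apiserver").Output()
	if err != nil {
		fmt.Println("kube-apiserver not running:", err)
		return
	}
	pid := strings.TrimSpace(string(out))
	adj, err := os.ReadFile("/proc/" + pid + "/oom_adj")
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Printf("apiserver oom_adj: %s\n", strings.TrimSpace(string(adj)))
}
```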
	I0819 11:52:19.502376   19545 kubeadm.go:1113] duration metric: took 49.714375ms to wait for elevateKubeSystemPrivileges
	I0819 11:52:19.502391   19545 kubeadm.go:394] duration metric: took 4m11.691038125s to StartCluster
	I0819 11:52:19.502402   19545 settings.go:142] acquiring lock: {Name:mkd10d56bae48d75d53289d9920be83758fb5ae6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 11:52:19.502490   19545 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19423-17178/kubeconfig
	I0819 11:52:19.502919   19545 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19423-17178/kubeconfig: {Name:mkcd8a4d29cc5f324e197a69fe511a87d17c54d6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 11:52:19.503130   19545 start.go:235] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0819 11:52:19.503143   19545 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0819 11:52:19.503177   19545 addons.go:69] Setting storage-provisioner=true in profile "stopped-upgrade-604000"
	I0819 11:52:19.503181   19545 addons.go:69] Setting default-storageclass=true in profile "stopped-upgrade-604000"
	I0819 11:52:19.503191   19545 addons.go:234] Setting addon storage-provisioner=true in "stopped-upgrade-604000"
	W0819 11:52:19.503194   19545 addons.go:243] addon storage-provisioner should already be in state true
	I0819 11:52:19.503195   19545 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "stopped-upgrade-604000"
	I0819 11:52:19.503208   19545 host.go:66] Checking if "stopped-upgrade-604000" exists ...
	I0819 11:52:19.503224   19545 config.go:182] Loaded profile config "stopped-upgrade-604000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0819 11:52:19.506401   19545 out.go:177] * Verifying Kubernetes components...
	I0819 11:52:19.507025   19545 kapi.go:59] client config for stopped-upgrade-604000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19423-17178/.minikube/profiles/stopped-upgrade-604000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19423-17178/.minikube/profiles/stopped-upgrade-604000/client.key", CAFile:"/Users/jenkins/minikube-integration/19423-17178/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x105aed990), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0819 11:52:19.509844   19545 addons.go:234] Setting addon default-storageclass=true in "stopped-upgrade-604000"
	W0819 11:52:19.509848   19545 addons.go:243] addon default-storageclass should already be in state true
	I0819 11:52:19.509857   19545 host.go:66] Checking if "stopped-upgrade-604000" exists ...
	I0819 11:52:19.510403   19545 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0819 11:52:19.510408   19545 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0819 11:52:19.510413   19545 sshutil.go:53] new ssh client: &{IP:localhost Port:53326 SSHKeyPath:/Users/jenkins/minikube-integration/19423-17178/.minikube/machines/stopped-upgrade-604000/id_rsa Username:docker}
	I0819 11:52:19.513243   19545 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0819 11:52:19.517371   19545 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 11:52:19.521473   19545 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0819 11:52:19.521478   19545 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0819 11:52:19.521484   19545 sshutil.go:53] new ssh client: &{IP:localhost Port:53326 SSHKeyPath:/Users/jenkins/minikube-integration/19423-17178/.minikube/machines/stopped-upgrade-604000/id_rsa Username:docker}
	I0819 11:52:19.589570   19545 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0819 11:52:19.594743   19545 api_server.go:52] waiting for apiserver process to appear ...
	I0819 11:52:19.594793   19545 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 11:52:19.598683   19545 api_server.go:72] duration metric: took 95.544958ms to wait for apiserver process to appear ...
	I0819 11:52:19.598691   19545 api_server.go:88] waiting for apiserver healthz status ...
	I0819 11:52:19.598699   19545 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:52:19.631027   19545 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0819 11:52:19.647291   19545 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0819 11:52:20.017350   19545 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0819 11:52:20.017363   19545 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0819 11:52:24.600739   19545 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:52:24.600784   19545 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:52:29.600946   19545 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:52:29.600975   19545 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:52:34.601206   19545 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:52:34.601253   19545 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:52:39.601873   19545 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:52:39.601913   19545 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:52:44.602554   19545 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:52:44.602599   19545 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:52:49.603359   19545 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:52:49.603400   19545 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W0819 11:52:50.019129   19545 out.go:270] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I0819 11:52:50.024518   19545 out.go:177] * Enabled addons: storage-provisioner
	I0819 11:52:50.035396   19545 addons.go:510] duration metric: took 30.532908s for enable addons: enabled=[storage-provisioner]
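The default-storageclass callback failed above only because the API server was unreachable; when it succeeds, "making standard the default storage class" amounts to setting the well-known is-default-class annotation on the StorageClass. A hedged sketch, shelling out to kubectl (the class name "standard" follows minikube's convention; the rest is illustrative, not minikube's storage-classes code):

```go
// Mark the "standard" StorageClass as the cluster default by patching in
// the storageclass.kubernetes.io/is-default-class annotation.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	patch := `{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}`
	out, err := exec.Command("kubectl", "patch", "storageclass", "standard",
		"-p", patch).CombinedOutput()
	fmt.Print(string(out))
	if err != nil {
		fmt.Println("patch failed:", err) // e.g. apiserver unreachable, as above
	}
}
```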
	I0819 11:52:54.604470   19545 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:52:54.604566   19545 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:52:59.606453   19545 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:52:59.606475   19545 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:53:04.607833   19545 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:53:04.607876   19545 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:53:09.610107   19545 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:53:09.610144   19545 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:53:14.612320   19545 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:53:14.612360   19545 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:53:19.614611   19545 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:53:19.614858   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 11:53:19.647956   19545 logs.go:276] 1 containers: [3e471984022f]
	I0819 11:53:19.648077   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 11:53:19.662018   19545 logs.go:276] 1 containers: [0d32f5d90489]
	I0819 11:53:19.662089   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 11:53:19.672740   19545 logs.go:276] 2 containers: [079939a9b1a7 5ec50c57271c]
	I0819 11:53:19.672809   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 11:53:19.684666   19545 logs.go:276] 1 containers: [c3e3692e0c80]
	I0819 11:53:19.684741   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 11:53:19.694932   19545 logs.go:276] 1 containers: [ba925a3f77d6]
	I0819 11:53:19.694997   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 11:53:19.705405   19545 logs.go:276] 1 containers: [63a4476c2ace]
	I0819 11:53:19.705465   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 11:53:19.715794   19545 logs.go:276] 0 containers: []
	W0819 11:53:19.715806   19545 logs.go:278] No container was found matching "kindnet"
	I0819 11:53:19.715863   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 11:53:19.730569   19545 logs.go:276] 1 containers: [e6303d5f6b5f]
	I0819 11:53:19.730586   19545 logs.go:123] Gathering logs for dmesg ...
	I0819 11:53:19.730592   19545 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 11:53:19.735229   19545 logs.go:123] Gathering logs for describe nodes ...
	I0819 11:53:19.735238   19545 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 11:53:19.770184   19545 logs.go:123] Gathering logs for coredns [079939a9b1a7] ...
	I0819 11:53:19.770194   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 079939a9b1a7"
	I0819 11:53:19.782291   19545 logs.go:123] Gathering logs for kube-proxy [ba925a3f77d6] ...
	I0819 11:53:19.782302   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba925a3f77d6"
	I0819 11:53:19.794324   19545 logs.go:123] Gathering logs for storage-provisioner [e6303d5f6b5f] ...
	I0819 11:53:19.794334   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6303d5f6b5f"
	I0819 11:53:19.805773   19545 logs.go:123] Gathering logs for container status ...
	I0819 11:53:19.805782   19545 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 11:53:19.816977   19545 logs.go:123] Gathering logs for kubelet ...
	I0819 11:53:19.816999   19545 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 11:53:19.852554   19545 logs.go:123] Gathering logs for kube-apiserver [3e471984022f] ...
	I0819 11:53:19.852571   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e471984022f"
	I0819 11:53:19.868033   19545 logs.go:123] Gathering logs for etcd [0d32f5d90489] ...
	I0819 11:53:19.868042   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d32f5d90489"
	I0819 11:53:19.882260   19545 logs.go:123] Gathering logs for coredns [5ec50c57271c] ...
	I0819 11:53:19.882269   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ec50c57271c"
	I0819 11:53:19.894294   19545 logs.go:123] Gathering logs for kube-scheduler [c3e3692e0c80] ...
	I0819 11:53:19.894304   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3e3692e0c80"
	I0819 11:53:19.909309   19545 logs.go:123] Gathering logs for kube-controller-manager [63a4476c2ace] ...
	I0819 11:53:19.909318   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 63a4476c2ace"
	I0819 11:53:19.926556   19545 logs.go:123] Gathering logs for Docker ...
	I0819 11:53:19.926568   19545 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 11:53:22.451678   19545 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:53:27.453894   19545 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:53:27.454011   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 11:53:27.465033   19545 logs.go:276] 1 containers: [3e471984022f]
	I0819 11:53:27.465098   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 11:53:27.475831   19545 logs.go:276] 1 containers: [0d32f5d90489]
	I0819 11:53:27.475896   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 11:53:27.486303   19545 logs.go:276] 2 containers: [079939a9b1a7 5ec50c57271c]
	I0819 11:53:27.486369   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 11:53:27.498088   19545 logs.go:276] 1 containers: [c3e3692e0c80]
	I0819 11:53:27.498157   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 11:53:27.508620   19545 logs.go:276] 1 containers: [ba925a3f77d6]
	I0819 11:53:27.508683   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 11:53:27.518916   19545 logs.go:276] 1 containers: [63a4476c2ace]
	I0819 11:53:27.518984   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 11:53:27.529226   19545 logs.go:276] 0 containers: []
	W0819 11:53:27.529239   19545 logs.go:278] No container was found matching "kindnet"
	I0819 11:53:27.529297   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 11:53:27.539696   19545 logs.go:276] 1 containers: [e6303d5f6b5f]
	I0819 11:53:27.539715   19545 logs.go:123] Gathering logs for kubelet ...
	I0819 11:53:27.539721   19545 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 11:53:27.575052   19545 logs.go:123] Gathering logs for describe nodes ...
	I0819 11:53:27.575061   19545 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 11:53:27.609640   19545 logs.go:123] Gathering logs for kube-apiserver [3e471984022f] ...
	I0819 11:53:27.609651   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e471984022f"
	I0819 11:53:27.623552   19545 logs.go:123] Gathering logs for etcd [0d32f5d90489] ...
	I0819 11:53:27.623565   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d32f5d90489"
	I0819 11:53:27.642010   19545 logs.go:123] Gathering logs for coredns [5ec50c57271c] ...
	I0819 11:53:27.642024   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ec50c57271c"
	I0819 11:53:27.653901   19545 logs.go:123] Gathering logs for kube-scheduler [c3e3692e0c80] ...
	I0819 11:53:27.653914   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3e3692e0c80"
	I0819 11:53:27.670969   19545 logs.go:123] Gathering logs for Docker ...
	I0819 11:53:27.670979   19545 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 11:53:27.695373   19545 logs.go:123] Gathering logs for dmesg ...
	I0819 11:53:27.695382   19545 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 11:53:27.699734   19545 logs.go:123] Gathering logs for coredns [079939a9b1a7] ...
	I0819 11:53:27.699741   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 079939a9b1a7"
	I0819 11:53:27.711945   19545 logs.go:123] Gathering logs for kube-proxy [ba925a3f77d6] ...
	I0819 11:53:27.711958   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba925a3f77d6"
	I0819 11:53:27.723754   19545 logs.go:123] Gathering logs for kube-controller-manager [63a4476c2ace] ...
	I0819 11:53:27.723766   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 63a4476c2ace"
	I0819 11:53:27.741606   19545 logs.go:123] Gathering logs for storage-provisioner [e6303d5f6b5f] ...
	I0819 11:53:27.741616   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6303d5f6b5f"
	I0819 11:53:27.753019   19545 logs.go:123] Gathering logs for container status ...
	I0819 11:53:27.753029   19545 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 11:53:30.267164   19545 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:53:35.269400   19545 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:53:35.269586   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 11:53:35.287119   19545 logs.go:276] 1 containers: [3e471984022f]
	I0819 11:53:35.287197   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 11:53:35.303115   19545 logs.go:276] 1 containers: [0d32f5d90489]
	I0819 11:53:35.303192   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 11:53:35.314871   19545 logs.go:276] 2 containers: [079939a9b1a7 5ec50c57271c]
	I0819 11:53:35.314944   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 11:53:35.325273   19545 logs.go:276] 1 containers: [c3e3692e0c80]
	I0819 11:53:35.325338   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 11:53:35.343847   19545 logs.go:276] 1 containers: [ba925a3f77d6]
	I0819 11:53:35.343915   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 11:53:35.358933   19545 logs.go:276] 1 containers: [63a4476c2ace]
	I0819 11:53:35.359000   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 11:53:35.368958   19545 logs.go:276] 0 containers: []
	W0819 11:53:35.368971   19545 logs.go:278] No container was found matching "kindnet"
	I0819 11:53:35.369028   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 11:53:35.379425   19545 logs.go:276] 1 containers: [e6303d5f6b5f]
	I0819 11:53:35.379439   19545 logs.go:123] Gathering logs for Docker ...
	I0819 11:53:35.379445   19545 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 11:53:35.403802   19545 logs.go:123] Gathering logs for container status ...
	I0819 11:53:35.403810   19545 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 11:53:35.415362   19545 logs.go:123] Gathering logs for kube-apiserver [3e471984022f] ...
	I0819 11:53:35.415377   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e471984022f"
	I0819 11:53:35.429252   19545 logs.go:123] Gathering logs for etcd [0d32f5d90489] ...
	I0819 11:53:35.429262   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d32f5d90489"
	I0819 11:53:35.443272   19545 logs.go:123] Gathering logs for kube-proxy [ba925a3f77d6] ...
	I0819 11:53:35.443285   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba925a3f77d6"
	I0819 11:53:35.455845   19545 logs.go:123] Gathering logs for storage-provisioner [e6303d5f6b5f] ...
	I0819 11:53:35.455856   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6303d5f6b5f"
	I0819 11:53:35.468371   19545 logs.go:123] Gathering logs for coredns [5ec50c57271c] ...
	I0819 11:53:35.468383   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ec50c57271c"
	I0819 11:53:35.479762   19545 logs.go:123] Gathering logs for kube-scheduler [c3e3692e0c80] ...
	I0819 11:53:35.479772   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3e3692e0c80"
	I0819 11:53:35.494342   19545 logs.go:123] Gathering logs for kube-controller-manager [63a4476c2ace] ...
	I0819 11:53:35.494351   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 63a4476c2ace"
	I0819 11:53:35.511851   19545 logs.go:123] Gathering logs for kubelet ...
	I0819 11:53:35.511861   19545 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 11:53:35.547316   19545 logs.go:123] Gathering logs for dmesg ...
	I0819 11:53:35.547327   19545 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 11:53:35.552433   19545 logs.go:123] Gathering logs for describe nodes ...
	I0819 11:53:35.552441   19545 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 11:53:35.587872   19545 logs.go:123] Gathering logs for coredns [079939a9b1a7] ...
	I0819 11:53:35.587885   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 079939a9b1a7"
	I0819 11:53:38.099622   19545 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:53:43.100140   19545 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:53:43.100674   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 11:53:43.140627   19545 logs.go:276] 1 containers: [3e471984022f]
	I0819 11:53:43.140750   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 11:53:43.161561   19545 logs.go:276] 1 containers: [0d32f5d90489]
	I0819 11:53:43.161634   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 11:53:43.176361   19545 logs.go:276] 2 containers: [079939a9b1a7 5ec50c57271c]
	I0819 11:53:43.176419   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 11:53:43.189932   19545 logs.go:276] 1 containers: [c3e3692e0c80]
	I0819 11:53:43.190003   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 11:53:43.200440   19545 logs.go:276] 1 containers: [ba925a3f77d6]
	I0819 11:53:43.200514   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 11:53:43.211258   19545 logs.go:276] 1 containers: [63a4476c2ace]
	I0819 11:53:43.211316   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 11:53:43.221262   19545 logs.go:276] 0 containers: []
	W0819 11:53:43.221274   19545 logs.go:278] No container was found matching "kindnet"
	I0819 11:53:43.221331   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 11:53:43.231717   19545 logs.go:276] 1 containers: [e6303d5f6b5f]
	I0819 11:53:43.231734   19545 logs.go:123] Gathering logs for kube-proxy [ba925a3f77d6] ...
	I0819 11:53:43.231739   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba925a3f77d6"
	I0819 11:53:43.243783   19545 logs.go:123] Gathering logs for storage-provisioner [e6303d5f6b5f] ...
	I0819 11:53:43.243792   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6303d5f6b5f"
	I0819 11:53:43.255240   19545 logs.go:123] Gathering logs for container status ...
	I0819 11:53:43.255253   19545 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 11:53:43.267030   19545 logs.go:123] Gathering logs for coredns [5ec50c57271c] ...
	I0819 11:53:43.267042   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ec50c57271c"
	I0819 11:53:43.278707   19545 logs.go:123] Gathering logs for dmesg ...
	I0819 11:53:43.278719   19545 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 11:53:43.283662   19545 logs.go:123] Gathering logs for describe nodes ...
	I0819 11:53:43.283671   19545 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 11:53:43.330599   19545 logs.go:123] Gathering logs for kube-apiserver [3e471984022f] ...
	I0819 11:53:43.330612   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e471984022f"
	I0819 11:53:43.345244   19545 logs.go:123] Gathering logs for etcd [0d32f5d90489] ...
	I0819 11:53:43.345256   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d32f5d90489"
	I0819 11:53:43.359074   19545 logs.go:123] Gathering logs for coredns [079939a9b1a7] ...
	I0819 11:53:43.359084   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 079939a9b1a7"
	I0819 11:53:43.370315   19545 logs.go:123] Gathering logs for kube-scheduler [c3e3692e0c80] ...
	I0819 11:53:43.370328   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3e3692e0c80"
	I0819 11:53:43.388635   19545 logs.go:123] Gathering logs for kube-controller-manager [63a4476c2ace] ...
	I0819 11:53:43.388650   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 63a4476c2ace"
	I0819 11:53:43.406093   19545 logs.go:123] Gathering logs for kubelet ...
	I0819 11:53:43.406104   19545 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 11:53:43.441327   19545 logs.go:123] Gathering logs for Docker ...
	I0819 11:53:43.441350   19545 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 11:53:45.968187   19545 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:53:50.970606   19545 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:53:50.970981   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 11:53:51.010073   19545 logs.go:276] 1 containers: [3e471984022f]
	I0819 11:53:51.010187   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 11:53:51.031586   19545 logs.go:276] 1 containers: [0d32f5d90489]
	I0819 11:53:51.031691   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 11:53:51.046947   19545 logs.go:276] 2 containers: [079939a9b1a7 5ec50c57271c]
	I0819 11:53:51.047012   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 11:53:51.059803   19545 logs.go:276] 1 containers: [c3e3692e0c80]
	I0819 11:53:51.059876   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 11:53:51.070071   19545 logs.go:276] 1 containers: [ba925a3f77d6]
	I0819 11:53:51.070139   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 11:53:51.080627   19545 logs.go:276] 1 containers: [63a4476c2ace]
	I0819 11:53:51.080691   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 11:53:51.091095   19545 logs.go:276] 0 containers: []
	W0819 11:53:51.091106   19545 logs.go:278] No container was found matching "kindnet"
	I0819 11:53:51.091155   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 11:53:51.106931   19545 logs.go:276] 1 containers: [e6303d5f6b5f]
	I0819 11:53:51.106946   19545 logs.go:123] Gathering logs for storage-provisioner [e6303d5f6b5f] ...
	I0819 11:53:51.106950   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6303d5f6b5f"
	I0819 11:53:51.118328   19545 logs.go:123] Gathering logs for Docker ...
	I0819 11:53:51.118338   19545 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 11:53:51.142952   19545 logs.go:123] Gathering logs for kubelet ...
	I0819 11:53:51.142960   19545 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 11:53:51.176934   19545 logs.go:123] Gathering logs for describe nodes ...
	I0819 11:53:51.176941   19545 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 11:53:51.211625   19545 logs.go:123] Gathering logs for kube-apiserver [3e471984022f] ...
	I0819 11:53:51.211637   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e471984022f"
	I0819 11:53:51.226252   19545 logs.go:123] Gathering logs for etcd [0d32f5d90489] ...
	I0819 11:53:51.226265   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d32f5d90489"
	I0819 11:53:51.239734   19545 logs.go:123] Gathering logs for kube-proxy [ba925a3f77d6] ...
	I0819 11:53:51.239744   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba925a3f77d6"
	I0819 11:53:51.251920   19545 logs.go:123] Gathering logs for kube-controller-manager [63a4476c2ace] ...
	I0819 11:53:51.251932   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 63a4476c2ace"
	I0819 11:53:51.269436   19545 logs.go:123] Gathering logs for container status ...
	I0819 11:53:51.269446   19545 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 11:53:51.280815   19545 logs.go:123] Gathering logs for dmesg ...
	I0819 11:53:51.280829   19545 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 11:53:51.285254   19545 logs.go:123] Gathering logs for coredns [079939a9b1a7] ...
	I0819 11:53:51.285263   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 079939a9b1a7"
	I0819 11:53:51.296842   19545 logs.go:123] Gathering logs for coredns [5ec50c57271c] ...
	I0819 11:53:51.296853   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ec50c57271c"
	I0819 11:53:51.309704   19545 logs.go:123] Gathering logs for kube-scheduler [c3e3692e0c80] ...
	I0819 11:53:51.309715   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3e3692e0c80"
	I0819 11:53:53.825837   19545 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:53:58.828538   19545 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:53:58.829120   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 11:53:58.871285   19545 logs.go:276] 1 containers: [3e471984022f]
	I0819 11:53:58.871413   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 11:53:58.893356   19545 logs.go:276] 1 containers: [0d32f5d90489]
	I0819 11:53:58.893451   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 11:53:58.908157   19545 logs.go:276] 2 containers: [079939a9b1a7 5ec50c57271c]
	I0819 11:53:58.908222   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 11:53:58.920068   19545 logs.go:276] 1 containers: [c3e3692e0c80]
	I0819 11:53:58.920149   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 11:53:58.931305   19545 logs.go:276] 1 containers: [ba925a3f77d6]
	I0819 11:53:58.931379   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 11:53:58.941789   19545 logs.go:276] 1 containers: [63a4476c2ace]
	I0819 11:53:58.941853   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 11:53:58.951686   19545 logs.go:276] 0 containers: []
	W0819 11:53:58.951697   19545 logs.go:278] No container was found matching "kindnet"
	I0819 11:53:58.951749   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 11:53:58.962093   19545 logs.go:276] 1 containers: [e6303d5f6b5f]
	I0819 11:53:58.962110   19545 logs.go:123] Gathering logs for kube-proxy [ba925a3f77d6] ...
	I0819 11:53:58.962115   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba925a3f77d6"
	I0819 11:53:58.973730   19545 logs.go:123] Gathering logs for kube-controller-manager [63a4476c2ace] ...
	I0819 11:53:58.973743   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 63a4476c2ace"
	I0819 11:53:58.990641   19545 logs.go:123] Gathering logs for storage-provisioner [e6303d5f6b5f] ...
	I0819 11:53:58.990656   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6303d5f6b5f"
	I0819 11:53:59.002205   19545 logs.go:123] Gathering logs for Docker ...
	I0819 11:53:59.002217   19545 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 11:53:59.025321   19545 logs.go:123] Gathering logs for dmesg ...
	I0819 11:53:59.025331   19545 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 11:53:59.029360   19545 logs.go:123] Gathering logs for describe nodes ...
	I0819 11:53:59.029368   19545 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 11:53:59.067048   19545 logs.go:123] Gathering logs for etcd [0d32f5d90489] ...
	I0819 11:53:59.067061   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d32f5d90489"
	I0819 11:53:59.081140   19545 logs.go:123] Gathering logs for coredns [5ec50c57271c] ...
	I0819 11:53:59.081152   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ec50c57271c"
	I0819 11:53:59.092736   19545 logs.go:123] Gathering logs for container status ...
	I0819 11:53:59.092747   19545 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 11:53:59.103744   19545 logs.go:123] Gathering logs for kubelet ...
	I0819 11:53:59.103758   19545 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 11:53:59.138649   19545 logs.go:123] Gathering logs for kube-apiserver [3e471984022f] ...
	I0819 11:53:59.138656   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e471984022f"
	I0819 11:53:59.156799   19545 logs.go:123] Gathering logs for coredns [079939a9b1a7] ...
	I0819 11:53:59.156812   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 079939a9b1a7"
	I0819 11:53:59.167646   19545 logs.go:123] Gathering logs for kube-scheduler [c3e3692e0c80] ...
	I0819 11:53:59.167658   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3e3692e0c80"
	I0819 11:54:01.683707   19545 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:54:06.686414   19545 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:54:06.686841   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 11:54:06.724998   19545 logs.go:276] 1 containers: [3e471984022f]
	I0819 11:54:06.725137   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 11:54:06.745531   19545 logs.go:276] 1 containers: [0d32f5d90489]
	I0819 11:54:06.745641   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 11:54:06.760270   19545 logs.go:276] 2 containers: [079939a9b1a7 5ec50c57271c]
	I0819 11:54:06.760347   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 11:54:06.772907   19545 logs.go:276] 1 containers: [c3e3692e0c80]
	I0819 11:54:06.772977   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 11:54:06.783613   19545 logs.go:276] 1 containers: [ba925a3f77d6]
	I0819 11:54:06.783678   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 11:54:06.794014   19545 logs.go:276] 1 containers: [63a4476c2ace]
	I0819 11:54:06.794081   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 11:54:06.803973   19545 logs.go:276] 0 containers: []
	W0819 11:54:06.803982   19545 logs.go:278] No container was found matching "kindnet"
	I0819 11:54:06.804034   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 11:54:06.814847   19545 logs.go:276] 1 containers: [e6303d5f6b5f]
	I0819 11:54:06.814862   19545 logs.go:123] Gathering logs for kube-proxy [ba925a3f77d6] ...
	I0819 11:54:06.814870   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba925a3f77d6"
	I0819 11:54:06.825844   19545 logs.go:123] Gathering logs for kube-controller-manager [63a4476c2ace] ...
	I0819 11:54:06.825853   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 63a4476c2ace"
	I0819 11:54:06.842536   19545 logs.go:123] Gathering logs for container status ...
	I0819 11:54:06.842546   19545 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 11:54:06.854120   19545 logs.go:123] Gathering logs for kubelet ...
	I0819 11:54:06.854130   19545 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 11:54:06.886606   19545 logs.go:123] Gathering logs for dmesg ...
	I0819 11:54:06.886613   19545 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 11:54:06.890979   19545 logs.go:123] Gathering logs for kube-apiserver [3e471984022f] ...
	I0819 11:54:06.890985   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e471984022f"
	I0819 11:54:06.905253   19545 logs.go:123] Gathering logs for etcd [0d32f5d90489] ...
	I0819 11:54:06.905262   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d32f5d90489"
	I0819 11:54:06.919147   19545 logs.go:123] Gathering logs for kube-scheduler [c3e3692e0c80] ...
	I0819 11:54:06.919155   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3e3692e0c80"
	I0819 11:54:06.933881   19545 logs.go:123] Gathering logs for describe nodes ...
	I0819 11:54:06.933891   19545 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 11:54:06.967608   19545 logs.go:123] Gathering logs for coredns [079939a9b1a7] ...
	I0819 11:54:06.967616   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 079939a9b1a7"
	I0819 11:54:06.983575   19545 logs.go:123] Gathering logs for coredns [5ec50c57271c] ...
	I0819 11:54:06.983586   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ec50c57271c"
	I0819 11:54:06.994907   19545 logs.go:123] Gathering logs for storage-provisioner [e6303d5f6b5f] ...
	I0819 11:54:06.994918   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6303d5f6b5f"
	I0819 11:54:07.007474   19545 logs.go:123] Gathering logs for Docker ...
	I0819 11:54:07.007483   19545 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
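Note: the lines above are one full iteration of a repeating probe-and-collect cycle. Each attempt against https://10.0.2.15:8443/healthz ends after roughly five seconds with "context deadline exceeded (Client.Timeout exceeded while awaiting headers)", after which the runner enumerates the control-plane containers and tails their logs before retrying. The following is a minimal, hypothetical Go sketch of such a probe (not minikube's actual api_server.go code); it assumes a self-signed apiserver certificate, which is why verification is skipped:

```go
// Illustrative sketch only. It shows how a 5-second client timeout
// against an unresponsive /healthz endpoint yields the exact error
// string logged above.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func checkHealthz(url string) error {
	client := &http.Client{
		Timeout: 5 * time.Second, // matches the ~5s gap between attempts in the log
		Transport: &http.Transport{
			// Assumption for this sketch: the apiserver presents a
			// self-signed certificate, so the probe skips verification.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get(url)
	if err != nil {
		// e.g. Get "https://...": context deadline exceeded
		// (Client.Timeout exceeded while awaiting headers)
		return err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("healthz returned %s", resp.Status)
	}
	return nil
}

func main() {
	for {
		if err := checkHealthz("https://10.0.2.15:8443/healthz"); err != nil {
			fmt.Println("stopped:", err)
		} else {
			fmt.Println("apiserver healthy")
			return
		}
		time.Sleep(2 * time.Second) // the log shows ~2-3s between cycles
	}
}
```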
	I0819 11:54:09.534474   19545 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:54:14.536752   19545 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:54:14.536960   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 11:54:14.556429   19545 logs.go:276] 1 containers: [3e471984022f]
	I0819 11:54:14.556518   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 11:54:14.570130   19545 logs.go:276] 1 containers: [0d32f5d90489]
	I0819 11:54:14.570207   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 11:54:14.581676   19545 logs.go:276] 2 containers: [079939a9b1a7 5ec50c57271c]
	I0819 11:54:14.581747   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 11:54:14.592263   19545 logs.go:276] 1 containers: [c3e3692e0c80]
	I0819 11:54:14.592322   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 11:54:14.602309   19545 logs.go:276] 1 containers: [ba925a3f77d6]
	I0819 11:54:14.602370   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 11:54:14.612895   19545 logs.go:276] 1 containers: [63a4476c2ace]
	I0819 11:54:14.612962   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 11:54:14.625312   19545 logs.go:276] 0 containers: []
	W0819 11:54:14.625324   19545 logs.go:278] No container was found matching "kindnet"
	I0819 11:54:14.625383   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 11:54:14.642169   19545 logs.go:276] 1 containers: [e6303d5f6b5f]
	I0819 11:54:14.642186   19545 logs.go:123] Gathering logs for describe nodes ...
	I0819 11:54:14.642191   19545 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 11:54:14.680272   19545 logs.go:123] Gathering logs for kube-apiserver [3e471984022f] ...
	I0819 11:54:14.680282   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e471984022f"
	I0819 11:54:14.694630   19545 logs.go:123] Gathering logs for etcd [0d32f5d90489] ...
	I0819 11:54:14.694644   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d32f5d90489"
	I0819 11:54:14.709058   19545 logs.go:123] Gathering logs for coredns [079939a9b1a7] ...
	I0819 11:54:14.709069   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 079939a9b1a7"
	I0819 11:54:14.720679   19545 logs.go:123] Gathering logs for coredns [5ec50c57271c] ...
	I0819 11:54:14.720688   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ec50c57271c"
	I0819 11:54:14.732027   19545 logs.go:123] Gathering logs for storage-provisioner [e6303d5f6b5f] ...
	I0819 11:54:14.732037   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6303d5f6b5f"
	I0819 11:54:14.746389   19545 logs.go:123] Gathering logs for container status ...
	I0819 11:54:14.746404   19545 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 11:54:14.758586   19545 logs.go:123] Gathering logs for kubelet ...
	I0819 11:54:14.758597   19545 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 11:54:14.791956   19545 logs.go:123] Gathering logs for dmesg ...
	I0819 11:54:14.791966   19545 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 11:54:14.796233   19545 logs.go:123] Gathering logs for kube-scheduler [c3e3692e0c80] ...
	I0819 11:54:14.796241   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3e3692e0c80"
	I0819 11:54:14.810592   19545 logs.go:123] Gathering logs for kube-proxy [ba925a3f77d6] ...
	I0819 11:54:14.810606   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba925a3f77d6"
	I0819 11:54:14.823691   19545 logs.go:123] Gathering logs for kube-controller-manager [63a4476c2ace] ...
	I0819 11:54:14.823703   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 63a4476c2ace"
	I0819 11:54:14.841235   19545 logs.go:123] Gathering logs for Docker ...
	I0819 11:54:14.841246   19545 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 11:54:17.367641   19545 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:54:22.370245   19545 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:54:22.370533   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 11:54:22.396338   19545 logs.go:276] 1 containers: [3e471984022f]
	I0819 11:54:22.396456   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 11:54:22.413151   19545 logs.go:276] 1 containers: [0d32f5d90489]
	I0819 11:54:22.413234   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 11:54:22.426899   19545 logs.go:276] 2 containers: [079939a9b1a7 5ec50c57271c]
	I0819 11:54:22.426970   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 11:54:22.438744   19545 logs.go:276] 1 containers: [c3e3692e0c80]
	I0819 11:54:22.438812   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 11:54:22.450612   19545 logs.go:276] 1 containers: [ba925a3f77d6]
	I0819 11:54:22.450686   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 11:54:22.460658   19545 logs.go:276] 1 containers: [63a4476c2ace]
	I0819 11:54:22.460716   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 11:54:22.470871   19545 logs.go:276] 0 containers: []
	W0819 11:54:22.470883   19545 logs.go:278] No container was found matching "kindnet"
	I0819 11:54:22.470929   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 11:54:22.480823   19545 logs.go:276] 1 containers: [e6303d5f6b5f]
	I0819 11:54:22.480841   19545 logs.go:123] Gathering logs for dmesg ...
	I0819 11:54:22.480847   19545 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 11:54:22.485505   19545 logs.go:123] Gathering logs for describe nodes ...
	I0819 11:54:22.485517   19545 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 11:54:22.520424   19545 logs.go:123] Gathering logs for kube-apiserver [3e471984022f] ...
	I0819 11:54:22.520435   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e471984022f"
	I0819 11:54:22.534998   19545 logs.go:123] Gathering logs for storage-provisioner [e6303d5f6b5f] ...
	I0819 11:54:22.535008   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6303d5f6b5f"
	I0819 11:54:22.546181   19545 logs.go:123] Gathering logs for Docker ...
	I0819 11:54:22.546195   19545 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 11:54:22.571252   19545 logs.go:123] Gathering logs for kubelet ...
	I0819 11:54:22.571259   19545 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 11:54:22.605614   19545 logs.go:123] Gathering logs for etcd [0d32f5d90489] ...
	I0819 11:54:22.605623   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d32f5d90489"
	I0819 11:54:22.619418   19545 logs.go:123] Gathering logs for coredns [079939a9b1a7] ...
	I0819 11:54:22.619429   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 079939a9b1a7"
	I0819 11:54:22.631264   19545 logs.go:123] Gathering logs for coredns [5ec50c57271c] ...
	I0819 11:54:22.631277   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ec50c57271c"
	I0819 11:54:22.642700   19545 logs.go:123] Gathering logs for kube-scheduler [c3e3692e0c80] ...
	I0819 11:54:22.642715   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3e3692e0c80"
	I0819 11:54:22.658553   19545 logs.go:123] Gathering logs for kube-proxy [ba925a3f77d6] ...
	I0819 11:54:22.658564   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba925a3f77d6"
	I0819 11:54:22.673319   19545 logs.go:123] Gathering logs for kube-controller-manager [63a4476c2ace] ...
	I0819 11:54:22.673329   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 63a4476c2ace"
	I0819 11:54:22.700110   19545 logs.go:123] Gathering logs for container status ...
	I0819 11:54:22.700121   19545 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 11:54:25.212137   19545 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:54:30.214859   19545 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:54:30.215322   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 11:54:30.263280   19545 logs.go:276] 1 containers: [3e471984022f]
	I0819 11:54:30.263417   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 11:54:30.283984   19545 logs.go:276] 1 containers: [0d32f5d90489]
	I0819 11:54:30.284060   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 11:54:30.298466   19545 logs.go:276] 2 containers: [079939a9b1a7 5ec50c57271c]
	I0819 11:54:30.298540   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 11:54:30.310848   19545 logs.go:276] 1 containers: [c3e3692e0c80]
	I0819 11:54:30.310915   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 11:54:30.322020   19545 logs.go:276] 1 containers: [ba925a3f77d6]
	I0819 11:54:30.322094   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 11:54:30.332473   19545 logs.go:276] 1 containers: [63a4476c2ace]
	I0819 11:54:30.332543   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 11:54:30.342823   19545 logs.go:276] 0 containers: []
	W0819 11:54:30.342833   19545 logs.go:278] No container was found matching "kindnet"
	I0819 11:54:30.342881   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 11:54:30.353485   19545 logs.go:276] 1 containers: [e6303d5f6b5f]
	I0819 11:54:30.353498   19545 logs.go:123] Gathering logs for kube-scheduler [c3e3692e0c80] ...
	I0819 11:54:30.353502   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3e3692e0c80"
	I0819 11:54:30.368061   19545 logs.go:123] Gathering logs for kube-proxy [ba925a3f77d6] ...
	I0819 11:54:30.368074   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba925a3f77d6"
	I0819 11:54:30.379756   19545 logs.go:123] Gathering logs for kube-controller-manager [63a4476c2ace] ...
	I0819 11:54:30.379767   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 63a4476c2ace"
	I0819 11:54:30.399450   19545 logs.go:123] Gathering logs for storage-provisioner [e6303d5f6b5f] ...
	I0819 11:54:30.399462   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6303d5f6b5f"
	I0819 11:54:30.411130   19545 logs.go:123] Gathering logs for dmesg ...
	I0819 11:54:30.411143   19545 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 11:54:30.415796   19545 logs.go:123] Gathering logs for describe nodes ...
	I0819 11:54:30.415806   19545 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 11:54:30.452678   19545 logs.go:123] Gathering logs for kube-apiserver [3e471984022f] ...
	I0819 11:54:30.452688   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e471984022f"
	I0819 11:54:30.470998   19545 logs.go:123] Gathering logs for etcd [0d32f5d90489] ...
	I0819 11:54:30.471010   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d32f5d90489"
	I0819 11:54:30.485349   19545 logs.go:123] Gathering logs for container status ...
	I0819 11:54:30.485360   19545 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 11:54:30.496969   19545 logs.go:123] Gathering logs for kubelet ...
	I0819 11:54:30.496979   19545 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 11:54:30.529163   19545 logs.go:123] Gathering logs for coredns [079939a9b1a7] ...
	I0819 11:54:30.529172   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 079939a9b1a7"
	I0819 11:54:30.541124   19545 logs.go:123] Gathering logs for coredns [5ec50c57271c] ...
	I0819 11:54:30.541133   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ec50c57271c"
	I0819 11:54:30.552775   19545 logs.go:123] Gathering logs for Docker ...
	I0819 11:54:30.552785   19545 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 11:54:33.077693   19545 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:54:38.080342   19545 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:54:38.080697   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 11:54:38.119134   19545 logs.go:276] 1 containers: [3e471984022f]
	I0819 11:54:38.119264   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 11:54:38.140313   19545 logs.go:276] 1 containers: [0d32f5d90489]
	I0819 11:54:38.140423   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 11:54:38.156073   19545 logs.go:276] 4 containers: [109880862a1d 2b7a14e32ff2 079939a9b1a7 5ec50c57271c]
	I0819 11:54:38.156154   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 11:54:38.168660   19545 logs.go:276] 1 containers: [c3e3692e0c80]
	I0819 11:54:38.168723   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 11:54:38.179679   19545 logs.go:276] 1 containers: [ba925a3f77d6]
	I0819 11:54:38.179744   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 11:54:38.190321   19545 logs.go:276] 1 containers: [63a4476c2ace]
	I0819 11:54:38.190376   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 11:54:38.200634   19545 logs.go:276] 0 containers: []
	W0819 11:54:38.200646   19545 logs.go:278] No container was found matching "kindnet"
	I0819 11:54:38.200699   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 11:54:38.211328   19545 logs.go:276] 1 containers: [e6303d5f6b5f]
	I0819 11:54:38.211344   19545 logs.go:123] Gathering logs for container status ...
	I0819 11:54:38.211349   19545 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 11:54:38.222942   19545 logs.go:123] Gathering logs for describe nodes ...
	I0819 11:54:38.222955   19545 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 11:54:38.261699   19545 logs.go:123] Gathering logs for coredns [109880862a1d] ...
	I0819 11:54:38.261711   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 109880862a1d"
	I0819 11:54:38.272519   19545 logs.go:123] Gathering logs for kube-apiserver [3e471984022f] ...
	I0819 11:54:38.272530   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e471984022f"
	I0819 11:54:38.287190   19545 logs.go:123] Gathering logs for coredns [2b7a14e32ff2] ...
	I0819 11:54:38.287203   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2b7a14e32ff2"
	I0819 11:54:38.298581   19545 logs.go:123] Gathering logs for kubelet ...
	I0819 11:54:38.298591   19545 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 11:54:38.331601   19545 logs.go:123] Gathering logs for dmesg ...
	I0819 11:54:38.331611   19545 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 11:54:38.335923   19545 logs.go:123] Gathering logs for storage-provisioner [e6303d5f6b5f] ...
	I0819 11:54:38.335931   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6303d5f6b5f"
	I0819 11:54:38.348341   19545 logs.go:123] Gathering logs for Docker ...
	I0819 11:54:38.348354   19545 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 11:54:38.373587   19545 logs.go:123] Gathering logs for kube-proxy [ba925a3f77d6] ...
	I0819 11:54:38.373598   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba925a3f77d6"
	I0819 11:54:38.385446   19545 logs.go:123] Gathering logs for kube-controller-manager [63a4476c2ace] ...
	I0819 11:54:38.385460   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 63a4476c2ace"
	I0819 11:54:38.403237   19545 logs.go:123] Gathering logs for coredns [5ec50c57271c] ...
	I0819 11:54:38.403249   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ec50c57271c"
	I0819 11:54:38.414926   19545 logs.go:123] Gathering logs for kube-scheduler [c3e3692e0c80] ...
	I0819 11:54:38.414939   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3e3692e0c80"
	I0819 11:54:38.430610   19545 logs.go:123] Gathering logs for etcd [0d32f5d90489] ...
	I0819 11:54:38.430621   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d32f5d90489"
	I0819 11:54:38.447428   19545 logs.go:123] Gathering logs for coredns [079939a9b1a7] ...
	I0819 11:54:38.447438   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 079939a9b1a7"
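Note: starting with the 11:54:38 cycle the coredns filter returns four containers (109880862a1d and 2b7a14e32ff2 alongside 079939a9b1a7 and 5ec50c57271c) where earlier cycles returned two, consistent with the pods being restarted while the apiserver is unreachable; since `docker ps -a` also lists exited containers, the old IDs remain visible. A hedged Go approximation of the enumeration step (not minikube's logs.go, and assuming docker is on PATH):

```go
// Sketch of the container-enumeration step: list all containers whose
// name matches a k8s_<component> prefix and collect their IDs, as the
// logged command
//   docker ps -a --filter=name=k8s_<component> --format={{.ID}}
// does.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func containerIDs(component string) ([]string, error) {
	out, err := exec.Command("docker", "ps", "-a",
		"--filter", "name=k8s_"+component,
		"--format", "{{.ID}}").Output()
	if err != nil {
		return nil, err
	}
	// One ID per line; Fields also tolerates a trailing newline.
	return strings.Fields(string(out)), nil
}

func main() {
	for _, c := range []string{"kube-apiserver", "etcd", "coredns",
		"kube-scheduler", "kube-proxy", "kube-controller-manager",
		"kindnet", "storage-provisioner"} {
		ids, err := containerIDs(c)
		if err != nil {
			fmt.Println(c, "error:", err)
			continue
		}
		fmt.Printf("%d containers for %s: %v\n", len(ids), c, ids)
	}
}
```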
	I0819 11:54:40.969271   19545 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:54:45.971982   19545 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:54:45.972428   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 11:54:46.010788   19545 logs.go:276] 1 containers: [3e471984022f]
	I0819 11:54:46.010916   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 11:54:46.032432   19545 logs.go:276] 1 containers: [0d32f5d90489]
	I0819 11:54:46.032522   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 11:54:46.048212   19545 logs.go:276] 4 containers: [109880862a1d 2b7a14e32ff2 079939a9b1a7 5ec50c57271c]
	I0819 11:54:46.048292   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 11:54:46.060432   19545 logs.go:276] 1 containers: [c3e3692e0c80]
	I0819 11:54:46.060497   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 11:54:46.071102   19545 logs.go:276] 1 containers: [ba925a3f77d6]
	I0819 11:54:46.071170   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 11:54:46.084727   19545 logs.go:276] 1 containers: [63a4476c2ace]
	I0819 11:54:46.084796   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 11:54:46.094802   19545 logs.go:276] 0 containers: []
	W0819 11:54:46.094816   19545 logs.go:278] No container was found matching "kindnet"
	I0819 11:54:46.094875   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 11:54:46.105264   19545 logs.go:276] 1 containers: [e6303d5f6b5f]
	I0819 11:54:46.105282   19545 logs.go:123] Gathering logs for kube-apiserver [3e471984022f] ...
	I0819 11:54:46.105288   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e471984022f"
	I0819 11:54:46.119388   19545 logs.go:123] Gathering logs for Docker ...
	I0819 11:54:46.119400   19545 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 11:54:46.143303   19545 logs.go:123] Gathering logs for describe nodes ...
	I0819 11:54:46.143311   19545 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 11:54:46.178459   19545 logs.go:123] Gathering logs for coredns [079939a9b1a7] ...
	I0819 11:54:46.178473   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 079939a9b1a7"
	I0819 11:54:46.191421   19545 logs.go:123] Gathering logs for coredns [5ec50c57271c] ...
	I0819 11:54:46.191434   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ec50c57271c"
	I0819 11:54:46.205338   19545 logs.go:123] Gathering logs for kube-proxy [ba925a3f77d6] ...
	I0819 11:54:46.205348   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba925a3f77d6"
	I0819 11:54:46.217866   19545 logs.go:123] Gathering logs for kube-controller-manager [63a4476c2ace] ...
	I0819 11:54:46.217878   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 63a4476c2ace"
	I0819 11:54:46.235348   19545 logs.go:123] Gathering logs for coredns [2b7a14e32ff2] ...
	I0819 11:54:46.235358   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2b7a14e32ff2"
	I0819 11:54:46.246985   19545 logs.go:123] Gathering logs for etcd [0d32f5d90489] ...
	I0819 11:54:46.246998   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d32f5d90489"
	I0819 11:54:46.261020   19545 logs.go:123] Gathering logs for coredns [109880862a1d] ...
	I0819 11:54:46.261029   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 109880862a1d"
	I0819 11:54:46.271975   19545 logs.go:123] Gathering logs for storage-provisioner [e6303d5f6b5f] ...
	I0819 11:54:46.271984   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6303d5f6b5f"
	I0819 11:54:46.286918   19545 logs.go:123] Gathering logs for dmesg ...
	I0819 11:54:46.286933   19545 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 11:54:46.291565   19545 logs.go:123] Gathering logs for kube-scheduler [c3e3692e0c80] ...
	I0819 11:54:46.291574   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3e3692e0c80"
	I0819 11:54:46.305962   19545 logs.go:123] Gathering logs for container status ...
	I0819 11:54:46.305975   19545 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 11:54:46.317815   19545 logs.go:123] Gathering logs for kubelet ...
	I0819 11:54:46.317827   19545 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 11:54:48.852508   19545 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:54:53.855259   19545 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:54:53.855677   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 11:54:53.906927   19545 logs.go:276] 1 containers: [3e471984022f]
	I0819 11:54:53.907043   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 11:54:53.926752   19545 logs.go:276] 1 containers: [0d32f5d90489]
	I0819 11:54:53.926837   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 11:54:53.941537   19545 logs.go:276] 4 containers: [109880862a1d 2b7a14e32ff2 079939a9b1a7 5ec50c57271c]
	I0819 11:54:53.941611   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 11:54:53.953602   19545 logs.go:276] 1 containers: [c3e3692e0c80]
	I0819 11:54:53.953670   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 11:54:53.964000   19545 logs.go:276] 1 containers: [ba925a3f77d6]
	I0819 11:54:53.964059   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 11:54:53.974651   19545 logs.go:276] 1 containers: [63a4476c2ace]
	I0819 11:54:53.974721   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 11:54:53.986471   19545 logs.go:276] 0 containers: []
	W0819 11:54:53.986482   19545 logs.go:278] No container was found matching "kindnet"
	I0819 11:54:53.986534   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 11:54:53.997538   19545 logs.go:276] 1 containers: [e6303d5f6b5f]
	I0819 11:54:53.997554   19545 logs.go:123] Gathering logs for dmesg ...
	I0819 11:54:53.997560   19545 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 11:54:54.002046   19545 logs.go:123] Gathering logs for kube-apiserver [3e471984022f] ...
	I0819 11:54:54.002054   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e471984022f"
	I0819 11:54:54.016722   19545 logs.go:123] Gathering logs for container status ...
	I0819 11:54:54.016735   19545 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 11:54:54.028495   19545 logs.go:123] Gathering logs for kube-scheduler [c3e3692e0c80] ...
	I0819 11:54:54.028507   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3e3692e0c80"
	I0819 11:54:54.042765   19545 logs.go:123] Gathering logs for kube-controller-manager [63a4476c2ace] ...
	I0819 11:54:54.042777   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 63a4476c2ace"
	I0819 11:54:54.060150   19545 logs.go:123] Gathering logs for kubelet ...
	I0819 11:54:54.060161   19545 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 11:54:54.094008   19545 logs.go:123] Gathering logs for etcd [0d32f5d90489] ...
	I0819 11:54:54.094016   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d32f5d90489"
	I0819 11:54:54.108123   19545 logs.go:123] Gathering logs for coredns [109880862a1d] ...
	I0819 11:54:54.108135   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 109880862a1d"
	I0819 11:54:54.119305   19545 logs.go:123] Gathering logs for coredns [079939a9b1a7] ...
	I0819 11:54:54.119318   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 079939a9b1a7"
	I0819 11:54:54.131058   19545 logs.go:123] Gathering logs for coredns [5ec50c57271c] ...
	I0819 11:54:54.131070   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ec50c57271c"
	I0819 11:54:54.142575   19545 logs.go:123] Gathering logs for describe nodes ...
	I0819 11:54:54.142587   19545 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 11:54:54.184753   19545 logs.go:123] Gathering logs for kube-proxy [ba925a3f77d6] ...
	I0819 11:54:54.184764   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba925a3f77d6"
	I0819 11:54:54.200765   19545 logs.go:123] Gathering logs for Docker ...
	I0819 11:54:54.200776   19545 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 11:54:54.224317   19545 logs.go:123] Gathering logs for coredns [2b7a14e32ff2] ...
	I0819 11:54:54.224323   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2b7a14e32ff2"
	I0819 11:54:54.235632   19545 logs.go:123] Gathering logs for storage-provisioner [e6303d5f6b5f] ...
	I0819 11:54:54.235640   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6303d5f6b5f"
	I0819 11:54:56.749168   19545 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:55:01.749919   19545 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:55:01.749982   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 11:55:01.761987   19545 logs.go:276] 1 containers: [3e471984022f]
	I0819 11:55:01.762049   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 11:55:01.773768   19545 logs.go:276] 1 containers: [0d32f5d90489]
	I0819 11:55:01.773832   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 11:55:01.785813   19545 logs.go:276] 4 containers: [109880862a1d 2b7a14e32ff2 079939a9b1a7 5ec50c57271c]
	I0819 11:55:01.785866   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 11:55:01.796578   19545 logs.go:276] 1 containers: [c3e3692e0c80]
	I0819 11:55:01.796637   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 11:55:01.808389   19545 logs.go:276] 1 containers: [ba925a3f77d6]
	I0819 11:55:01.808436   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 11:55:01.820539   19545 logs.go:276] 1 containers: [63a4476c2ace]
	I0819 11:55:01.820594   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 11:55:01.831017   19545 logs.go:276] 0 containers: []
	W0819 11:55:01.831027   19545 logs.go:278] No container was found matching "kindnet"
	I0819 11:55:01.831075   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 11:55:01.842168   19545 logs.go:276] 1 containers: [e6303d5f6b5f]
	I0819 11:55:01.842186   19545 logs.go:123] Gathering logs for coredns [5ec50c57271c] ...
	I0819 11:55:01.842192   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ec50c57271c"
	I0819 11:55:01.854959   19545 logs.go:123] Gathering logs for container status ...
	I0819 11:55:01.854970   19545 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 11:55:01.868809   19545 logs.go:123] Gathering logs for kubelet ...
	I0819 11:55:01.868820   19545 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 11:55:01.903924   19545 logs.go:123] Gathering logs for describe nodes ...
	I0819 11:55:01.903941   19545 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 11:55:01.949778   19545 logs.go:123] Gathering logs for kube-apiserver [3e471984022f] ...
	I0819 11:55:01.949789   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e471984022f"
	I0819 11:55:01.967321   19545 logs.go:123] Gathering logs for coredns [2b7a14e32ff2] ...
	I0819 11:55:01.967333   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2b7a14e32ff2"
	I0819 11:55:01.979177   19545 logs.go:123] Gathering logs for Docker ...
	I0819 11:55:01.979187   19545 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 11:55:02.004344   19545 logs.go:123] Gathering logs for dmesg ...
	I0819 11:55:02.004360   19545 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 11:55:02.009064   19545 logs.go:123] Gathering logs for etcd [0d32f5d90489] ...
	I0819 11:55:02.009075   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d32f5d90489"
	I0819 11:55:02.023784   19545 logs.go:123] Gathering logs for kube-proxy [ba925a3f77d6] ...
	I0819 11:55:02.023795   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba925a3f77d6"
	I0819 11:55:02.036848   19545 logs.go:123] Gathering logs for kube-controller-manager [63a4476c2ace] ...
	I0819 11:55:02.036858   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 63a4476c2ace"
	I0819 11:55:02.056592   19545 logs.go:123] Gathering logs for storage-provisioner [e6303d5f6b5f] ...
	I0819 11:55:02.056602   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6303d5f6b5f"
	I0819 11:55:02.068532   19545 logs.go:123] Gathering logs for coredns [109880862a1d] ...
	I0819 11:55:02.068543   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 109880862a1d"
	I0819 11:55:02.080985   19545 logs.go:123] Gathering logs for coredns [079939a9b1a7] ...
	I0819 11:55:02.080996   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 079939a9b1a7"
	I0819 11:55:02.094957   19545 logs.go:123] Gathering logs for kube-scheduler [c3e3692e0c80] ...
	I0819 11:55:02.094968   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3e3692e0c80"
	I0819 11:55:04.612710   19545 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:55:09.615108   19545 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:55:09.615485   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 11:55:09.671971   19545 logs.go:276] 1 containers: [3e471984022f]
	I0819 11:55:09.672065   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 11:55:09.686931   19545 logs.go:276] 1 containers: [0d32f5d90489]
	I0819 11:55:09.687024   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 11:55:09.699039   19545 logs.go:276] 4 containers: [109880862a1d 2b7a14e32ff2 079939a9b1a7 5ec50c57271c]
	I0819 11:55:09.699104   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 11:55:09.709681   19545 logs.go:276] 1 containers: [c3e3692e0c80]
	I0819 11:55:09.709755   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 11:55:09.724144   19545 logs.go:276] 1 containers: [ba925a3f77d6]
	I0819 11:55:09.724213   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 11:55:09.734469   19545 logs.go:276] 1 containers: [63a4476c2ace]
	I0819 11:55:09.734535   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 11:55:09.744690   19545 logs.go:276] 0 containers: []
	W0819 11:55:09.744700   19545 logs.go:278] No container was found matching "kindnet"
	I0819 11:55:09.744750   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 11:55:09.760387   19545 logs.go:276] 1 containers: [e6303d5f6b5f]
	I0819 11:55:09.760401   19545 logs.go:123] Gathering logs for kube-scheduler [c3e3692e0c80] ...
	I0819 11:55:09.760406   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3e3692e0c80"
	I0819 11:55:09.774993   19545 logs.go:123] Gathering logs for dmesg ...
	I0819 11:55:09.775003   19545 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 11:55:09.779411   19545 logs.go:123] Gathering logs for coredns [2b7a14e32ff2] ...
	I0819 11:55:09.779418   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2b7a14e32ff2"
	I0819 11:55:09.791310   19545 logs.go:123] Gathering logs for etcd [0d32f5d90489] ...
	I0819 11:55:09.791321   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d32f5d90489"
	I0819 11:55:09.805508   19545 logs.go:123] Gathering logs for coredns [5ec50c57271c] ...
	I0819 11:55:09.805519   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ec50c57271c"
	I0819 11:55:09.817199   19545 logs.go:123] Gathering logs for kubelet ...
	I0819 11:55:09.817213   19545 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 11:55:09.851744   19545 logs.go:123] Gathering logs for describe nodes ...
	I0819 11:55:09.851754   19545 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 11:55:09.885704   19545 logs.go:123] Gathering logs for kube-proxy [ba925a3f77d6] ...
	I0819 11:55:09.885718   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba925a3f77d6"
	I0819 11:55:09.897700   19545 logs.go:123] Gathering logs for storage-provisioner [e6303d5f6b5f] ...
	I0819 11:55:09.897713   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6303d5f6b5f"
	I0819 11:55:09.908724   19545 logs.go:123] Gathering logs for container status ...
	I0819 11:55:09.908733   19545 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 11:55:09.924431   19545 logs.go:123] Gathering logs for kube-apiserver [3e471984022f] ...
	I0819 11:55:09.924444   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e471984022f"
	I0819 11:55:09.939230   19545 logs.go:123] Gathering logs for coredns [079939a9b1a7] ...
	I0819 11:55:09.939239   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 079939a9b1a7"
	I0819 11:55:09.951212   19545 logs.go:123] Gathering logs for Docker ...
	I0819 11:55:09.951224   19545 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 11:55:09.976889   19545 logs.go:123] Gathering logs for coredns [109880862a1d] ...
	I0819 11:55:09.976900   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 109880862a1d"
	I0819 11:55:09.988604   19545 logs.go:123] Gathering logs for kube-controller-manager [63a4476c2ace] ...
	I0819 11:55:09.988615   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 63a4476c2ace"
	I0819 11:55:12.507577   19545 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:55:17.510179   19545 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:55:17.510579   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 11:55:17.546094   19545 logs.go:276] 1 containers: [3e471984022f]
	I0819 11:55:17.546252   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 11:55:17.566614   19545 logs.go:276] 1 containers: [0d32f5d90489]
	I0819 11:55:17.566719   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 11:55:17.581113   19545 logs.go:276] 4 containers: [109880862a1d 2b7a14e32ff2 079939a9b1a7 5ec50c57271c]
	I0819 11:55:17.581194   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 11:55:17.593168   19545 logs.go:276] 1 containers: [c3e3692e0c80]
	I0819 11:55:17.593233   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 11:55:17.603604   19545 logs.go:276] 1 containers: [ba925a3f77d6]
	I0819 11:55:17.603674   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 11:55:17.614328   19545 logs.go:276] 1 containers: [63a4476c2ace]
	I0819 11:55:17.614386   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 11:55:17.624582   19545 logs.go:276] 0 containers: []
	W0819 11:55:17.624595   19545 logs.go:278] No container was found matching "kindnet"
	I0819 11:55:17.624652   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 11:55:17.635322   19545 logs.go:276] 1 containers: [e6303d5f6b5f]
	I0819 11:55:17.635338   19545 logs.go:123] Gathering logs for describe nodes ...
	I0819 11:55:17.635346   19545 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 11:55:17.669669   19545 logs.go:123] Gathering logs for etcd [0d32f5d90489] ...
	I0819 11:55:17.669681   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d32f5d90489"
	I0819 11:55:17.684183   19545 logs.go:123] Gathering logs for container status ...
	I0819 11:55:17.684194   19545 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 11:55:17.695861   19545 logs.go:123] Gathering logs for coredns [2b7a14e32ff2] ...
	I0819 11:55:17.695873   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2b7a14e32ff2"
	I0819 11:55:17.707871   19545 logs.go:123] Gathering logs for coredns [079939a9b1a7] ...
	I0819 11:55:17.707884   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 079939a9b1a7"
	I0819 11:55:17.720613   19545 logs.go:123] Gathering logs for kube-scheduler [c3e3692e0c80] ...
	I0819 11:55:17.720621   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3e3692e0c80"
	I0819 11:55:17.735556   19545 logs.go:123] Gathering logs for storage-provisioner [e6303d5f6b5f] ...
	I0819 11:55:17.735567   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6303d5f6b5f"
	I0819 11:55:17.747108   19545 logs.go:123] Gathering logs for kubelet ...
	I0819 11:55:17.747120   19545 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 11:55:17.779764   19545 logs.go:123] Gathering logs for kube-apiserver [3e471984022f] ...
	I0819 11:55:17.779771   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e471984022f"
	I0819 11:55:17.801189   19545 logs.go:123] Gathering logs for coredns [5ec50c57271c] ...
	I0819 11:55:17.801200   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ec50c57271c"
	I0819 11:55:17.812634   19545 logs.go:123] Gathering logs for Docker ...
	I0819 11:55:17.812648   19545 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 11:55:17.835864   19545 logs.go:123] Gathering logs for dmesg ...
	I0819 11:55:17.835875   19545 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 11:55:17.839789   19545 logs.go:123] Gathering logs for coredns [109880862a1d] ...
	I0819 11:55:17.839797   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 109880862a1d"
	I0819 11:55:17.851068   19545 logs.go:123] Gathering logs for kube-proxy [ba925a3f77d6] ...
	I0819 11:55:17.851080   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba925a3f77d6"
	I0819 11:55:17.864307   19545 logs.go:123] Gathering logs for kube-controller-manager [63a4476c2ace] ...
	I0819 11:55:17.864316   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 63a4476c2ace"
	I0819 11:55:20.394523   19545 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:55:25.397060   19545 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:55:25.397133   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 11:55:25.411424   19545 logs.go:276] 1 containers: [3e471984022f]
	I0819 11:55:25.411467   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 11:55:25.423279   19545 logs.go:276] 1 containers: [0d32f5d90489]
	I0819 11:55:25.423341   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 11:55:25.434855   19545 logs.go:276] 4 containers: [109880862a1d 2b7a14e32ff2 079939a9b1a7 5ec50c57271c]
	I0819 11:55:25.434921   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 11:55:25.448415   19545 logs.go:276] 1 containers: [c3e3692e0c80]
	I0819 11:55:25.448468   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 11:55:25.459440   19545 logs.go:276] 1 containers: [ba925a3f77d6]
	I0819 11:55:25.459491   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 11:55:25.470567   19545 logs.go:276] 1 containers: [63a4476c2ace]
	I0819 11:55:25.470626   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 11:55:25.482792   19545 logs.go:276] 0 containers: []
	W0819 11:55:25.482806   19545 logs.go:278] No container was found matching "kindnet"
	I0819 11:55:25.482854   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 11:55:25.494723   19545 logs.go:276] 1 containers: [e6303d5f6b5f]
	I0819 11:55:25.494744   19545 logs.go:123] Gathering logs for etcd [0d32f5d90489] ...
	I0819 11:55:25.494750   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d32f5d90489"
	I0819 11:55:25.508869   19545 logs.go:123] Gathering logs for coredns [079939a9b1a7] ...
	I0819 11:55:25.508880   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 079939a9b1a7"
	I0819 11:55:25.521865   19545 logs.go:123] Gathering logs for kube-controller-manager [63a4476c2ace] ...
	I0819 11:55:25.521875   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 63a4476c2ace"
	I0819 11:55:25.541546   19545 logs.go:123] Gathering logs for dmesg ...
	I0819 11:55:25.541556   19545 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 11:55:25.546160   19545 logs.go:123] Gathering logs for coredns [5ec50c57271c] ...
	I0819 11:55:25.546170   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ec50c57271c"
	I0819 11:55:25.558264   19545 logs.go:123] Gathering logs for kube-proxy [ba925a3f77d6] ...
	I0819 11:55:25.558277   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba925a3f77d6"
	I0819 11:55:25.571488   19545 logs.go:123] Gathering logs for storage-provisioner [e6303d5f6b5f] ...
	I0819 11:55:25.571499   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6303d5f6b5f"
	I0819 11:55:25.586298   19545 logs.go:123] Gathering logs for kubelet ...
	I0819 11:55:25.586307   19545 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 11:55:25.619943   19545 logs.go:123] Gathering logs for describe nodes ...
	I0819 11:55:25.619959   19545 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 11:55:25.657566   19545 logs.go:123] Gathering logs for coredns [2b7a14e32ff2] ...
	I0819 11:55:25.657582   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2b7a14e32ff2"
	I0819 11:55:25.670492   19545 logs.go:123] Gathering logs for container status ...
	I0819 11:55:25.670504   19545 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 11:55:25.682591   19545 logs.go:123] Gathering logs for kube-apiserver [3e471984022f] ...
	I0819 11:55:25.682604   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e471984022f"
	I0819 11:55:25.698142   19545 logs.go:123] Gathering logs for coredns [109880862a1d] ...
	I0819 11:55:25.698157   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 109880862a1d"
	I0819 11:55:25.711318   19545 logs.go:123] Gathering logs for kube-scheduler [c3e3692e0c80] ...
	I0819 11:55:25.711330   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3e3692e0c80"
	I0819 11:55:25.726892   19545 logs.go:123] Gathering logs for Docker ...
	I0819 11:55:25.726900   19545 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 11:55:28.254498   19545 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:55:33.256852   19545 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:55:33.257340   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 11:55:33.301451   19545 logs.go:276] 1 containers: [3e471984022f]
	I0819 11:55:33.301583   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 11:55:33.323970   19545 logs.go:276] 1 containers: [0d32f5d90489]
	I0819 11:55:33.324064   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 11:55:33.338520   19545 logs.go:276] 4 containers: [109880862a1d 2b7a14e32ff2 079939a9b1a7 5ec50c57271c]
	I0819 11:55:33.338597   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 11:55:33.350439   19545 logs.go:276] 1 containers: [c3e3692e0c80]
	I0819 11:55:33.350501   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 11:55:33.361023   19545 logs.go:276] 1 containers: [ba925a3f77d6]
	I0819 11:55:33.361090   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 11:55:33.371441   19545 logs.go:276] 1 containers: [63a4476c2ace]
	I0819 11:55:33.371506   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 11:55:33.381422   19545 logs.go:276] 0 containers: []
	W0819 11:55:33.381436   19545 logs.go:278] No container was found matching "kindnet"
	I0819 11:55:33.381493   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 11:55:33.396184   19545 logs.go:276] 1 containers: [e6303d5f6b5f]
	I0819 11:55:33.396204   19545 logs.go:123] Gathering logs for coredns [079939a9b1a7] ...
	I0819 11:55:33.396209   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 079939a9b1a7"
	I0819 11:55:33.407934   19545 logs.go:123] Gathering logs for coredns [5ec50c57271c] ...
	I0819 11:55:33.407946   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ec50c57271c"
	I0819 11:55:33.419622   19545 logs.go:123] Gathering logs for kube-controller-manager [63a4476c2ace] ...
	I0819 11:55:33.419636   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 63a4476c2ace"
	I0819 11:55:33.436695   19545 logs.go:123] Gathering logs for storage-provisioner [e6303d5f6b5f] ...
	I0819 11:55:33.436703   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6303d5f6b5f"
	I0819 11:55:33.448849   19545 logs.go:123] Gathering logs for dmesg ...
	I0819 11:55:33.448862   19545 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 11:55:33.453415   19545 logs.go:123] Gathering logs for coredns [109880862a1d] ...
	I0819 11:55:33.453423   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 109880862a1d"
	I0819 11:55:33.474946   19545 logs.go:123] Gathering logs for Docker ...
	I0819 11:55:33.474958   19545 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 11:55:33.499891   19545 logs.go:123] Gathering logs for kubelet ...
	I0819 11:55:33.499903   19545 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 11:55:33.533720   19545 logs.go:123] Gathering logs for kube-scheduler [c3e3692e0c80] ...
	I0819 11:55:33.533730   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3e3692e0c80"
	I0819 11:55:33.548701   19545 logs.go:123] Gathering logs for coredns [2b7a14e32ff2] ...
	I0819 11:55:33.548712   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2b7a14e32ff2"
	I0819 11:55:33.559974   19545 logs.go:123] Gathering logs for kube-proxy [ba925a3f77d6] ...
	I0819 11:55:33.559986   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba925a3f77d6"
	I0819 11:55:33.571141   19545 logs.go:123] Gathering logs for describe nodes ...
	I0819 11:55:33.571153   19545 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 11:55:33.605068   19545 logs.go:123] Gathering logs for kube-apiserver [3e471984022f] ...
	I0819 11:55:33.605082   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e471984022f"
	I0819 11:55:33.620137   19545 logs.go:123] Gathering logs for etcd [0d32f5d90489] ...
	I0819 11:55:33.620149   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d32f5d90489"
	I0819 11:55:33.633862   19545 logs.go:123] Gathering logs for container status ...
	I0819 11:55:33.633874   19545 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 11:55:36.147933   19545 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:55:41.150660   19545 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:55:41.151129   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 11:55:41.191536   19545 logs.go:276] 1 containers: [3e471984022f]
	I0819 11:55:41.191679   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 11:55:41.213415   19545 logs.go:276] 1 containers: [0d32f5d90489]
	I0819 11:55:41.213514   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 11:55:41.229021   19545 logs.go:276] 4 containers: [109880862a1d 2b7a14e32ff2 079939a9b1a7 5ec50c57271c]
	I0819 11:55:41.229101   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 11:55:41.241497   19545 logs.go:276] 1 containers: [c3e3692e0c80]
	I0819 11:55:41.241566   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 11:55:41.252282   19545 logs.go:276] 1 containers: [ba925a3f77d6]
	I0819 11:55:41.252350   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 11:55:41.262723   19545 logs.go:276] 1 containers: [63a4476c2ace]
	I0819 11:55:41.262790   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 11:55:41.273755   19545 logs.go:276] 0 containers: []
	W0819 11:55:41.273767   19545 logs.go:278] No container was found matching "kindnet"
	I0819 11:55:41.273823   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 11:55:41.288643   19545 logs.go:276] 1 containers: [e6303d5f6b5f]
	I0819 11:55:41.288659   19545 logs.go:123] Gathering logs for kube-apiserver [3e471984022f] ...
	I0819 11:55:41.288664   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e471984022f"
	I0819 11:55:41.305203   19545 logs.go:123] Gathering logs for coredns [079939a9b1a7] ...
	I0819 11:55:41.305216   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 079939a9b1a7"
	I0819 11:55:41.320629   19545 logs.go:123] Gathering logs for container status ...
	I0819 11:55:41.320642   19545 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 11:55:41.332630   19545 logs.go:123] Gathering logs for kubelet ...
	I0819 11:55:41.332640   19545 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 11:55:41.366798   19545 logs.go:123] Gathering logs for coredns [5ec50c57271c] ...
	I0819 11:55:41.366806   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ec50c57271c"
	I0819 11:55:41.381226   19545 logs.go:123] Gathering logs for kube-proxy [ba925a3f77d6] ...
	I0819 11:55:41.381237   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba925a3f77d6"
	I0819 11:55:41.393103   19545 logs.go:123] Gathering logs for storage-provisioner [e6303d5f6b5f] ...
	I0819 11:55:41.393112   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6303d5f6b5f"
	I0819 11:55:41.404880   19545 logs.go:123] Gathering logs for Docker ...
	I0819 11:55:41.404891   19545 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 11:55:41.428568   19545 logs.go:123] Gathering logs for dmesg ...
	I0819 11:55:41.428577   19545 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 11:55:41.432759   19545 logs.go:123] Gathering logs for describe nodes ...
	I0819 11:55:41.432774   19545 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 11:55:41.467571   19545 logs.go:123] Gathering logs for coredns [109880862a1d] ...
	I0819 11:55:41.467583   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 109880862a1d"
	I0819 11:55:41.478802   19545 logs.go:123] Gathering logs for kube-scheduler [c3e3692e0c80] ...
	I0819 11:55:41.478816   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3e3692e0c80"
	I0819 11:55:41.493441   19545 logs.go:123] Gathering logs for etcd [0d32f5d90489] ...
	I0819 11:55:41.493450   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d32f5d90489"
	I0819 11:55:41.507333   19545 logs.go:123] Gathering logs for coredns [2b7a14e32ff2] ...
	I0819 11:55:41.507346   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2b7a14e32ff2"
	I0819 11:55:41.518874   19545 logs.go:123] Gathering logs for kube-controller-manager [63a4476c2ace] ...
	I0819 11:55:41.518886   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 63a4476c2ace"
	I0819 11:55:44.042016   19545 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:55:49.043445   19545 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:55:49.043538   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 11:55:49.055587   19545 logs.go:276] 1 containers: [3e471984022f]
	I0819 11:55:49.055642   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 11:55:49.067253   19545 logs.go:276] 1 containers: [0d32f5d90489]
	I0819 11:55:49.067335   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 11:55:49.079049   19545 logs.go:276] 4 containers: [109880862a1d 2b7a14e32ff2 079939a9b1a7 5ec50c57271c]
	I0819 11:55:49.079117   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 11:55:49.091140   19545 logs.go:276] 1 containers: [c3e3692e0c80]
	I0819 11:55:49.091205   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 11:55:49.102112   19545 logs.go:276] 1 containers: [ba925a3f77d6]
	I0819 11:55:49.102198   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 11:55:49.113910   19545 logs.go:276] 1 containers: [63a4476c2ace]
	I0819 11:55:49.113977   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 11:55:49.125042   19545 logs.go:276] 0 containers: []
	W0819 11:55:49.125053   19545 logs.go:278] No container was found matching "kindnet"
	I0819 11:55:49.125106   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 11:55:49.137563   19545 logs.go:276] 1 containers: [e6303d5f6b5f]
	I0819 11:55:49.137578   19545 logs.go:123] Gathering logs for kubelet ...
	I0819 11:55:49.137584   19545 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 11:55:49.173571   19545 logs.go:123] Gathering logs for describe nodes ...
	I0819 11:55:49.173587   19545 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 11:55:49.211510   19545 logs.go:123] Gathering logs for coredns [079939a9b1a7] ...
	I0819 11:55:49.211522   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 079939a9b1a7"
	I0819 11:55:49.225388   19545 logs.go:123] Gathering logs for dmesg ...
	I0819 11:55:49.225400   19545 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 11:55:49.230621   19545 logs.go:123] Gathering logs for etcd [0d32f5d90489] ...
	I0819 11:55:49.230632   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d32f5d90489"
	I0819 11:55:49.245318   19545 logs.go:123] Gathering logs for coredns [5ec50c57271c] ...
	I0819 11:55:49.245328   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ec50c57271c"
	I0819 11:55:49.258319   19545 logs.go:123] Gathering logs for kube-scheduler [c3e3692e0c80] ...
	I0819 11:55:49.258332   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3e3692e0c80"
	I0819 11:55:49.274545   19545 logs.go:123] Gathering logs for kube-controller-manager [63a4476c2ace] ...
	I0819 11:55:49.274557   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 63a4476c2ace"
	I0819 11:55:49.293185   19545 logs.go:123] Gathering logs for storage-provisioner [e6303d5f6b5f] ...
	I0819 11:55:49.293197   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6303d5f6b5f"
	I0819 11:55:49.306977   19545 logs.go:123] Gathering logs for Docker ...
	I0819 11:55:49.306987   19545 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 11:55:49.332001   19545 logs.go:123] Gathering logs for container status ...
	I0819 11:55:49.332020   19545 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 11:55:49.345585   19545 logs.go:123] Gathering logs for kube-apiserver [3e471984022f] ...
	I0819 11:55:49.345596   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e471984022f"
	I0819 11:55:49.363896   19545 logs.go:123] Gathering logs for coredns [109880862a1d] ...
	I0819 11:55:49.363906   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 109880862a1d"
	I0819 11:55:49.376095   19545 logs.go:123] Gathering logs for coredns [2b7a14e32ff2] ...
	I0819 11:55:49.376107   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2b7a14e32ff2"
	I0819 11:55:49.389509   19545 logs.go:123] Gathering logs for kube-proxy [ba925a3f77d6] ...
	I0819 11:55:49.389520   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba925a3f77d6"
	I0819 11:55:51.906850   19545 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:55:56.907860   19545 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:55:56.908195   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 11:55:56.938985   19545 logs.go:276] 1 containers: [3e471984022f]
	I0819 11:55:56.939109   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 11:55:56.961604   19545 logs.go:276] 1 containers: [0d32f5d90489]
	I0819 11:55:56.961682   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 11:55:56.975350   19545 logs.go:276] 4 containers: [109880862a1d 2b7a14e32ff2 079939a9b1a7 5ec50c57271c]
	I0819 11:55:56.975411   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 11:55:56.986725   19545 logs.go:276] 1 containers: [c3e3692e0c80]
	I0819 11:55:56.986787   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 11:55:56.998880   19545 logs.go:276] 1 containers: [ba925a3f77d6]
	I0819 11:55:56.998949   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 11:55:57.009794   19545 logs.go:276] 1 containers: [63a4476c2ace]
	I0819 11:55:57.009859   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 11:55:57.020451   19545 logs.go:276] 0 containers: []
	W0819 11:55:57.020462   19545 logs.go:278] No container was found matching "kindnet"
	I0819 11:55:57.020516   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 11:55:57.030908   19545 logs.go:276] 1 containers: [e6303d5f6b5f]
	I0819 11:55:57.030925   19545 logs.go:123] Gathering logs for describe nodes ...
	I0819 11:55:57.030930   19545 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 11:55:57.067382   19545 logs.go:123] Gathering logs for etcd [0d32f5d90489] ...
	I0819 11:55:57.067396   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d32f5d90489"
	I0819 11:55:57.081621   19545 logs.go:123] Gathering logs for coredns [109880862a1d] ...
	I0819 11:55:57.081630   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 109880862a1d"
	I0819 11:55:57.093111   19545 logs.go:123] Gathering logs for Docker ...
	I0819 11:55:57.093125   19545 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 11:55:57.117251   19545 logs.go:123] Gathering logs for container status ...
	I0819 11:55:57.117272   19545 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 11:55:57.140567   19545 logs.go:123] Gathering logs for dmesg ...
	I0819 11:55:57.140580   19545 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 11:55:57.145377   19545 logs.go:123] Gathering logs for kube-apiserver [3e471984022f] ...
	I0819 11:55:57.145385   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e471984022f"
	I0819 11:55:57.159373   19545 logs.go:123] Gathering logs for coredns [5ec50c57271c] ...
	I0819 11:55:57.159385   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ec50c57271c"
	I0819 11:55:57.171028   19545 logs.go:123] Gathering logs for kube-controller-manager [63a4476c2ace] ...
	I0819 11:55:57.171040   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 63a4476c2ace"
	I0819 11:55:57.189009   19545 logs.go:123] Gathering logs for storage-provisioner [e6303d5f6b5f] ...
	I0819 11:55:57.189020   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6303d5f6b5f"
	I0819 11:55:57.200370   19545 logs.go:123] Gathering logs for coredns [2b7a14e32ff2] ...
	I0819 11:55:57.200384   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2b7a14e32ff2"
	I0819 11:55:57.215375   19545 logs.go:123] Gathering logs for kubelet ...
	I0819 11:55:57.215387   19545 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 11:55:57.247579   19545 logs.go:123] Gathering logs for coredns [079939a9b1a7] ...
	I0819 11:55:57.247586   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 079939a9b1a7"
	I0819 11:55:57.259190   19545 logs.go:123] Gathering logs for kube-scheduler [c3e3692e0c80] ...
	I0819 11:55:57.259204   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3e3692e0c80"
	I0819 11:55:57.273100   19545 logs.go:123] Gathering logs for kube-proxy [ba925a3f77d6] ...
	I0819 11:55:57.273114   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba925a3f77d6"
	I0819 11:55:59.786891   19545 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:56:04.789234   19545 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:56:04.789513   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 11:56:04.829840   19545 logs.go:276] 1 containers: [3e471984022f]
	I0819 11:56:04.829955   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 11:56:04.845820   19545 logs.go:276] 1 containers: [0d32f5d90489]
	I0819 11:56:04.845902   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 11:56:04.858974   19545 logs.go:276] 4 containers: [109880862a1d 2b7a14e32ff2 079939a9b1a7 5ec50c57271c]
	I0819 11:56:04.859045   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 11:56:04.869570   19545 logs.go:276] 1 containers: [c3e3692e0c80]
	I0819 11:56:04.869633   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 11:56:04.880344   19545 logs.go:276] 1 containers: [ba925a3f77d6]
	I0819 11:56:04.880408   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 11:56:04.890992   19545 logs.go:276] 1 containers: [63a4476c2ace]
	I0819 11:56:04.891057   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 11:56:04.901326   19545 logs.go:276] 0 containers: []
	W0819 11:56:04.901336   19545 logs.go:278] No container was found matching "kindnet"
	I0819 11:56:04.901390   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 11:56:04.912054   19545 logs.go:276] 1 containers: [e6303d5f6b5f]
	I0819 11:56:04.912072   19545 logs.go:123] Gathering logs for coredns [079939a9b1a7] ...
	I0819 11:56:04.912077   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 079939a9b1a7"
	I0819 11:56:04.924219   19545 logs.go:123] Gathering logs for kube-scheduler [c3e3692e0c80] ...
	I0819 11:56:04.924231   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3e3692e0c80"
	I0819 11:56:04.938844   19545 logs.go:123] Gathering logs for describe nodes ...
	I0819 11:56:04.938853   19545 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 11:56:04.972342   19545 logs.go:123] Gathering logs for kube-proxy [ba925a3f77d6] ...
	I0819 11:56:04.972353   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba925a3f77d6"
	I0819 11:56:04.985074   19545 logs.go:123] Gathering logs for storage-provisioner [e6303d5f6b5f] ...
	I0819 11:56:04.985083   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6303d5f6b5f"
	I0819 11:56:04.996828   19545 logs.go:123] Gathering logs for container status ...
	I0819 11:56:04.996840   19545 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 11:56:05.008980   19545 logs.go:123] Gathering logs for etcd [0d32f5d90489] ...
	I0819 11:56:05.008995   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d32f5d90489"
	I0819 11:56:05.023666   19545 logs.go:123] Gathering logs for coredns [109880862a1d] ...
	I0819 11:56:05.023676   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 109880862a1d"
	I0819 11:56:05.035087   19545 logs.go:123] Gathering logs for coredns [5ec50c57271c] ...
	I0819 11:56:05.035097   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ec50c57271c"
	I0819 11:56:05.050871   19545 logs.go:123] Gathering logs for kube-controller-manager [63a4476c2ace] ...
	I0819 11:56:05.050883   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 63a4476c2ace"
	I0819 11:56:05.067821   19545 logs.go:123] Gathering logs for Docker ...
	I0819 11:56:05.067832   19545 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 11:56:05.091960   19545 logs.go:123] Gathering logs for kubelet ...
	I0819 11:56:05.091970   19545 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 11:56:05.126534   19545 logs.go:123] Gathering logs for dmesg ...
	I0819 11:56:05.126544   19545 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 11:56:05.130873   19545 logs.go:123] Gathering logs for kube-apiserver [3e471984022f] ...
	I0819 11:56:05.130878   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e471984022f"
	I0819 11:56:05.150846   19545 logs.go:123] Gathering logs for coredns [2b7a14e32ff2] ...
	I0819 11:56:05.150855   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2b7a14e32ff2"
	I0819 11:56:07.664577   19545 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:56:12.667283   19545 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:56:12.667469   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0819 11:56:12.690991   19545 logs.go:276] 1 containers: [3e471984022f]
	I0819 11:56:12.691110   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0819 11:56:12.706125   19545 logs.go:276] 1 containers: [0d32f5d90489]
	I0819 11:56:12.706193   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0819 11:56:12.719630   19545 logs.go:276] 4 containers: [109880862a1d 2b7a14e32ff2 079939a9b1a7 5ec50c57271c]
	I0819 11:56:12.719702   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0819 11:56:12.730237   19545 logs.go:276] 1 containers: [c3e3692e0c80]
	I0819 11:56:12.730303   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0819 11:56:12.740650   19545 logs.go:276] 1 containers: [ba925a3f77d6]
	I0819 11:56:12.740712   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0819 11:56:12.751110   19545 logs.go:276] 1 containers: [63a4476c2ace]
	I0819 11:56:12.751171   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0819 11:56:12.761072   19545 logs.go:276] 0 containers: []
	W0819 11:56:12.761083   19545 logs.go:278] No container was found matching "kindnet"
	I0819 11:56:12.761133   19545 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0819 11:56:12.771796   19545 logs.go:276] 1 containers: [e6303d5f6b5f]
	I0819 11:56:12.771813   19545 logs.go:123] Gathering logs for coredns [079939a9b1a7] ...
	I0819 11:56:12.771818   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 079939a9b1a7"
	I0819 11:56:12.787654   19545 logs.go:123] Gathering logs for container status ...
	I0819 11:56:12.787664   19545 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 11:56:12.799164   19545 logs.go:123] Gathering logs for kubelet ...
	I0819 11:56:12.799173   19545 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 11:56:12.833187   19545 logs.go:123] Gathering logs for describe nodes ...
	I0819 11:56:12.833196   19545 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 11:56:12.871739   19545 logs.go:123] Gathering logs for coredns [5ec50c57271c] ...
	I0819 11:56:12.871748   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ec50c57271c"
	I0819 11:56:12.887408   19545 logs.go:123] Gathering logs for coredns [2b7a14e32ff2] ...
	I0819 11:56:12.887416   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2b7a14e32ff2"
	I0819 11:56:12.898964   19545 logs.go:123] Gathering logs for kube-scheduler [c3e3692e0c80] ...
	I0819 11:56:12.898971   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3e3692e0c80"
	I0819 11:56:12.919105   19545 logs.go:123] Gathering logs for kube-proxy [ba925a3f77d6] ...
	I0819 11:56:12.919118   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba925a3f77d6"
	I0819 11:56:12.931496   19545 logs.go:123] Gathering logs for kube-controller-manager [63a4476c2ace] ...
	I0819 11:56:12.931508   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 63a4476c2ace"
	I0819 11:56:12.949916   19545 logs.go:123] Gathering logs for Docker ...
	I0819 11:56:12.949931   19545 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0819 11:56:12.975420   19545 logs.go:123] Gathering logs for dmesg ...
	I0819 11:56:12.975436   19545 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 11:56:12.981648   19545 logs.go:123] Gathering logs for coredns [109880862a1d] ...
	I0819 11:56:12.981664   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 109880862a1d"
	I0819 11:56:12.995724   19545 logs.go:123] Gathering logs for storage-provisioner [e6303d5f6b5f] ...
	I0819 11:56:12.995735   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6303d5f6b5f"
	I0819 11:56:13.008467   19545 logs.go:123] Gathering logs for kube-apiserver [3e471984022f] ...
	I0819 11:56:13.008478   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e471984022f"
	I0819 11:56:13.024057   19545 logs.go:123] Gathering logs for etcd [0d32f5d90489] ...
	I0819 11:56:13.024069   19545 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d32f5d90489"
	I0819 11:56:15.542425   19545 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0819 11:56:20.544682   19545 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0819 11:56:20.549856   19545 out.go:201] 
	W0819 11:56:20.552829   19545 out.go:270] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	W0819 11:56:20.552855   19545 out.go:270] * 
	W0819 11:56:20.555669   19545 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0819 11:56:20.567662   19545 out.go:201] 

** /stderr **
version_upgrade_test.go:200: upgrade from v1.26.0 to HEAD failed: out/minikube-darwin-arm64 start -p stopped-upgrade-604000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80
--- FAIL: TestStoppedBinaryUpgrade/Upgrade (575.50s)
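For reference, the wait that expires above is minikube polling the guest apiserver's /healthz endpoint (api_server.go:253 in the log) at https://10.0.2.15:8443/healthz, roughly every 8 seconds with a 5-second client timeout, until the 6m0s node-start budget runs out. A minimal sketch of the same probe for checking a guest by hand; the URL and timeout are taken from the log above, and -k stands in for trusting the cluster's self-signed CA:

    # Repeat the apiserver health probe: 5s per attempt, short pause between
    # attempts, stop as soon as the endpoint answers (a healthy apiserver
    # returns the literal string "ok").
    until curl -sk --max-time 5 https://10.0.2.15:8443/healthz; do
        sleep 3
    done

In this run every attempt ends in a client-side deadline ("context deadline exceeded"), so the node is never considered healthy and the upgrade exits with GUEST_START, status 80.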

TestPause/serial/Start (10s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-darwin-arm64 start -p pause-993000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 
pause_test.go:80: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p pause-993000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 : exit status 80 (9.936765625s)

-- stdout --
	* [pause-993000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19423
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19423-17178/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19423-17178/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "pause-993000" primary control-plane node in "pause-993000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "pause-993000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p pause-993000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
pause_test.go:82: failed to start minikube with args: "out/minikube-darwin-arm64 start -p pause-993000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p pause-993000 -n pause-993000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p pause-993000 -n pause-993000: exit status 7 (61.507125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "pause-993000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestPause/serial/Start (10.00s)
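Each qemu2 failure from here on has the same proximate cause: the socket_vmnet helper that the qemu2 driver uses for networking (SocketVMnetPath:/var/run/socket_vmnet and SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client in the profile config logged further below) refuses connections on the build host. A hedged triage sketch, assuming socket_vmnet is installed under /opt/socket_vmnet as the config suggests; the restart invocation and gateway address follow the socket_vmnet README and are assumptions, not values from this log:

    # Is the daemon alive, and does its unix socket exist?
    pgrep -fl socket_vmnet
    ls -l /var/run/socket_vmnet

    # If not, start it in the foreground for a quick test (normally it runs
    # under launchd); root is required to create the vmnet interface.
    sudo /opt/socket_vmnet/bin/socket_vmnet \
        --vmnet-gateway=192.168.105.1 /var/run/socket_vmnet

The remaining transcripts repeat the same pattern, two "Connection refused" attempts followed by GUEST_PROVISION and exit status 80, differing only in profile name and VM size.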

TestNoKubernetes/serial/StartWithK8s (9.85s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-441000 --driver=qemu2 
no_kubernetes_test.go:95: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-441000 --driver=qemu2 : exit status 80 (9.820699583s)

-- stdout --
	* [NoKubernetes-441000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19423
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19423-17178/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19423-17178/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "NoKubernetes-441000" primary control-plane node in "NoKubernetes-441000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "NoKubernetes-441000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-441000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
no_kubernetes_test.go:97: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-441000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-441000 -n NoKubernetes-441000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-441000 -n NoKubernetes-441000: exit status 7 (33.333625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-441000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartWithK8s (9.85s)

TestNoKubernetes/serial/StartWithStopK8s (5.31s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-441000 --no-kubernetes --driver=qemu2 
no_kubernetes_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-441000 --no-kubernetes --driver=qemu2 : exit status 80 (5.247854166s)

-- stdout --
	* [NoKubernetes-441000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19423
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19423-17178/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19423-17178/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-441000
	* Restarting existing qemu2 VM for "NoKubernetes-441000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-441000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-441000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
no_kubernetes_test.go:114: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-441000 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-441000 -n NoKubernetes-441000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-441000 -n NoKubernetes-441000: exit status 7 (60.569875ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-441000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartWithStopK8s (5.31s)

TestNoKubernetes/serial/Start (5.31s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-441000 --no-kubernetes --driver=qemu2 
no_kubernetes_test.go:136: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-441000 --no-kubernetes --driver=qemu2 : exit status 80 (5.242531042s)

-- stdout --
	* [NoKubernetes-441000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19423
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19423-17178/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19423-17178/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-441000
	* Restarting existing qemu2 VM for "NoKubernetes-441000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-441000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-441000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
no_kubernetes_test.go:138: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-441000 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-441000 -n NoKubernetes-441000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-441000 -n NoKubernetes-441000: exit status 7 (64.579625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-441000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/Start (5.31s)

TestNoKubernetes/serial/StartNoArgs (5.32s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-441000 --driver=qemu2 
no_kubernetes_test.go:191: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-441000 --driver=qemu2 : exit status 80 (5.270462s)

-- stdout --
	* [NoKubernetes-441000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19423
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19423-17178/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19423-17178/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-441000
	* Restarting existing qemu2 VM for "NoKubernetes-441000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-441000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-441000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
no_kubernetes_test.go:193: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-441000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-441000 -n NoKubernetes-441000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-441000 -n NoKubernetes-441000: exit status 7 (50.029333ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-441000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartNoArgs (5.32s)

TestNetworkPlugins/group/auto/Start (9.85s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p auto-773000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p auto-773000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=qemu2 : exit status 80 (9.844093584s)

-- stdout --
	* [auto-773000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19423
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19423-17178/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19423-17178/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "auto-773000" primary control-plane node in "auto-773000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "auto-773000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0819 11:54:50.770640   19758 out.go:345] Setting OutFile to fd 1 ...
	I0819 11:54:50.770764   19758 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 11:54:50.770767   19758 out.go:358] Setting ErrFile to fd 2...
	I0819 11:54:50.770770   19758 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 11:54:50.770887   19758 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19423-17178/.minikube/bin
	I0819 11:54:50.771988   19758 out.go:352] Setting JSON to false
	I0819 11:54:50.788532   19758 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":8657,"bootTime":1724085033,"procs":512,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0819 11:54:50.788603   19758 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0819 11:54:50.794829   19758 out.go:177] * [auto-773000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0819 11:54:50.802756   19758 out.go:177]   - MINIKUBE_LOCATION=19423
	I0819 11:54:50.802793   19758 notify.go:220] Checking for updates...
	I0819 11:54:50.808739   19758 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19423-17178/kubeconfig
	I0819 11:54:50.811760   19758 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0819 11:54:50.814736   19758 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0819 11:54:50.817733   19758 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19423-17178/.minikube
	I0819 11:54:50.820747   19758 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0819 11:54:50.822502   19758 config.go:182] Loaded profile config "multinode-587000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0819 11:54:50.822575   19758 config.go:182] Loaded profile config "stopped-upgrade-604000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0819 11:54:50.822624   19758 driver.go:394] Setting default libvirt URI to qemu:///system
	I0819 11:54:50.826740   19758 out.go:177] * Using the qemu2 driver based on user configuration
	I0819 11:54:50.833595   19758 start.go:297] selected driver: qemu2
	I0819 11:54:50.833602   19758 start.go:901] validating driver "qemu2" against <nil>
	I0819 11:54:50.833609   19758 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0819 11:54:50.835888   19758 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0819 11:54:50.838668   19758 out.go:177] * Automatically selected the socket_vmnet network
	I0819 11:54:50.841824   19758 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0819 11:54:50.841843   19758 cni.go:84] Creating CNI manager for ""
	I0819 11:54:50.841851   19758 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0819 11:54:50.841855   19758 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0819 11:54:50.841890   19758 start.go:340] cluster config:
	{Name:auto-773000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:auto-773000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 11:54:50.845727   19758 iso.go:125] acquiring lock: {Name:mk19d2f9dcacbbe3e95275f970a96b2d84f09461 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 11:54:50.853670   19758 out.go:177] * Starting "auto-773000" primary control-plane node in "auto-773000" cluster
	I0819 11:54:50.857727   19758 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0819 11:54:50.857748   19758 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19423-17178/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0819 11:54:50.857757   19758 cache.go:56] Caching tarball of preloaded images
	I0819 11:54:50.857811   19758 preload.go:172] Found /Users/jenkins/minikube-integration/19423-17178/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0819 11:54:50.857817   19758 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0819 11:54:50.857879   19758 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19423-17178/.minikube/profiles/auto-773000/config.json ...
	I0819 11:54:50.857889   19758 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19423-17178/.minikube/profiles/auto-773000/config.json: {Name:mkd2ec7e65b063bcfec883d0af9a938ded15a4d4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 11:54:50.858242   19758 start.go:360] acquireMachinesLock for auto-773000: {Name:mk51682daf9d132f21e1aba1b32ed96e7f05425f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 11:54:50.858276   19758 start.go:364] duration metric: took 28.583µs to acquireMachinesLock for "auto-773000"
	I0819 11:54:50.858288   19758 start.go:93] Provisioning new machine with config: &{Name:auto-773000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:auto-773000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0819 11:54:50.858316   19758 start.go:125] createHost starting for "" (driver="qemu2")
	I0819 11:54:50.862735   19758 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0819 11:54:50.880754   19758 start.go:159] libmachine.API.Create for "auto-773000" (driver="qemu2")
	I0819 11:54:50.880778   19758 client.go:168] LocalClient.Create starting
	I0819 11:54:50.880833   19758 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19423-17178/.minikube/certs/ca.pem
	I0819 11:54:50.880862   19758 main.go:141] libmachine: Decoding PEM data...
	I0819 11:54:50.880875   19758 main.go:141] libmachine: Parsing certificate...
	I0819 11:54:50.880912   19758 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19423-17178/.minikube/certs/cert.pem
	I0819 11:54:50.880936   19758 main.go:141] libmachine: Decoding PEM data...
	I0819 11:54:50.880948   19758 main.go:141] libmachine: Parsing certificate...
	I0819 11:54:50.881270   19758 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19423-17178/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19423-17178/.minikube/cache/iso/arm64/minikube-v1.33.1-1723740674-19452-arm64.iso...
	I0819 11:54:51.030389   19758 main.go:141] libmachine: Creating SSH key...
	I0819 11:54:51.163681   19758 main.go:141] libmachine: Creating Disk image...
	I0819 11:54:51.163687   19758 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0819 11:54:51.163859   19758 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/auto-773000/disk.qcow2.raw /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/auto-773000/disk.qcow2
	I0819 11:54:51.173464   19758 main.go:141] libmachine: STDOUT: 
	I0819 11:54:51.173486   19758 main.go:141] libmachine: STDERR: 
	I0819 11:54:51.173542   19758 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/auto-773000/disk.qcow2 +20000M
	I0819 11:54:51.181790   19758 main.go:141] libmachine: STDOUT: Image resized.
	
	I0819 11:54:51.181805   19758 main.go:141] libmachine: STDERR: 
	I0819 11:54:51.181824   19758 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/auto-773000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/auto-773000/disk.qcow2
	I0819 11:54:51.181830   19758 main.go:141] libmachine: Starting QEMU VM...
	I0819 11:54:51.181844   19758 qemu.go:418] Using hvf for hardware acceleration
	I0819 11:54:51.181872   19758 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/auto-773000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19423-17178/.minikube/machines/auto-773000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/auto-773000/qemu.pid -device virtio-net-pci,netdev=net0,mac=5e:64:2d:f8:86:d0 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/auto-773000/disk.qcow2
	I0819 11:54:51.183529   19758 main.go:141] libmachine: STDOUT: 
	I0819 11:54:51.183547   19758 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0819 11:54:51.183563   19758 client.go:171] duration metric: took 302.787417ms to LocalClient.Create
	I0819 11:54:53.185724   19758 start.go:128] duration metric: took 2.327427042s to createHost
	I0819 11:54:53.185798   19758 start.go:83] releasing machines lock for "auto-773000", held for 2.32756225s
	W0819 11:54:53.185864   19758 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0819 11:54:53.193311   19758 out.go:177] * Deleting "auto-773000" in qemu2 ...
	W0819 11:54:53.225849   19758 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0819 11:54:53.225877   19758 start.go:729] Will try again in 5 seconds ...
	I0819 11:54:58.228105   19758 start.go:360] acquireMachinesLock for auto-773000: {Name:mk51682daf9d132f21e1aba1b32ed96e7f05425f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 11:54:58.228663   19758 start.go:364] duration metric: took 435.541µs to acquireMachinesLock for "auto-773000"
	I0819 11:54:58.228744   19758 start.go:93] Provisioning new machine with config: &{Name:auto-773000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:auto-773000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0819 11:54:58.229048   19758 start.go:125] createHost starting for "" (driver="qemu2")
	I0819 11:54:58.238726   19758 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0819 11:54:58.287529   19758 start.go:159] libmachine.API.Create for "auto-773000" (driver="qemu2")
	I0819 11:54:58.287574   19758 client.go:168] LocalClient.Create starting
	I0819 11:54:58.287699   19758 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19423-17178/.minikube/certs/ca.pem
	I0819 11:54:58.287775   19758 main.go:141] libmachine: Decoding PEM data...
	I0819 11:54:58.287793   19758 main.go:141] libmachine: Parsing certificate...
	I0819 11:54:58.287862   19758 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19423-17178/.minikube/certs/cert.pem
	I0819 11:54:58.287911   19758 main.go:141] libmachine: Decoding PEM data...
	I0819 11:54:58.287924   19758 main.go:141] libmachine: Parsing certificate...
	I0819 11:54:58.288467   19758 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19423-17178/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19423-17178/.minikube/cache/iso/arm64/minikube-v1.33.1-1723740674-19452-arm64.iso...
	I0819 11:54:58.448184   19758 main.go:141] libmachine: Creating SSH key...
	I0819 11:54:58.520332   19758 main.go:141] libmachine: Creating Disk image...
	I0819 11:54:58.520341   19758 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0819 11:54:58.520528   19758 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/auto-773000/disk.qcow2.raw /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/auto-773000/disk.qcow2
	I0819 11:54:58.530060   19758 main.go:141] libmachine: STDOUT: 
	I0819 11:54:58.530082   19758 main.go:141] libmachine: STDERR: 
	I0819 11:54:58.530133   19758 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/auto-773000/disk.qcow2 +20000M
	I0819 11:54:58.538202   19758 main.go:141] libmachine: STDOUT: Image resized.
	
	I0819 11:54:58.538218   19758 main.go:141] libmachine: STDERR: 
	I0819 11:54:58.538227   19758 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/auto-773000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/auto-773000/disk.qcow2
	I0819 11:54:58.538232   19758 main.go:141] libmachine: Starting QEMU VM...
	I0819 11:54:58.538242   19758 qemu.go:418] Using hvf for hardware acceleration
	I0819 11:54:58.538267   19758 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/auto-773000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19423-17178/.minikube/machines/auto-773000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/auto-773000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ce:f8:d2:d3:c0:f1 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/auto-773000/disk.qcow2
	I0819 11:54:58.539864   19758 main.go:141] libmachine: STDOUT: 
	I0819 11:54:58.539881   19758 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0819 11:54:58.539896   19758 client.go:171] duration metric: took 252.322708ms to LocalClient.Create
	I0819 11:55:00.542072   19758 start.go:128] duration metric: took 2.313023583s to createHost
	I0819 11:55:00.542145   19758 start.go:83] releasing machines lock for "auto-773000", held for 2.313506792s
	W0819 11:55:00.542656   19758 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p auto-773000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p auto-773000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0819 11:55:00.553353   19758 out.go:201] 
	W0819 11:55:00.560559   19758 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0819 11:55:00.560637   19758 out.go:270] * 
	* 
	W0819 11:55:00.562721   19758 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0819 11:55:00.573251   19758 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/auto/Start (9.85s)
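Every failure in this group reduces to the same root cause visible in the stderr above: nothing was listening on /var/run/socket_vmnet, so socket_vmnet_client could not hand QEMU a connected descriptor, and each start exited with status 80 after a single retry. A diagnostic sketch for the CI host follows; the paths match the log, while the daemon invocation mirrors the socket_vmnet documentation and is illustrative rather than taken from this report:

	# Check whether the daemon is running and the socket exists
	ls -l /var/run/socket_vmnet
	pgrep -fl socket_vmnet

	# If absent, start the daemon (vmnet requires root); the gateway address is an example
	sudo /opt/socket_vmnet/bin/socket_vmnet --vmnet-gateway=192.168.105.1 /var/run/socket_vmnet

	# Re-check connectivity the same way minikube does before rerunning the suite
	/opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet true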

TestNetworkPlugins/group/custom-flannel/Start (9.82s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p custom-flannel-773000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p custom-flannel-773000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=qemu2 : exit status 80 (9.81454875s)

-- stdout --
	* [custom-flannel-773000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19423
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19423-17178/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19423-17178/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "custom-flannel-773000" primary control-plane node in "custom-flannel-773000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "custom-flannel-773000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0819 11:55:02.751823   19871 out.go:345] Setting OutFile to fd 1 ...
	I0819 11:55:02.751975   19871 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 11:55:02.751980   19871 out.go:358] Setting ErrFile to fd 2...
	I0819 11:55:02.751982   19871 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 11:55:02.752119   19871 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19423-17178/.minikube/bin
	I0819 11:55:02.753225   19871 out.go:352] Setting JSON to false
	I0819 11:55:02.769618   19871 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":8669,"bootTime":1724085033,"procs":510,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0819 11:55:02.769700   19871 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0819 11:55:02.777611   19871 out.go:177] * [custom-flannel-773000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0819 11:55:02.785601   19871 out.go:177]   - MINIKUBE_LOCATION=19423
	I0819 11:55:02.785668   19871 notify.go:220] Checking for updates...
	I0819 11:55:02.792529   19871 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19423-17178/kubeconfig
	I0819 11:55:02.795547   19871 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0819 11:55:02.798612   19871 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0819 11:55:02.801564   19871 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19423-17178/.minikube
	I0819 11:55:02.804507   19871 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0819 11:55:02.807936   19871 config.go:182] Loaded profile config "multinode-587000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0819 11:55:02.807999   19871 config.go:182] Loaded profile config "stopped-upgrade-604000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0819 11:55:02.808047   19871 driver.go:394] Setting default libvirt URI to qemu:///system
	I0819 11:55:02.812628   19871 out.go:177] * Using the qemu2 driver based on user configuration
	I0819 11:55:02.819587   19871 start.go:297] selected driver: qemu2
	I0819 11:55:02.819594   19871 start.go:901] validating driver "qemu2" against <nil>
	I0819 11:55:02.819606   19871 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0819 11:55:02.821703   19871 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0819 11:55:02.824613   19871 out.go:177] * Automatically selected the socket_vmnet network
	I0819 11:55:02.827582   19871 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0819 11:55:02.827598   19871 cni.go:84] Creating CNI manager for "testdata/kube-flannel.yaml"
	I0819 11:55:02.827605   19871 start_flags.go:319] Found "testdata/kube-flannel.yaml" CNI - setting NetworkPlugin=cni
	I0819 11:55:02.827640   19871 start.go:340] cluster config:
	{Name:custom-flannel-773000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:custom-flannel-773000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 11:55:02.831220   19871 iso.go:125] acquiring lock: {Name:mk19d2f9dcacbbe3e95275f970a96b2d84f09461 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 11:55:02.839567   19871 out.go:177] * Starting "custom-flannel-773000" primary control-plane node in "custom-flannel-773000" cluster
	I0819 11:55:02.843553   19871 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0819 11:55:02.843571   19871 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19423-17178/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0819 11:55:02.843581   19871 cache.go:56] Caching tarball of preloaded images
	I0819 11:55:02.843649   19871 preload.go:172] Found /Users/jenkins/minikube-integration/19423-17178/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0819 11:55:02.843655   19871 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0819 11:55:02.843754   19871 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19423-17178/.minikube/profiles/custom-flannel-773000/config.json ...
	I0819 11:55:02.843765   19871 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19423-17178/.minikube/profiles/custom-flannel-773000/config.json: {Name:mkefe7c60baa41d1be6f2c88c42a069900d51f09 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 11:55:02.844098   19871 start.go:360] acquireMachinesLock for custom-flannel-773000: {Name:mk51682daf9d132f21e1aba1b32ed96e7f05425f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 11:55:02.844133   19871 start.go:364] duration metric: took 26.333µs to acquireMachinesLock for "custom-flannel-773000"
	I0819 11:55:02.844144   19871 start.go:93] Provisioning new machine with config: &{Name:custom-flannel-773000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:custom-flannel-773000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0819 11:55:02.844176   19871 start.go:125] createHost starting for "" (driver="qemu2")
	I0819 11:55:02.848553   19871 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0819 11:55:02.864584   19871 start.go:159] libmachine.API.Create for "custom-flannel-773000" (driver="qemu2")
	I0819 11:55:02.864609   19871 client.go:168] LocalClient.Create starting
	I0819 11:55:02.864667   19871 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19423-17178/.minikube/certs/ca.pem
	I0819 11:55:02.864698   19871 main.go:141] libmachine: Decoding PEM data...
	I0819 11:55:02.864708   19871 main.go:141] libmachine: Parsing certificate...
	I0819 11:55:02.864746   19871 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19423-17178/.minikube/certs/cert.pem
	I0819 11:55:02.864768   19871 main.go:141] libmachine: Decoding PEM data...
	I0819 11:55:02.864774   19871 main.go:141] libmachine: Parsing certificate...
	I0819 11:55:02.865103   19871 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19423-17178/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19423-17178/.minikube/cache/iso/arm64/minikube-v1.33.1-1723740674-19452-arm64.iso...
	I0819 11:55:03.032228   19871 main.go:141] libmachine: Creating SSH key...
	I0819 11:55:03.155204   19871 main.go:141] libmachine: Creating Disk image...
	I0819 11:55:03.155216   19871 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0819 11:55:03.155417   19871 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/custom-flannel-773000/disk.qcow2.raw /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/custom-flannel-773000/disk.qcow2
	I0819 11:55:03.164990   19871 main.go:141] libmachine: STDOUT: 
	I0819 11:55:03.165011   19871 main.go:141] libmachine: STDERR: 
	I0819 11:55:03.165066   19871 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/custom-flannel-773000/disk.qcow2 +20000M
	I0819 11:55:03.173129   19871 main.go:141] libmachine: STDOUT: Image resized.
	
	I0819 11:55:03.173145   19871 main.go:141] libmachine: STDERR: 
	I0819 11:55:03.173166   19871 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/custom-flannel-773000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/custom-flannel-773000/disk.qcow2
	I0819 11:55:03.173174   19871 main.go:141] libmachine: Starting QEMU VM...
	I0819 11:55:03.173187   19871 qemu.go:418] Using hvf for hardware acceleration
	I0819 11:55:03.173215   19871 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/custom-flannel-773000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19423-17178/.minikube/machines/custom-flannel-773000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/custom-flannel-773000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ba:da:90:c2:df:5b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/custom-flannel-773000/disk.qcow2
	I0819 11:55:03.174848   19871 main.go:141] libmachine: STDOUT: 
	I0819 11:55:03.174865   19871 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0819 11:55:03.174885   19871 client.go:171] duration metric: took 310.278417ms to LocalClient.Create
	I0819 11:55:05.177065   19871 start.go:128] duration metric: took 2.332918542s to createHost
	I0819 11:55:05.177151   19871 start.go:83] releasing machines lock for "custom-flannel-773000", held for 2.3330475s
	W0819 11:55:05.177308   19871 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0819 11:55:05.184642   19871 out.go:177] * Deleting "custom-flannel-773000" in qemu2 ...
	W0819 11:55:05.220035   19871 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0819 11:55:05.220063   19871 start.go:729] Will try again in 5 seconds ...
	I0819 11:55:10.222094   19871 start.go:360] acquireMachinesLock for custom-flannel-773000: {Name:mk51682daf9d132f21e1aba1b32ed96e7f05425f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 11:55:10.222395   19871 start.go:364] duration metric: took 253.208µs to acquireMachinesLock for "custom-flannel-773000"
	I0819 11:55:10.222430   19871 start.go:93] Provisioning new machine with config: &{Name:custom-flannel-773000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:custom-flannel-773000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0819 11:55:10.222567   19871 start.go:125] createHost starting for "" (driver="qemu2")
	I0819 11:55:10.231888   19871 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0819 11:55:10.268188   19871 start.go:159] libmachine.API.Create for "custom-flannel-773000" (driver="qemu2")
	I0819 11:55:10.268233   19871 client.go:168] LocalClient.Create starting
	I0819 11:55:10.268337   19871 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19423-17178/.minikube/certs/ca.pem
	I0819 11:55:10.268395   19871 main.go:141] libmachine: Decoding PEM data...
	I0819 11:55:10.268408   19871 main.go:141] libmachine: Parsing certificate...
	I0819 11:55:10.268470   19871 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19423-17178/.minikube/certs/cert.pem
	I0819 11:55:10.268509   19871 main.go:141] libmachine: Decoding PEM data...
	I0819 11:55:10.268517   19871 main.go:141] libmachine: Parsing certificate...
	I0819 11:55:10.269091   19871 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19423-17178/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19423-17178/.minikube/cache/iso/arm64/minikube-v1.33.1-1723740674-19452-arm64.iso...
	I0819 11:55:10.425519   19871 main.go:141] libmachine: Creating SSH key...
	I0819 11:55:10.479165   19871 main.go:141] libmachine: Creating Disk image...
	I0819 11:55:10.479173   19871 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0819 11:55:10.479366   19871 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/custom-flannel-773000/disk.qcow2.raw /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/custom-flannel-773000/disk.qcow2
	I0819 11:55:10.488682   19871 main.go:141] libmachine: STDOUT: 
	I0819 11:55:10.488697   19871 main.go:141] libmachine: STDERR: 
	I0819 11:55:10.488739   19871 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/custom-flannel-773000/disk.qcow2 +20000M
	I0819 11:55:10.496917   19871 main.go:141] libmachine: STDOUT: Image resized.
	
	I0819 11:55:10.496938   19871 main.go:141] libmachine: STDERR: 
	I0819 11:55:10.496966   19871 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/custom-flannel-773000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/custom-flannel-773000/disk.qcow2
	I0819 11:55:10.496971   19871 main.go:141] libmachine: Starting QEMU VM...
	I0819 11:55:10.496981   19871 qemu.go:418] Using hvf for hardware acceleration
	I0819 11:55:10.497009   19871 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/custom-flannel-773000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19423-17178/.minikube/machines/custom-flannel-773000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/custom-flannel-773000/qemu.pid -device virtio-net-pci,netdev=net0,mac=0e:2d:01:f9:c7:cf -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/custom-flannel-773000/disk.qcow2
	I0819 11:55:10.498869   19871 main.go:141] libmachine: STDOUT: 
	I0819 11:55:10.498893   19871 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0819 11:55:10.498905   19871 client.go:171] duration metric: took 230.6725ms to LocalClient.Create
	I0819 11:55:12.501054   19871 start.go:128] duration metric: took 2.27850375s to createHost
	I0819 11:55:12.501132   19871 start.go:83] releasing machines lock for "custom-flannel-773000", held for 2.278771542s
	W0819 11:55:12.501417   19871 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p custom-flannel-773000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p custom-flannel-773000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0819 11:55:12.509091   19871 out.go:201] 
	W0819 11:55:12.512127   19871 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0819 11:55:12.512150   19871 out.go:270] * 
	* 
	W0819 11:55:12.513863   19871 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0819 11:55:12.525933   19871 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/custom-flannel/Start (9.82s)
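The failing step is the network plumbing rather than QEMU itself: minikube execs socket_vmnet_client, which connects to the unix socket and then execs qemu-system-aarch64 with the connected socket inherited as descriptor 3, which is why the logged command lines end with -netdev socket,id=net0,fd=3. Because that connect happens before QEMU launches, a refused connection fails within milliseconds; the ~2.3s createHost durations above are lock, retry, and teardown overhead, not VM boot time. A trimmed by-hand reproduction of the same wiring, assuming the daemon is up (flags are copied from the command lines logged above; the ISO path is a placeholder):

	# socket_vmnet_client connects to the socket, then execs QEMU with the
	# connected descriptor inherited as fd 3 for the virtio-net backend
	/opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet \
	  qemu-system-aarch64 -M virt,highmem=off -cpu host -accel hvf \
	  -m 3072 -smp 2 -display none -boot d \
	  -device virtio-net-pci,netdev=net0 \
	  -netdev socket,id=net0,fd=3 \
	  -cdrom ./boot2docker.iso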

TestNetworkPlugins/group/false/Start (9.79s)

=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p false-773000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p false-773000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=qemu2 : exit status 80 (9.788769375s)

-- stdout --
	* [false-773000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19423
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19423-17178/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19423-17178/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "false-773000" primary control-plane node in "false-773000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "false-773000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0819 11:55:14.854227   19989 out.go:345] Setting OutFile to fd 1 ...
	I0819 11:55:14.854349   19989 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 11:55:14.854354   19989 out.go:358] Setting ErrFile to fd 2...
	I0819 11:55:14.854356   19989 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 11:55:14.854480   19989 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19423-17178/.minikube/bin
	I0819 11:55:14.855524   19989 out.go:352] Setting JSON to false
	I0819 11:55:14.871914   19989 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":8681,"bootTime":1724085033,"procs":509,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0819 11:55:14.872011   19989 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0819 11:55:14.876661   19989 out.go:177] * [false-773000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0819 11:55:14.880723   19989 out.go:177]   - MINIKUBE_LOCATION=19423
	I0819 11:55:14.880780   19989 notify.go:220] Checking for updates...
	I0819 11:55:14.888610   19989 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19423-17178/kubeconfig
	I0819 11:55:14.891722   19989 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0819 11:55:14.894720   19989 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0819 11:55:14.897660   19989 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19423-17178/.minikube
	I0819 11:55:14.900766   19989 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0819 11:55:14.904067   19989 config.go:182] Loaded profile config "multinode-587000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0819 11:55:14.904130   19989 config.go:182] Loaded profile config "stopped-upgrade-604000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0819 11:55:14.904174   19989 driver.go:394] Setting default libvirt URI to qemu:///system
	I0819 11:55:14.908695   19989 out.go:177] * Using the qemu2 driver based on user configuration
	I0819 11:55:14.915676   19989 start.go:297] selected driver: qemu2
	I0819 11:55:14.915682   19989 start.go:901] validating driver "qemu2" against <nil>
	I0819 11:55:14.915688   19989 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0819 11:55:14.917953   19989 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0819 11:55:14.920581   19989 out.go:177] * Automatically selected the socket_vmnet network
	I0819 11:55:14.923739   19989 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0819 11:55:14.923760   19989 cni.go:84] Creating CNI manager for "false"
	I0819 11:55:14.923798   19989 start.go:340] cluster config:
	{Name:false-773000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:false-773000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 11:55:14.927170   19989 iso.go:125] acquiring lock: {Name:mk19d2f9dcacbbe3e95275f970a96b2d84f09461 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 11:55:14.935635   19989 out.go:177] * Starting "false-773000" primary control-plane node in "false-773000" cluster
	I0819 11:55:14.939652   19989 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0819 11:55:14.939664   19989 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19423-17178/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0819 11:55:14.939670   19989 cache.go:56] Caching tarball of preloaded images
	I0819 11:55:14.939717   19989 preload.go:172] Found /Users/jenkins/minikube-integration/19423-17178/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0819 11:55:14.939721   19989 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0819 11:55:14.939779   19989 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19423-17178/.minikube/profiles/false-773000/config.json ...
	I0819 11:55:14.939789   19989 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19423-17178/.minikube/profiles/false-773000/config.json: {Name:mk3f87fd73ffae56df5a5e6b695d3715d15ab5d5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 11:55:14.940028   19989 start.go:360] acquireMachinesLock for false-773000: {Name:mk51682daf9d132f21e1aba1b32ed96e7f05425f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 11:55:14.940058   19989 start.go:364] duration metric: took 24.75µs to acquireMachinesLock for "false-773000"
	I0819 11:55:14.940068   19989 start.go:93] Provisioning new machine with config: &{Name:false-773000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:false-773000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0819 11:55:14.940101   19989 start.go:125] createHost starting for "" (driver="qemu2")
	I0819 11:55:14.944685   19989 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0819 11:55:14.960550   19989 start.go:159] libmachine.API.Create for "false-773000" (driver="qemu2")
	I0819 11:55:14.960567   19989 client.go:168] LocalClient.Create starting
	I0819 11:55:14.960628   19989 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19423-17178/.minikube/certs/ca.pem
	I0819 11:55:14.960658   19989 main.go:141] libmachine: Decoding PEM data...
	I0819 11:55:14.960667   19989 main.go:141] libmachine: Parsing certificate...
	I0819 11:55:14.960701   19989 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19423-17178/.minikube/certs/cert.pem
	I0819 11:55:14.960723   19989 main.go:141] libmachine: Decoding PEM data...
	I0819 11:55:14.960734   19989 main.go:141] libmachine: Parsing certificate...
	I0819 11:55:14.961070   19989 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19423-17178/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19423-17178/.minikube/cache/iso/arm64/minikube-v1.33.1-1723740674-19452-arm64.iso...
	I0819 11:55:15.108018   19989 main.go:141] libmachine: Creating SSH key...
	I0819 11:55:15.211797   19989 main.go:141] libmachine: Creating Disk image...
	I0819 11:55:15.211804   19989 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0819 11:55:15.211981   19989 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/false-773000/disk.qcow2.raw /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/false-773000/disk.qcow2
	I0819 11:55:15.221095   19989 main.go:141] libmachine: STDOUT: 
	I0819 11:55:15.221115   19989 main.go:141] libmachine: STDERR: 
	I0819 11:55:15.221179   19989 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/false-773000/disk.qcow2 +20000M
	I0819 11:55:15.229194   19989 main.go:141] libmachine: STDOUT: Image resized.
	
	I0819 11:55:15.229217   19989 main.go:141] libmachine: STDERR: 
	I0819 11:55:15.229229   19989 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/false-773000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/false-773000/disk.qcow2
	I0819 11:55:15.229235   19989 main.go:141] libmachine: Starting QEMU VM...
	I0819 11:55:15.229244   19989 qemu.go:418] Using hvf for hardware acceleration
	I0819 11:55:15.229276   19989 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/false-773000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19423-17178/.minikube/machines/false-773000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/false-773000/qemu.pid -device virtio-net-pci,netdev=net0,mac=da:13:77:5c:e1:17 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/false-773000/disk.qcow2
	I0819 11:55:15.230878   19989 main.go:141] libmachine: STDOUT: 
	I0819 11:55:15.230901   19989 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0819 11:55:15.230918   19989 client.go:171] duration metric: took 270.352916ms to LocalClient.Create
	I0819 11:55:17.233090   19989 start.go:128] duration metric: took 2.293008334s to createHost
	I0819 11:55:17.233199   19989 start.go:83] releasing machines lock for "false-773000", held for 2.293180959s
	W0819 11:55:17.233271   19989 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0819 11:55:17.245649   19989 out.go:177] * Deleting "false-773000" in qemu2 ...
	W0819 11:55:17.274868   19989 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0819 11:55:17.274896   19989 start.go:729] Will try again in 5 seconds ...
	I0819 11:55:22.277052   19989 start.go:360] acquireMachinesLock for false-773000: {Name:mk51682daf9d132f21e1aba1b32ed96e7f05425f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 11:55:22.277622   19989 start.go:364] duration metric: took 447µs to acquireMachinesLock for "false-773000"
	I0819 11:55:22.277711   19989 start.go:93] Provisioning new machine with config: &{Name:false-773000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:false-773000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0819 11:55:22.278006   19989 start.go:125] createHost starting for "" (driver="qemu2")
	I0819 11:55:22.286528   19989 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0819 11:55:22.334075   19989 start.go:159] libmachine.API.Create for "false-773000" (driver="qemu2")
	I0819 11:55:22.334125   19989 client.go:168] LocalClient.Create starting
	I0819 11:55:22.334247   19989 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19423-17178/.minikube/certs/ca.pem
	I0819 11:55:22.334313   19989 main.go:141] libmachine: Decoding PEM data...
	I0819 11:55:22.334330   19989 main.go:141] libmachine: Parsing certificate...
	I0819 11:55:22.334405   19989 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19423-17178/.minikube/certs/cert.pem
	I0819 11:55:22.334450   19989 main.go:141] libmachine: Decoding PEM data...
	I0819 11:55:22.334466   19989 main.go:141] libmachine: Parsing certificate...
	I0819 11:55:22.335020   19989 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19423-17178/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19423-17178/.minikube/cache/iso/arm64/minikube-v1.33.1-1723740674-19452-arm64.iso...
	I0819 11:55:22.494512   19989 main.go:141] libmachine: Creating SSH key...
	I0819 11:55:22.551517   19989 main.go:141] libmachine: Creating Disk image...
	I0819 11:55:22.551527   19989 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0819 11:55:22.551711   19989 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/false-773000/disk.qcow2.raw /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/false-773000/disk.qcow2
	I0819 11:55:22.561330   19989 main.go:141] libmachine: STDOUT: 
	I0819 11:55:22.561349   19989 main.go:141] libmachine: STDERR: 
	I0819 11:55:22.561405   19989 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/false-773000/disk.qcow2 +20000M
	I0819 11:55:22.569564   19989 main.go:141] libmachine: STDOUT: Image resized.
	
	I0819 11:55:22.569582   19989 main.go:141] libmachine: STDERR: 
	I0819 11:55:22.569593   19989 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/false-773000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/false-773000/disk.qcow2
	I0819 11:55:22.569596   19989 main.go:141] libmachine: Starting QEMU VM...
	I0819 11:55:22.569607   19989 qemu.go:418] Using hvf for hardware acceleration
	I0819 11:55:22.569635   19989 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/false-773000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19423-17178/.minikube/machines/false-773000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/false-773000/qemu.pid -device virtio-net-pci,netdev=net0,mac=4a:1b:4c:b5:00:ed -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/false-773000/disk.qcow2
	I0819 11:55:22.571279   19989 main.go:141] libmachine: STDOUT: 
	I0819 11:55:22.571296   19989 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0819 11:55:22.571308   19989 client.go:171] duration metric: took 237.183167ms to LocalClient.Create
	I0819 11:55:24.573458   19989 start.go:128] duration metric: took 2.295461375s to createHost
	I0819 11:55:24.573622   19989 start.go:83] releasing machines lock for "false-773000", held for 2.296003583s
	W0819 11:55:24.573864   19989 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p false-773000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p false-773000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0819 11:55:24.585549   19989 out.go:201] 
	W0819 11:55:24.589601   19989 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0819 11:55:24.589632   19989 out.go:270] * 
	* 
	W0819 11:55:24.591533   19989 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0819 11:55:24.600489   19989 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/false/Start (9.79s)
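
Note: every start in this group dies at the same point. socket_vmnet_client gets "Connection refused" dialing /var/run/socket_vmnet, which for a unix socket means the socket file exists but no daemon is accepting on it, so qemu-system-aarch64 is never launched and minikube exits 80 after its single retry. A minimal pre-flight probe one could run on the agent to confirm the daemon state (a hypothetical Go sketch, not part of minikube or the test suite; the path matches the SocketVMnetPath in the config above):

	// probe_socket_vmnet.go: hypothetical diagnostic, not part of the test suite.
	// Dials the unix socket that socket_vmnet_client uses and classifies the failure.
	package main

	import (
		"errors"
		"fmt"
		"io/fs"
		"net"
		"os"
		"time"
	)

	func main() {
		const sock = "/var/run/socket_vmnet" // SocketVMnetPath from the cluster config

		conn, err := net.DialTimeout("unix", sock, 2*time.Second)
		if err == nil {
			conn.Close()
			fmt.Println("socket_vmnet is accepting connections")
			return
		}
		switch {
		case errors.Is(err, fs.ErrNotExist):
			fmt.Println("socket file missing: the daemon was never started")
		case errors.Is(err, fs.ErrPermission):
			fmt.Println("socket exists but is not readable by this user")
		default:
			// "connection refused" = socket file present, nothing listening:
			// the daemon crashed or was stopped without cleaning up.
			fmt.Printf("dial failed: %v\n", err)
		}
		os.Exit(1)
	}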

TestNetworkPlugins/group/calico/Start (9.77s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p calico-773000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p calico-773000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=qemu2 : exit status 80 (9.771312417s)

-- stdout --
	* [calico-773000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19423
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19423-17178/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19423-17178/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "calico-773000" primary control-plane node in "calico-773000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "calico-773000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0819 11:55:26.820313   20100 out.go:345] Setting OutFile to fd 1 ...
	I0819 11:55:26.820432   20100 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 11:55:26.820436   20100 out.go:358] Setting ErrFile to fd 2...
	I0819 11:55:26.820438   20100 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 11:55:26.820581   20100 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19423-17178/.minikube/bin
	I0819 11:55:26.821922   20100 out.go:352] Setting JSON to false
	I0819 11:55:26.838749   20100 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":8693,"bootTime":1724085033,"procs":511,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0819 11:55:26.838824   20100 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0819 11:55:26.845291   20100 out.go:177] * [calico-773000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0819 11:55:26.853249   20100 out.go:177]   - MINIKUBE_LOCATION=19423
	I0819 11:55:26.853293   20100 notify.go:220] Checking for updates...
	I0819 11:55:26.860248   20100 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19423-17178/kubeconfig
	I0819 11:55:26.863295   20100 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0819 11:55:26.866302   20100 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0819 11:55:26.869290   20100 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19423-17178/.minikube
	I0819 11:55:26.872229   20100 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0819 11:55:26.875641   20100 config.go:182] Loaded profile config "multinode-587000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0819 11:55:26.875720   20100 config.go:182] Loaded profile config "stopped-upgrade-604000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0819 11:55:26.875763   20100 driver.go:394] Setting default libvirt URI to qemu:///system
	I0819 11:55:26.879280   20100 out.go:177] * Using the qemu2 driver based on user configuration
	I0819 11:55:26.886270   20100 start.go:297] selected driver: qemu2
	I0819 11:55:26.886276   20100 start.go:901] validating driver "qemu2" against <nil>
	I0819 11:55:26.886282   20100 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0819 11:55:26.888322   20100 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0819 11:55:26.892221   20100 out.go:177] * Automatically selected the socket_vmnet network
	I0819 11:55:26.895285   20100 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0819 11:55:26.895315   20100 cni.go:84] Creating CNI manager for "calico"
	I0819 11:55:26.895319   20100 start_flags.go:319] Found "Calico" CNI - setting NetworkPlugin=cni
	I0819 11:55:26.895346   20100 start.go:340] cluster config:
	{Name:calico-773000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:calico-773000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 11:55:26.898615   20100 iso.go:125] acquiring lock: {Name:mk19d2f9dcacbbe3e95275f970a96b2d84f09461 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 11:55:26.907284   20100 out.go:177] * Starting "calico-773000" primary control-plane node in "calico-773000" cluster
	I0819 11:55:26.911204   20100 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0819 11:55:26.911215   20100 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19423-17178/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0819 11:55:26.911223   20100 cache.go:56] Caching tarball of preloaded images
	I0819 11:55:26.911275   20100 preload.go:172] Found /Users/jenkins/minikube-integration/19423-17178/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0819 11:55:26.911280   20100 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0819 11:55:26.911335   20100 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19423-17178/.minikube/profiles/calico-773000/config.json ...
	I0819 11:55:26.911344   20100 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19423-17178/.minikube/profiles/calico-773000/config.json: {Name:mk6f3574d95633f968de7a39a5d1104849b86b1c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 11:55:26.911686   20100 start.go:360] acquireMachinesLock for calico-773000: {Name:mk51682daf9d132f21e1aba1b32ed96e7f05425f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 11:55:26.911715   20100 start.go:364] duration metric: took 24.541µs to acquireMachinesLock for "calico-773000"
	I0819 11:55:26.911726   20100 start.go:93] Provisioning new machine with config: &{Name:calico-773000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:calico-773000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0819 11:55:26.911751   20100 start.go:125] createHost starting for "" (driver="qemu2")
	I0819 11:55:26.916349   20100 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0819 11:55:26.931442   20100 start.go:159] libmachine.API.Create for "calico-773000" (driver="qemu2")
	I0819 11:55:26.931467   20100 client.go:168] LocalClient.Create starting
	I0819 11:55:26.931526   20100 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19423-17178/.minikube/certs/ca.pem
	I0819 11:55:26.931557   20100 main.go:141] libmachine: Decoding PEM data...
	I0819 11:55:26.931565   20100 main.go:141] libmachine: Parsing certificate...
	I0819 11:55:26.931606   20100 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19423-17178/.minikube/certs/cert.pem
	I0819 11:55:26.931629   20100 main.go:141] libmachine: Decoding PEM data...
	I0819 11:55:26.931638   20100 main.go:141] libmachine: Parsing certificate...
	I0819 11:55:26.932029   20100 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19423-17178/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19423-17178/.minikube/cache/iso/arm64/minikube-v1.33.1-1723740674-19452-arm64.iso...
	I0819 11:55:27.081011   20100 main.go:141] libmachine: Creating SSH key...
	I0819 11:55:27.181284   20100 main.go:141] libmachine: Creating Disk image...
	I0819 11:55:27.181292   20100 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0819 11:55:27.181496   20100 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/calico-773000/disk.qcow2.raw /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/calico-773000/disk.qcow2
	I0819 11:55:27.191036   20100 main.go:141] libmachine: STDOUT: 
	I0819 11:55:27.191058   20100 main.go:141] libmachine: STDERR: 
	I0819 11:55:27.191122   20100 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/calico-773000/disk.qcow2 +20000M
	I0819 11:55:27.199104   20100 main.go:141] libmachine: STDOUT: Image resized.
	
	I0819 11:55:27.199125   20100 main.go:141] libmachine: STDERR: 
	I0819 11:55:27.199149   20100 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/calico-773000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/calico-773000/disk.qcow2
	I0819 11:55:27.199162   20100 main.go:141] libmachine: Starting QEMU VM...
	I0819 11:55:27.199174   20100 qemu.go:418] Using hvf for hardware acceleration
	I0819 11:55:27.199199   20100 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/calico-773000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19423-17178/.minikube/machines/calico-773000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/calico-773000/qemu.pid -device virtio-net-pci,netdev=net0,mac=8e:65:f1:f1:a4:92 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/calico-773000/disk.qcow2
	I0819 11:55:27.200859   20100 main.go:141] libmachine: STDOUT: 
	I0819 11:55:27.200880   20100 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0819 11:55:27.200900   20100 client.go:171] duration metric: took 269.433583ms to LocalClient.Create
	I0819 11:55:29.203065   20100 start.go:128] duration metric: took 2.291330458s to createHost
	I0819 11:55:29.203116   20100 start.go:83] releasing machines lock for "calico-773000", held for 2.291440458s
	W0819 11:55:29.203185   20100 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0819 11:55:29.209454   20100 out.go:177] * Deleting "calico-773000" in qemu2 ...
	W0819 11:55:29.238086   20100 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0819 11:55:29.238110   20100 start.go:729] Will try again in 5 seconds ...
	I0819 11:55:34.240311   20100 start.go:360] acquireMachinesLock for calico-773000: {Name:mk51682daf9d132f21e1aba1b32ed96e7f05425f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 11:55:34.240815   20100 start.go:364] duration metric: took 414.791µs to acquireMachinesLock for "calico-773000"
	I0819 11:55:34.240934   20100 start.go:93] Provisioning new machine with config: &{Name:calico-773000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:calico-773000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0819 11:55:34.241297   20100 start.go:125] createHost starting for "" (driver="qemu2")
	I0819 11:55:34.250942   20100 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0819 11:55:34.299976   20100 start.go:159] libmachine.API.Create for "calico-773000" (driver="qemu2")
	I0819 11:55:34.300036   20100 client.go:168] LocalClient.Create starting
	I0819 11:55:34.300156   20100 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19423-17178/.minikube/certs/ca.pem
	I0819 11:55:34.300231   20100 main.go:141] libmachine: Decoding PEM data...
	I0819 11:55:34.300247   20100 main.go:141] libmachine: Parsing certificate...
	I0819 11:55:34.300309   20100 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19423-17178/.minikube/certs/cert.pem
	I0819 11:55:34.300355   20100 main.go:141] libmachine: Decoding PEM data...
	I0819 11:55:34.300371   20100 main.go:141] libmachine: Parsing certificate...
	I0819 11:55:34.300947   20100 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19423-17178/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19423-17178/.minikube/cache/iso/arm64/minikube-v1.33.1-1723740674-19452-arm64.iso...
	I0819 11:55:34.461215   20100 main.go:141] libmachine: Creating SSH key...
	I0819 11:55:34.498755   20100 main.go:141] libmachine: Creating Disk image...
	I0819 11:55:34.498763   20100 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0819 11:55:34.498938   20100 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/calico-773000/disk.qcow2.raw /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/calico-773000/disk.qcow2
	I0819 11:55:34.508627   20100 main.go:141] libmachine: STDOUT: 
	I0819 11:55:34.508669   20100 main.go:141] libmachine: STDERR: 
	I0819 11:55:34.508736   20100 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/calico-773000/disk.qcow2 +20000M
	I0819 11:55:34.516840   20100 main.go:141] libmachine: STDOUT: Image resized.
	
	I0819 11:55:34.516860   20100 main.go:141] libmachine: STDERR: 
	I0819 11:55:34.516871   20100 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/calico-773000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/calico-773000/disk.qcow2
	I0819 11:55:34.516876   20100 main.go:141] libmachine: Starting QEMU VM...
	I0819 11:55:34.516886   20100 qemu.go:418] Using hvf for hardware acceleration
	I0819 11:55:34.516913   20100 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/calico-773000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19423-17178/.minikube/machines/calico-773000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/calico-773000/qemu.pid -device virtio-net-pci,netdev=net0,mac=52:b6:d6:9e:b9:f5 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/calico-773000/disk.qcow2
	I0819 11:55:34.518613   20100 main.go:141] libmachine: STDOUT: 
	I0819 11:55:34.518632   20100 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0819 11:55:34.518645   20100 client.go:171] duration metric: took 218.609ms to LocalClient.Create
	I0819 11:55:36.520839   20100 start.go:128] duration metric: took 2.279551375s to createHost
	I0819 11:55:36.520902   20100 start.go:83] releasing machines lock for "calico-773000", held for 2.280110167s
	W0819 11:55:36.521194   20100 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p calico-773000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p calico-773000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0819 11:55:36.534642   20100 out.go:201] 
	W0819 11:55:36.537757   20100 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0819 11:55:36.537781   20100 out.go:270] * 
	* 
	W0819 11:55:36.539180   20100 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0819 11:55:36.549662   20100 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/calico/Start (9.77s)
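
Note: the disk-image step succeeds on every attempt; only the vmnet hookup fails. The qemu-img convert + resize pair that libmachine logs ("Creating 20000 MB hard disk image...") can be replayed standalone with a sketch like the following (hypothetical file names, not the Jenkins paths above; assumes qemu-img is on PATH):

	// makedisk.go: replays the two qemu-img calls seen in the logs.
	// Hypothetical sketch; minikube drives these through its own exec wrapper.
	package main

	import (
		"log"
		"os/exec"
	)

	func run(name string, args ...string) {
		out, err := exec.Command(name, args...).CombinedOutput()
		if err != nil {
			log.Fatalf("%s %v: %v\n%s", name, args, err, out)
		}
		log.Printf("%s: %s", name, out)
	}

	func main() {
		raw, qcow2 := "disk.qcow2.raw", "disk.qcow2" // placeholder paths

		// Convert the raw seed image to qcow2 ("Creating Disk image...").
		run("qemu-img", "convert", "-f", "raw", "-O", "qcow2", raw, qcow2)

		// Grow the image by 20000 MB; qemu-img prints "Image resized." on success.
		run("qemu-img", "resize", qcow2, "+20000M")
	}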

TestNetworkPlugins/group/kindnet/Start (9.86s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p kindnet-773000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kindnet-773000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=qemu2 : exit status 80 (9.861958375s)

-- stdout --
	* [kindnet-773000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19423
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19423-17178/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19423-17178/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "kindnet-773000" primary control-plane node in "kindnet-773000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kindnet-773000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0819 11:55:38.959332   20217 out.go:345] Setting OutFile to fd 1 ...
	I0819 11:55:38.959453   20217 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 11:55:38.959456   20217 out.go:358] Setting ErrFile to fd 2...
	I0819 11:55:38.959459   20217 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 11:55:38.959574   20217 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19423-17178/.minikube/bin
	I0819 11:55:38.960592   20217 out.go:352] Setting JSON to false
	I0819 11:55:38.977094   20217 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":8705,"bootTime":1724085033,"procs":509,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0819 11:55:38.977160   20217 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0819 11:55:38.984931   20217 out.go:177] * [kindnet-773000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0819 11:55:38.993781   20217 out.go:177]   - MINIKUBE_LOCATION=19423
	I0819 11:55:38.993821   20217 notify.go:220] Checking for updates...
	I0819 11:55:39.000648   20217 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19423-17178/kubeconfig
	I0819 11:55:39.003694   20217 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0819 11:55:39.007697   20217 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0819 11:55:39.010756   20217 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19423-17178/.minikube
	I0819 11:55:39.013638   20217 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0819 11:55:39.017102   20217 config.go:182] Loaded profile config "multinode-587000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0819 11:55:39.017166   20217 config.go:182] Loaded profile config "stopped-upgrade-604000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0819 11:55:39.017227   20217 driver.go:394] Setting default libvirt URI to qemu:///system
	I0819 11:55:39.021645   20217 out.go:177] * Using the qemu2 driver based on user configuration
	I0819 11:55:39.028713   20217 start.go:297] selected driver: qemu2
	I0819 11:55:39.028719   20217 start.go:901] validating driver "qemu2" against <nil>
	I0819 11:55:39.028726   20217 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0819 11:55:39.030810   20217 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0819 11:55:39.033632   20217 out.go:177] * Automatically selected the socket_vmnet network
	I0819 11:55:39.036684   20217 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0819 11:55:39.036699   20217 cni.go:84] Creating CNI manager for "kindnet"
	I0819 11:55:39.036701   20217 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0819 11:55:39.036732   20217 start.go:340] cluster config:
	{Name:kindnet-773000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:kindnet-773000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 11:55:39.039909   20217 iso.go:125] acquiring lock: {Name:mk19d2f9dcacbbe3e95275f970a96b2d84f09461 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 11:55:39.050699   20217 out.go:177] * Starting "kindnet-773000" primary control-plane node in "kindnet-773000" cluster
	I0819 11:55:39.054708   20217 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0819 11:55:39.054726   20217 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19423-17178/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0819 11:55:39.054736   20217 cache.go:56] Caching tarball of preloaded images
	I0819 11:55:39.054797   20217 preload.go:172] Found /Users/jenkins/minikube-integration/19423-17178/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0819 11:55:39.054804   20217 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0819 11:55:39.054879   20217 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19423-17178/.minikube/profiles/kindnet-773000/config.json ...
	I0819 11:55:39.054895   20217 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19423-17178/.minikube/profiles/kindnet-773000/config.json: {Name:mke882e7326223e500f14de33649a176625bce67 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 11:55:39.055104   20217 start.go:360] acquireMachinesLock for kindnet-773000: {Name:mk51682daf9d132f21e1aba1b32ed96e7f05425f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 11:55:39.055135   20217 start.go:364] duration metric: took 26.292µs to acquireMachinesLock for "kindnet-773000"
	I0819 11:55:39.055146   20217 start.go:93] Provisioning new machine with config: &{Name:kindnet-773000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:kindnet-773000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0819 11:55:39.055204   20217 start.go:125] createHost starting for "" (driver="qemu2")
	I0819 11:55:39.063707   20217 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0819 11:55:39.079726   20217 start.go:159] libmachine.API.Create for "kindnet-773000" (driver="qemu2")
	I0819 11:55:39.079751   20217 client.go:168] LocalClient.Create starting
	I0819 11:55:39.079831   20217 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19423-17178/.minikube/certs/ca.pem
	I0819 11:55:39.079866   20217 main.go:141] libmachine: Decoding PEM data...
	I0819 11:55:39.079878   20217 main.go:141] libmachine: Parsing certificate...
	I0819 11:55:39.079920   20217 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19423-17178/.minikube/certs/cert.pem
	I0819 11:55:39.079943   20217 main.go:141] libmachine: Decoding PEM data...
	I0819 11:55:39.079949   20217 main.go:141] libmachine: Parsing certificate...
	I0819 11:55:39.080277   20217 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19423-17178/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19423-17178/.minikube/cache/iso/arm64/minikube-v1.33.1-1723740674-19452-arm64.iso...
	I0819 11:55:39.231329   20217 main.go:141] libmachine: Creating SSH key...
	I0819 11:55:39.366550   20217 main.go:141] libmachine: Creating Disk image...
	I0819 11:55:39.366558   20217 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0819 11:55:39.366769   20217 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/kindnet-773000/disk.qcow2.raw /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/kindnet-773000/disk.qcow2
	I0819 11:55:39.376296   20217 main.go:141] libmachine: STDOUT: 
	I0819 11:55:39.376350   20217 main.go:141] libmachine: STDERR: 
	I0819 11:55:39.376410   20217 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/kindnet-773000/disk.qcow2 +20000M
	I0819 11:55:39.384731   20217 main.go:141] libmachine: STDOUT: Image resized.
	
	I0819 11:55:39.384747   20217 main.go:141] libmachine: STDERR: 
	I0819 11:55:39.384760   20217 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/kindnet-773000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/kindnet-773000/disk.qcow2
	I0819 11:55:39.384763   20217 main.go:141] libmachine: Starting QEMU VM...
	I0819 11:55:39.384776   20217 qemu.go:418] Using hvf for hardware acceleration
	I0819 11:55:39.384812   20217 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/kindnet-773000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19423-17178/.minikube/machines/kindnet-773000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/kindnet-773000/qemu.pid -device virtio-net-pci,netdev=net0,mac=5e:4d:60:30:fc:26 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/kindnet-773000/disk.qcow2
	I0819 11:55:39.386509   20217 main.go:141] libmachine: STDOUT: 
	I0819 11:55:39.386553   20217 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0819 11:55:39.386571   20217 client.go:171] duration metric: took 306.823291ms to LocalClient.Create
	I0819 11:55:41.388618   20217 start.go:128] duration metric: took 2.333457417s to createHost
	I0819 11:55:41.388640   20217 start.go:83] releasing machines lock for "kindnet-773000", held for 2.333550917s
	W0819 11:55:41.388656   20217 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0819 11:55:41.398191   20217 out.go:177] * Deleting "kindnet-773000" in qemu2 ...
	W0819 11:55:41.408724   20217 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0819 11:55:41.408733   20217 start.go:729] Will try again in 5 seconds ...
	I0819 11:55:46.410906   20217 start.go:360] acquireMachinesLock for kindnet-773000: {Name:mk51682daf9d132f21e1aba1b32ed96e7f05425f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 11:55:46.411370   20217 start.go:364] duration metric: took 366.291µs to acquireMachinesLock for "kindnet-773000"
	I0819 11:55:46.411495   20217 start.go:93] Provisioning new machine with config: &{Name:kindnet-773000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:kindnet-773000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0819 11:55:46.411694   20217 start.go:125] createHost starting for "" (driver="qemu2")
	I0819 11:55:46.419140   20217 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0819 11:55:46.458377   20217 start.go:159] libmachine.API.Create for "kindnet-773000" (driver="qemu2")
	I0819 11:55:46.458426   20217 client.go:168] LocalClient.Create starting
	I0819 11:55:46.458530   20217 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19423-17178/.minikube/certs/ca.pem
	I0819 11:55:46.458594   20217 main.go:141] libmachine: Decoding PEM data...
	I0819 11:55:46.458610   20217 main.go:141] libmachine: Parsing certificate...
	I0819 11:55:46.458665   20217 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19423-17178/.minikube/certs/cert.pem
	I0819 11:55:46.458705   20217 main.go:141] libmachine: Decoding PEM data...
	I0819 11:55:46.458721   20217 main.go:141] libmachine: Parsing certificate...
	I0819 11:55:46.459200   20217 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19423-17178/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19423-17178/.minikube/cache/iso/arm64/minikube-v1.33.1-1723740674-19452-arm64.iso...
	I0819 11:55:46.616430   20217 main.go:141] libmachine: Creating SSH key...
	I0819 11:55:46.732564   20217 main.go:141] libmachine: Creating Disk image...
	I0819 11:55:46.732573   20217 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0819 11:55:46.732760   20217 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/kindnet-773000/disk.qcow2.raw /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/kindnet-773000/disk.qcow2
	I0819 11:55:46.742019   20217 main.go:141] libmachine: STDOUT: 
	I0819 11:55:46.742041   20217 main.go:141] libmachine: STDERR: 
	I0819 11:55:46.742101   20217 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/kindnet-773000/disk.qcow2 +20000M
	I0819 11:55:46.750261   20217 main.go:141] libmachine: STDOUT: Image resized.
	
	I0819 11:55:46.750282   20217 main.go:141] libmachine: STDERR: 
	I0819 11:55:46.750294   20217 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/kindnet-773000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/kindnet-773000/disk.qcow2
	I0819 11:55:46.750299   20217 main.go:141] libmachine: Starting QEMU VM...
	I0819 11:55:46.750305   20217 qemu.go:418] Using hvf for hardware acceleration
	I0819 11:55:46.750348   20217 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/kindnet-773000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19423-17178/.minikube/machines/kindnet-773000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/kindnet-773000/qemu.pid -device virtio-net-pci,netdev=net0,mac=da:b9:f8:9c:67:79 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/kindnet-773000/disk.qcow2
	I0819 11:55:46.752010   20217 main.go:141] libmachine: STDOUT: 
	I0819 11:55:46.752027   20217 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0819 11:55:46.752039   20217 client.go:171] duration metric: took 293.6145ms to LocalClient.Create
	I0819 11:55:48.754223   20217 start.go:128] duration metric: took 2.342523791s to createHost
	I0819 11:55:48.754313   20217 start.go:83] releasing machines lock for "kindnet-773000", held for 2.342972625s
	W0819 11:55:48.754729   20217 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p kindnet-773000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kindnet-773000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0819 11:55:48.764086   20217 out.go:201] 
	W0819 11:55:48.768332   20217 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0819 11:55:48.768355   20217 out.go:270] * 
	* 
	W0819 11:55:48.771020   20217 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0819 11:55:48.780241   20217 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/kindnet/Start (9.86s)
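
Every start in this group fails at the same step: /opt/socket_vmnet/bin/socket_vmnet_client cannot reach the daemon socket at /var/run/socket_vmnet ("Connection refused"), so QEMU is never actually launched. A quick host-side check is sketched below; it assumes socket_vmnet was installed via Homebrew, in which case the service name should match the formula name.

	# Is the daemon running, and does the socket exist?
	pgrep -fl socket_vmnet
	ls -l /var/run/socket_vmnet
	# If not, (re)start it; root is needed because vmnet requires elevated privileges.
	sudo brew services start socket_vmnet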

TestNetworkPlugins/group/flannel/Start (9.74s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p flannel-773000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p flannel-773000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=qemu2 : exit status 80 (9.739249417s)

-- stdout --
	* [flannel-773000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19423
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19423-17178/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19423-17178/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "flannel-773000" primary control-plane node in "flannel-773000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "flannel-773000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0819 11:55:51.116961   20333 out.go:345] Setting OutFile to fd 1 ...
	I0819 11:55:51.117109   20333 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 11:55:51.117116   20333 out.go:358] Setting ErrFile to fd 2...
	I0819 11:55:51.117120   20333 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 11:55:51.117257   20333 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19423-17178/.minikube/bin
	I0819 11:55:51.118389   20333 out.go:352] Setting JSON to false
	I0819 11:55:51.135606   20333 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":8718,"bootTime":1724085033,"procs":509,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0819 11:55:51.135683   20333 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0819 11:55:51.142855   20333 out.go:177] * [flannel-773000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0819 11:55:51.149898   20333 out.go:177]   - MINIKUBE_LOCATION=19423
	I0819 11:55:51.149942   20333 notify.go:220] Checking for updates...
	I0819 11:55:51.158794   20333 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19423-17178/kubeconfig
	I0819 11:55:51.161805   20333 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0819 11:55:51.164868   20333 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0819 11:55:51.167814   20333 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19423-17178/.minikube
	I0819 11:55:51.170775   20333 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0819 11:55:51.174215   20333 config.go:182] Loaded profile config "multinode-587000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0819 11:55:51.174283   20333 config.go:182] Loaded profile config "stopped-upgrade-604000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0819 11:55:51.174327   20333 driver.go:394] Setting default libvirt URI to qemu:///system
	I0819 11:55:51.178665   20333 out.go:177] * Using the qemu2 driver based on user configuration
	I0819 11:55:51.185804   20333 start.go:297] selected driver: qemu2
	I0819 11:55:51.185811   20333 start.go:901] validating driver "qemu2" against <nil>
	I0819 11:55:51.185818   20333 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0819 11:55:51.188164   20333 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0819 11:55:51.191753   20333 out.go:177] * Automatically selected the socket_vmnet network
	I0819 11:55:51.194927   20333 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0819 11:55:51.194947   20333 cni.go:84] Creating CNI manager for "flannel"
	I0819 11:55:51.194950   20333 start_flags.go:319] Found "Flannel" CNI - setting NetworkPlugin=cni
	I0819 11:55:51.194991   20333 start.go:340] cluster config:
	{Name:flannel-773000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:flannel-773000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 11:55:51.198576   20333 iso.go:125] acquiring lock: {Name:mk19d2f9dcacbbe3e95275f970a96b2d84f09461 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 11:55:51.203781   20333 out.go:177] * Starting "flannel-773000" primary control-plane node in "flannel-773000" cluster
	I0819 11:55:51.207736   20333 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0819 11:55:51.207750   20333 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19423-17178/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0819 11:55:51.207759   20333 cache.go:56] Caching tarball of preloaded images
	I0819 11:55:51.207815   20333 preload.go:172] Found /Users/jenkins/minikube-integration/19423-17178/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0819 11:55:51.207820   20333 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0819 11:55:51.207883   20333 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19423-17178/.minikube/profiles/flannel-773000/config.json ...
	I0819 11:55:51.207893   20333 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19423-17178/.minikube/profiles/flannel-773000/config.json: {Name:mk5968ccc4469c44edcbc15b1e9663c11f8fef75 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 11:55:51.208262   20333 start.go:360] acquireMachinesLock for flannel-773000: {Name:mk51682daf9d132f21e1aba1b32ed96e7f05425f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 11:55:51.208301   20333 start.go:364] duration metric: took 32.917µs to acquireMachinesLock for "flannel-773000"
	I0819 11:55:51.208315   20333 start.go:93] Provisioning new machine with config: &{Name:flannel-773000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:flannel-773000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0819 11:55:51.208348   20333 start.go:125] createHost starting for "" (driver="qemu2")
	I0819 11:55:51.211802   20333 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0819 11:55:51.228321   20333 start.go:159] libmachine.API.Create for "flannel-773000" (driver="qemu2")
	I0819 11:55:51.228350   20333 client.go:168] LocalClient.Create starting
	I0819 11:55:51.228417   20333 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19423-17178/.minikube/certs/ca.pem
	I0819 11:55:51.228446   20333 main.go:141] libmachine: Decoding PEM data...
	I0819 11:55:51.228466   20333 main.go:141] libmachine: Parsing certificate...
	I0819 11:55:51.228502   20333 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19423-17178/.minikube/certs/cert.pem
	I0819 11:55:51.228524   20333 main.go:141] libmachine: Decoding PEM data...
	I0819 11:55:51.228533   20333 main.go:141] libmachine: Parsing certificate...
	I0819 11:55:51.228945   20333 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19423-17178/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19423-17178/.minikube/cache/iso/arm64/minikube-v1.33.1-1723740674-19452-arm64.iso...
	I0819 11:55:51.382226   20333 main.go:141] libmachine: Creating SSH key...
	I0819 11:55:51.455798   20333 main.go:141] libmachine: Creating Disk image...
	I0819 11:55:51.455804   20333 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0819 11:55:51.455986   20333 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/flannel-773000/disk.qcow2.raw /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/flannel-773000/disk.qcow2
	I0819 11:55:51.465096   20333 main.go:141] libmachine: STDOUT: 
	I0819 11:55:51.465113   20333 main.go:141] libmachine: STDERR: 
	I0819 11:55:51.465169   20333 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/flannel-773000/disk.qcow2 +20000M
	I0819 11:55:51.473012   20333 main.go:141] libmachine: STDOUT: Image resized.
	
	I0819 11:55:51.473024   20333 main.go:141] libmachine: STDERR: 
	I0819 11:55:51.473032   20333 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/flannel-773000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/flannel-773000/disk.qcow2
	I0819 11:55:51.473037   20333 main.go:141] libmachine: Starting QEMU VM...
	I0819 11:55:51.473048   20333 qemu.go:418] Using hvf for hardware acceleration
	I0819 11:55:51.473072   20333 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/flannel-773000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19423-17178/.minikube/machines/flannel-773000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/flannel-773000/qemu.pid -device virtio-net-pci,netdev=net0,mac=12:8d:c8:95:12:70 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/flannel-773000/disk.qcow2
	I0819 11:55:51.474760   20333 main.go:141] libmachine: STDOUT: 
	I0819 11:55:51.474780   20333 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0819 11:55:51.474798   20333 client.go:171] duration metric: took 246.448083ms to LocalClient.Create
	I0819 11:55:53.476915   20333 start.go:128] duration metric: took 2.268586417s to createHost
	I0819 11:55:53.476978   20333 start.go:83] releasing machines lock for "flannel-773000", held for 2.268719666s
	W0819 11:55:53.477004   20333 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0819 11:55:53.482982   20333 out.go:177] * Deleting "flannel-773000" in qemu2 ...
	W0819 11:55:53.499978   20333 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0819 11:55:53.499987   20333 start.go:729] Will try again in 5 seconds ...
	I0819 11:55:58.502272   20333 start.go:360] acquireMachinesLock for flannel-773000: {Name:mk51682daf9d132f21e1aba1b32ed96e7f05425f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 11:55:58.502649   20333 start.go:364] duration metric: took 311.417µs to acquireMachinesLock for "flannel-773000"
	I0819 11:55:58.502744   20333 start.go:93] Provisioning new machine with config: &{Name:flannel-773000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:flannel-773000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0819 11:55:58.502911   20333 start.go:125] createHost starting for "" (driver="qemu2")
	I0819 11:55:58.515556   20333 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0819 11:55:58.555666   20333 start.go:159] libmachine.API.Create for "flannel-773000" (driver="qemu2")
	I0819 11:55:58.555720   20333 client.go:168] LocalClient.Create starting
	I0819 11:55:58.555830   20333 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19423-17178/.minikube/certs/ca.pem
	I0819 11:55:58.555898   20333 main.go:141] libmachine: Decoding PEM data...
	I0819 11:55:58.555917   20333 main.go:141] libmachine: Parsing certificate...
	I0819 11:55:58.555983   20333 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19423-17178/.minikube/certs/cert.pem
	I0819 11:55:58.556022   20333 main.go:141] libmachine: Decoding PEM data...
	I0819 11:55:58.556033   20333 main.go:141] libmachine: Parsing certificate...
	I0819 11:55:58.556548   20333 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19423-17178/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19423-17178/.minikube/cache/iso/arm64/minikube-v1.33.1-1723740674-19452-arm64.iso...
	I0819 11:55:58.713313   20333 main.go:141] libmachine: Creating SSH key...
	I0819 11:55:58.758416   20333 main.go:141] libmachine: Creating Disk image...
	I0819 11:55:58.758426   20333 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0819 11:55:58.758599   20333 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/flannel-773000/disk.qcow2.raw /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/flannel-773000/disk.qcow2
	I0819 11:55:58.767813   20333 main.go:141] libmachine: STDOUT: 
	I0819 11:55:58.767831   20333 main.go:141] libmachine: STDERR: 
	I0819 11:55:58.767891   20333 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/flannel-773000/disk.qcow2 +20000M
	I0819 11:55:58.775713   20333 main.go:141] libmachine: STDOUT: Image resized.
	
	I0819 11:55:58.775731   20333 main.go:141] libmachine: STDERR: 
	I0819 11:55:58.775741   20333 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/flannel-773000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/flannel-773000/disk.qcow2
	I0819 11:55:58.775747   20333 main.go:141] libmachine: Starting QEMU VM...
	I0819 11:55:58.775766   20333 qemu.go:418] Using hvf for hardware acceleration
	I0819 11:55:58.775796   20333 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/flannel-773000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19423-17178/.minikube/machines/flannel-773000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/flannel-773000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ae:1a:b2:c0:4c:23 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/flannel-773000/disk.qcow2
	I0819 11:55:58.777512   20333 main.go:141] libmachine: STDOUT: 
	I0819 11:55:58.777527   20333 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0819 11:55:58.777537   20333 client.go:171] duration metric: took 221.814416ms to LocalClient.Create
	I0819 11:56:00.779714   20333 start.go:128] duration metric: took 2.2768165s to createHost
	I0819 11:56:00.779824   20333 start.go:83] releasing machines lock for "flannel-773000", held for 2.277203458s
	W0819 11:56:00.780149   20333 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p flannel-773000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p flannel-773000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0819 11:56:00.793757   20333 out.go:201] 
	W0819 11:56:00.796750   20333 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0819 11:56:00.796771   20333 out.go:270] * 
	* 
	W0819 11:56:00.798794   20333 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0819 11:56:00.813713   20333 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/flannel/Start (9.74s)
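
The stderr above also shows minikube's recovery path: after the first "Connection refused" it deletes the half-created profile, waits 5 seconds, retries createHost once, and only then exits with status 80 (GUEST_PROVISION). The failing step can be reproduced without minikube at all, since socket_vmnet_client connects to the socket and then execs the rest of its argv; a sketch using the exact paths from the log:

	# Fails with the same 'Failed to connect to "/var/run/socket_vmnet": Connection refused'
	# while the daemon is down, and exits 0 once the socket is reachable.
	/opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet true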

TestNetworkPlugins/group/enable-default-cni/Start (9.7s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p enable-default-cni-773000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p enable-default-cni-773000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=qemu2 : exit status 80 (9.699809792s)

-- stdout --
	* [enable-default-cni-773000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19423
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19423-17178/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19423-17178/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "enable-default-cni-773000" primary control-plane node in "enable-default-cni-773000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "enable-default-cni-773000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0819 11:56:03.234187   20451 out.go:345] Setting OutFile to fd 1 ...
	I0819 11:56:03.234546   20451 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 11:56:03.234551   20451 out.go:358] Setting ErrFile to fd 2...
	I0819 11:56:03.234554   20451 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 11:56:03.234732   20451 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19423-17178/.minikube/bin
	I0819 11:56:03.236105   20451 out.go:352] Setting JSON to false
	I0819 11:56:03.253114   20451 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":8730,"bootTime":1724085033,"procs":506,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0819 11:56:03.253212   20451 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0819 11:56:03.259049   20451 out.go:177] * [enable-default-cni-773000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0819 11:56:03.263114   20451 out.go:177]   - MINIKUBE_LOCATION=19423
	I0819 11:56:03.263157   20451 notify.go:220] Checking for updates...
	I0819 11:56:03.272073   20451 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19423-17178/kubeconfig
	I0819 11:56:03.275101   20451 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0819 11:56:03.279105   20451 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0819 11:56:03.282113   20451 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19423-17178/.minikube
	I0819 11:56:03.285030   20451 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0819 11:56:03.288502   20451 config.go:182] Loaded profile config "multinode-587000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0819 11:56:03.288563   20451 config.go:182] Loaded profile config "stopped-upgrade-604000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0819 11:56:03.288609   20451 driver.go:394] Setting default libvirt URI to qemu:///system
	I0819 11:56:03.292993   20451 out.go:177] * Using the qemu2 driver based on user configuration
	I0819 11:56:03.300089   20451 start.go:297] selected driver: qemu2
	I0819 11:56:03.300095   20451 start.go:901] validating driver "qemu2" against <nil>
	I0819 11:56:03.300101   20451 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0819 11:56:03.302478   20451 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0819 11:56:03.305017   20451 out.go:177] * Automatically selected the socket_vmnet network
	E0819 11:56:03.308105   20451 start_flags.go:464] Found deprecated --enable-default-cni flag, setting --cni=bridge
	I0819 11:56:03.308118   20451 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0819 11:56:03.308150   20451 cni.go:84] Creating CNI manager for "bridge"
	I0819 11:56:03.308155   20451 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0819 11:56:03.308186   20451 start.go:340] cluster config:
	{Name:enable-default-cni-773000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:enable-default-cni-773000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 11:56:03.311770   20451 iso.go:125] acquiring lock: {Name:mk19d2f9dcacbbe3e95275f970a96b2d84f09461 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 11:56:03.317126   20451 out.go:177] * Starting "enable-default-cni-773000" primary control-plane node in "enable-default-cni-773000" cluster
	I0819 11:56:03.321070   20451 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0819 11:56:03.321082   20451 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19423-17178/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0819 11:56:03.321090   20451 cache.go:56] Caching tarball of preloaded images
	I0819 11:56:03.321153   20451 preload.go:172] Found /Users/jenkins/minikube-integration/19423-17178/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0819 11:56:03.321158   20451 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0819 11:56:03.321213   20451 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19423-17178/.minikube/profiles/enable-default-cni-773000/config.json ...
	I0819 11:56:03.321222   20451 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19423-17178/.minikube/profiles/enable-default-cni-773000/config.json: {Name:mk5935196348fe617261da1027d334a92db39de7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 11:56:03.321544   20451 start.go:360] acquireMachinesLock for enable-default-cni-773000: {Name:mk51682daf9d132f21e1aba1b32ed96e7f05425f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 11:56:03.321577   20451 start.go:364] duration metric: took 24.666µs to acquireMachinesLock for "enable-default-cni-773000"
	I0819 11:56:03.321587   20451 start.go:93] Provisioning new machine with config: &{Name:enable-default-cni-773000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:enable-default-cni-773000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0819 11:56:03.321611   20451 start.go:125] createHost starting for "" (driver="qemu2")
	I0819 11:56:03.326041   20451 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0819 11:56:03.340971   20451 start.go:159] libmachine.API.Create for "enable-default-cni-773000" (driver="qemu2")
	I0819 11:56:03.341006   20451 client.go:168] LocalClient.Create starting
	I0819 11:56:03.341066   20451 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19423-17178/.minikube/certs/ca.pem
	I0819 11:56:03.341096   20451 main.go:141] libmachine: Decoding PEM data...
	I0819 11:56:03.341116   20451 main.go:141] libmachine: Parsing certificate...
	I0819 11:56:03.341153   20451 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19423-17178/.minikube/certs/cert.pem
	I0819 11:56:03.341175   20451 main.go:141] libmachine: Decoding PEM data...
	I0819 11:56:03.341182   20451 main.go:141] libmachine: Parsing certificate...
	I0819 11:56:03.341653   20451 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19423-17178/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19423-17178/.minikube/cache/iso/arm64/minikube-v1.33.1-1723740674-19452-arm64.iso...
	I0819 11:56:03.491900   20451 main.go:141] libmachine: Creating SSH key...
	I0819 11:56:03.528608   20451 main.go:141] libmachine: Creating Disk image...
	I0819 11:56:03.528613   20451 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0819 11:56:03.528829   20451 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/enable-default-cni-773000/disk.qcow2.raw /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/enable-default-cni-773000/disk.qcow2
	I0819 11:56:03.538435   20451 main.go:141] libmachine: STDOUT: 
	I0819 11:56:03.538455   20451 main.go:141] libmachine: STDERR: 
	I0819 11:56:03.538507   20451 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/enable-default-cni-773000/disk.qcow2 +20000M
	I0819 11:56:03.546654   20451 main.go:141] libmachine: STDOUT: Image resized.
	
	I0819 11:56:03.546669   20451 main.go:141] libmachine: STDERR: 
	I0819 11:56:03.546687   20451 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/enable-default-cni-773000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/enable-default-cni-773000/disk.qcow2
	I0819 11:56:03.546696   20451 main.go:141] libmachine: Starting QEMU VM...
	I0819 11:56:03.546710   20451 qemu.go:418] Using hvf for hardware acceleration
	I0819 11:56:03.546735   20451 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/enable-default-cni-773000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19423-17178/.minikube/machines/enable-default-cni-773000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/enable-default-cni-773000/qemu.pid -device virtio-net-pci,netdev=net0,mac=3a:6c:af:0c:65:56 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/enable-default-cni-773000/disk.qcow2
	I0819 11:56:03.548363   20451 main.go:141] libmachine: STDOUT: 
	I0819 11:56:03.548380   20451 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0819 11:56:03.548397   20451 client.go:171] duration metric: took 207.391541ms to LocalClient.Create
	I0819 11:56:05.550488   20451 start.go:128] duration metric: took 2.228908667s to createHost
	I0819 11:56:05.550527   20451 start.go:83] releasing machines lock for "enable-default-cni-773000", held for 2.228990666s
	W0819 11:56:05.550583   20451 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0819 11:56:05.556071   20451 out.go:177] * Deleting "enable-default-cni-773000" in qemu2 ...
	W0819 11:56:05.582071   20451 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0819 11:56:05.582090   20451 start.go:729] Will try again in 5 seconds ...
	I0819 11:56:10.584235   20451 start.go:360] acquireMachinesLock for enable-default-cni-773000: {Name:mk51682daf9d132f21e1aba1b32ed96e7f05425f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 11:56:10.584827   20451 start.go:364] duration metric: took 476.375µs to acquireMachinesLock for "enable-default-cni-773000"
	I0819 11:56:10.584966   20451 start.go:93] Provisioning new machine with config: &{Name:enable-default-cni-773000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:enable-default-cni-773000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0819 11:56:10.585235   20451 start.go:125] createHost starting for "" (driver="qemu2")
	I0819 11:56:10.593918   20451 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0819 11:56:10.643908   20451 start.go:159] libmachine.API.Create for "enable-default-cni-773000" (driver="qemu2")
	I0819 11:56:10.643959   20451 client.go:168] LocalClient.Create starting
	I0819 11:56:10.644076   20451 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19423-17178/.minikube/certs/ca.pem
	I0819 11:56:10.644149   20451 main.go:141] libmachine: Decoding PEM data...
	I0819 11:56:10.644172   20451 main.go:141] libmachine: Parsing certificate...
	I0819 11:56:10.644239   20451 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19423-17178/.minikube/certs/cert.pem
	I0819 11:56:10.644284   20451 main.go:141] libmachine: Decoding PEM data...
	I0819 11:56:10.644299   20451 main.go:141] libmachine: Parsing certificate...
	I0819 11:56:10.644900   20451 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19423-17178/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19423-17178/.minikube/cache/iso/arm64/minikube-v1.33.1-1723740674-19452-arm64.iso...
	I0819 11:56:10.803737   20451 main.go:141] libmachine: Creating SSH key...
	I0819 11:56:10.849595   20451 main.go:141] libmachine: Creating Disk image...
	I0819 11:56:10.849602   20451 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0819 11:56:10.849782   20451 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/enable-default-cni-773000/disk.qcow2.raw /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/enable-default-cni-773000/disk.qcow2
	I0819 11:56:10.859026   20451 main.go:141] libmachine: STDOUT: 
	I0819 11:56:10.859045   20451 main.go:141] libmachine: STDERR: 
	I0819 11:56:10.859097   20451 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/enable-default-cni-773000/disk.qcow2 +20000M
	I0819 11:56:10.866887   20451 main.go:141] libmachine: STDOUT: Image resized.
	
	I0819 11:56:10.866902   20451 main.go:141] libmachine: STDERR: 
	I0819 11:56:10.866918   20451 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/enable-default-cni-773000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/enable-default-cni-773000/disk.qcow2
	I0819 11:56:10.866923   20451 main.go:141] libmachine: Starting QEMU VM...
	I0819 11:56:10.866936   20451 qemu.go:418] Using hvf for hardware acceleration
	I0819 11:56:10.866972   20451 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/enable-default-cni-773000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19423-17178/.minikube/machines/enable-default-cni-773000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/enable-default-cni-773000/qemu.pid -device virtio-net-pci,netdev=net0,mac=0e:2d:f8:56:ba:55 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/enable-default-cni-773000/disk.qcow2
	I0819 11:56:10.868551   20451 main.go:141] libmachine: STDOUT: 
	I0819 11:56:10.868568   20451 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0819 11:56:10.868587   20451 client.go:171] duration metric: took 224.627834ms to LocalClient.Create
	I0819 11:56:12.870265   20451 start.go:128] duration metric: took 2.285044167s to createHost
	I0819 11:56:12.870277   20451 start.go:83] releasing machines lock for "enable-default-cni-773000", held for 2.285470417s
	W0819 11:56:12.870355   20451 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p enable-default-cni-773000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p enable-default-cni-773000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0819 11:56:12.881672   20451 out.go:201] 
	W0819 11:56:12.885456   20451 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0819 11:56:12.885476   20451 out.go:270] * 
	* 
	W0819 11:56:12.885921   20451 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0819 11:56:12.897634   20451 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/enable-default-cni/Start (9.70s)
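
Note the E-line in this run's stderr (start_flags.go:464): --enable-default-cni is deprecated and is rewritten to --cni=bridge before the cluster config is built, which is why the config dump shows CNI:bridge. Once the socket_vmnet daemon is reachable, the equivalent non-deprecated invocation is the test command with only the CNI flag swapped:

	out/minikube-darwin-arm64 start -p enable-default-cni-773000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=qemu2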

TestNetworkPlugins/group/bridge/Start (10.05s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p bridge-773000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p bridge-773000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=qemu2 : exit status 80 (10.052504958s)

-- stdout --
	* [bridge-773000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19423
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19423-17178/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19423-17178/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "bridge-773000" primary control-plane node in "bridge-773000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "bridge-773000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0819 11:56:15.083132   20562 out.go:345] Setting OutFile to fd 1 ...
	I0819 11:56:15.083287   20562 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 11:56:15.083290   20562 out.go:358] Setting ErrFile to fd 2...
	I0819 11:56:15.083293   20562 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 11:56:15.083433   20562 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19423-17178/.minikube/bin
	I0819 11:56:15.084533   20562 out.go:352] Setting JSON to false
	I0819 11:56:15.101060   20562 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":8742,"bootTime":1724085033,"procs":508,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0819 11:56:15.101152   20562 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0819 11:56:15.109190   20562 out.go:177] * [bridge-773000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0819 11:56:15.116122   20562 out.go:177]   - MINIKUBE_LOCATION=19423
	I0819 11:56:15.116181   20562 notify.go:220] Checking for updates...
	I0819 11:56:15.123080   20562 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19423-17178/kubeconfig
	I0819 11:56:15.126104   20562 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0819 11:56:15.129085   20562 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0819 11:56:15.132122   20562 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19423-17178/.minikube
	I0819 11:56:15.135039   20562 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0819 11:56:15.138431   20562 config.go:182] Loaded profile config "multinode-587000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0819 11:56:15.138496   20562 config.go:182] Loaded profile config "stopped-upgrade-604000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0819 11:56:15.138543   20562 driver.go:394] Setting default libvirt URI to qemu:///system
	I0819 11:56:15.143037   20562 out.go:177] * Using the qemu2 driver based on user configuration
	I0819 11:56:15.150060   20562 start.go:297] selected driver: qemu2
	I0819 11:56:15.150066   20562 start.go:901] validating driver "qemu2" against <nil>
	I0819 11:56:15.150072   20562 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0819 11:56:15.152276   20562 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0819 11:56:15.156018   20562 out.go:177] * Automatically selected the socket_vmnet network
	I0819 11:56:15.159157   20562 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0819 11:56:15.159183   20562 cni.go:84] Creating CNI manager for "bridge"
	I0819 11:56:15.159187   20562 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0819 11:56:15.159221   20562 start.go:340] cluster config:
	{Name:bridge-773000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:bridge-773000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:
docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_
vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 11:56:15.162744   20562 iso.go:125] acquiring lock: {Name:mk19d2f9dcacbbe3e95275f970a96b2d84f09461 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 11:56:15.170081   20562 out.go:177] * Starting "bridge-773000" primary control-plane node in "bridge-773000" cluster
	I0819 11:56:15.174046   20562 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0819 11:56:15.174063   20562 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19423-17178/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0819 11:56:15.174075   20562 cache.go:56] Caching tarball of preloaded images
	I0819 11:56:15.174147   20562 preload.go:172] Found /Users/jenkins/minikube-integration/19423-17178/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0819 11:56:15.174151   20562 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0819 11:56:15.174221   20562 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19423-17178/.minikube/profiles/bridge-773000/config.json ...
	I0819 11:56:15.174231   20562 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19423-17178/.minikube/profiles/bridge-773000/config.json: {Name:mk130c5e67a9ab5c0f2cbdec4b20454dcc815560 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 11:56:15.174476   20562 start.go:360] acquireMachinesLock for bridge-773000: {Name:mk51682daf9d132f21e1aba1b32ed96e7f05425f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 11:56:15.174505   20562 start.go:364] duration metric: took 24.666µs to acquireMachinesLock for "bridge-773000"
	I0819 11:56:15.174516   20562 start.go:93] Provisioning new machine with config: &{Name:bridge-773000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kube
rnetesVersion:v1.31.0 ClusterName:bridge-773000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions
:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0819 11:56:15.174550   20562 start.go:125] createHost starting for "" (driver="qemu2")
	I0819 11:56:15.178044   20562 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0819 11:56:15.193108   20562 start.go:159] libmachine.API.Create for "bridge-773000" (driver="qemu2")
	I0819 11:56:15.193130   20562 client.go:168] LocalClient.Create starting
	I0819 11:56:15.193186   20562 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19423-17178/.minikube/certs/ca.pem
	I0819 11:56:15.193216   20562 main.go:141] libmachine: Decoding PEM data...
	I0819 11:56:15.193224   20562 main.go:141] libmachine: Parsing certificate...
	I0819 11:56:15.193263   20562 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19423-17178/.minikube/certs/cert.pem
	I0819 11:56:15.193285   20562 main.go:141] libmachine: Decoding PEM data...
	I0819 11:56:15.193299   20562 main.go:141] libmachine: Parsing certificate...
	I0819 11:56:15.193616   20562 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19423-17178/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19423-17178/.minikube/cache/iso/arm64/minikube-v1.33.1-1723740674-19452-arm64.iso...
	I0819 11:56:15.343120   20562 main.go:141] libmachine: Creating SSH key...
	I0819 11:56:15.586374   20562 main.go:141] libmachine: Creating Disk image...
	I0819 11:56:15.586386   20562 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0819 11:56:15.586585   20562 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/bridge-773000/disk.qcow2.raw /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/bridge-773000/disk.qcow2
	I0819 11:56:15.596168   20562 main.go:141] libmachine: STDOUT: 
	I0819 11:56:15.596200   20562 main.go:141] libmachine: STDERR: 
	I0819 11:56:15.596251   20562 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/bridge-773000/disk.qcow2 +20000M
	I0819 11:56:15.604287   20562 main.go:141] libmachine: STDOUT: Image resized.
	
	I0819 11:56:15.604307   20562 main.go:141] libmachine: STDERR: 
	I0819 11:56:15.604324   20562 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/bridge-773000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/bridge-773000/disk.qcow2
	I0819 11:56:15.604333   20562 main.go:141] libmachine: Starting QEMU VM...
	I0819 11:56:15.604345   20562 qemu.go:418] Using hvf for hardware acceleration
	I0819 11:56:15.604378   20562 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/bridge-773000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19423-17178/.minikube/machines/bridge-773000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/bridge-773000/qemu.pid -device virtio-net-pci,netdev=net0,mac=a2:6d:a1:2c:e1:d1 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/bridge-773000/disk.qcow2
	I0819 11:56:15.606011   20562 main.go:141] libmachine: STDOUT: 
	I0819 11:56:15.606027   20562 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0819 11:56:15.606047   20562 client.go:171] duration metric: took 412.922375ms to LocalClient.Create
	I0819 11:56:17.608213   20562 start.go:128] duration metric: took 2.433672833s to createHost
	I0819 11:56:17.608292   20562 start.go:83] releasing machines lock for "bridge-773000", held for 2.43383s
	W0819 11:56:17.608418   20562 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0819 11:56:17.621597   20562 out.go:177] * Deleting "bridge-773000" in qemu2 ...
	W0819 11:56:17.654666   20562 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0819 11:56:17.654697   20562 start.go:729] Will try again in 5 seconds ...
	I0819 11:56:22.656890   20562 start.go:360] acquireMachinesLock for bridge-773000: {Name:mk51682daf9d132f21e1aba1b32ed96e7f05425f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 11:56:22.657576   20562 start.go:364] duration metric: took 557.917µs to acquireMachinesLock for "bridge-773000"
	I0819 11:56:22.657657   20562 start.go:93] Provisioning new machine with config: &{Name:bridge-773000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kube
rnetesVersion:v1.31.0 ClusterName:bridge-773000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions
:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0819 11:56:22.657923   20562 start.go:125] createHost starting for "" (driver="qemu2")
	I0819 11:56:22.668592   20562 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0819 11:56:22.718685   20562 start.go:159] libmachine.API.Create for "bridge-773000" (driver="qemu2")
	I0819 11:56:22.718745   20562 client.go:168] LocalClient.Create starting
	I0819 11:56:22.718858   20562 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19423-17178/.minikube/certs/ca.pem
	I0819 11:56:22.718936   20562 main.go:141] libmachine: Decoding PEM data...
	I0819 11:56:22.718952   20562 main.go:141] libmachine: Parsing certificate...
	I0819 11:56:22.719013   20562 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19423-17178/.minikube/certs/cert.pem
	I0819 11:56:22.719057   20562 main.go:141] libmachine: Decoding PEM data...
	I0819 11:56:22.719072   20562 main.go:141] libmachine: Parsing certificate...
	I0819 11:56:22.719711   20562 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19423-17178/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19423-17178/.minikube/cache/iso/arm64/minikube-v1.33.1-1723740674-19452-arm64.iso...
	I0819 11:56:22.881542   20562 main.go:141] libmachine: Creating SSH key...
	I0819 11:56:23.043603   20562 main.go:141] libmachine: Creating Disk image...
	I0819 11:56:23.043617   20562 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0819 11:56:23.043860   20562 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/bridge-773000/disk.qcow2.raw /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/bridge-773000/disk.qcow2
	I0819 11:56:23.053381   20562 main.go:141] libmachine: STDOUT: 
	I0819 11:56:23.053413   20562 main.go:141] libmachine: STDERR: 
	I0819 11:56:23.053475   20562 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/bridge-773000/disk.qcow2 +20000M
	I0819 11:56:23.061590   20562 main.go:141] libmachine: STDOUT: Image resized.
	
	I0819 11:56:23.061604   20562 main.go:141] libmachine: STDERR: 
	I0819 11:56:23.061618   20562 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/bridge-773000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/bridge-773000/disk.qcow2
	I0819 11:56:23.061625   20562 main.go:141] libmachine: Starting QEMU VM...
	I0819 11:56:23.061635   20562 qemu.go:418] Using hvf for hardware acceleration
	I0819 11:56:23.061675   20562 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/bridge-773000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19423-17178/.minikube/machines/bridge-773000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/bridge-773000/qemu.pid -device virtio-net-pci,netdev=net0,mac=3a:81:06:75:7b:75 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/bridge-773000/disk.qcow2
	I0819 11:56:23.063289   20562 main.go:141] libmachine: STDOUT: 
	I0819 11:56:23.063305   20562 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0819 11:56:23.063320   20562 client.go:171] duration metric: took 344.576125ms to LocalClient.Create
	I0819 11:56:25.065463   20562 start.go:128] duration metric: took 2.407558292s to createHost
	I0819 11:56:25.065536   20562 start.go:83] releasing machines lock for "bridge-773000", held for 2.407981875s
	W0819 11:56:25.065895   20562 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p bridge-773000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p bridge-773000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0819 11:56:25.075649   20562 out.go:201] 
	W0819 11:56:25.081713   20562 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0819 11:56:25.081859   20562 out.go:270] * 
	* 
	W0819 11:56:25.083914   20562 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0819 11:56:25.093621   20562 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/bridge/Start (10.05s)
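Note: the command line in the stderr above also shows how the socket is consumed: socket_vmnet_client connects to /var/run/socket_vmnet and then runs qemu-system-aarch64 with the connected socket inherited as file descriptor 3, which QEMU picks up via -netdev socket,id=net0,fd=3. When the connect fails, QEMU never runs at all, which is why STDOUT is empty and the disk-image steps (qemu-img convert and qemu-img resize, both logged as succeeding) are the last things that work. Once the daemon is back, the handoff can be exercised by hand with a trimmed form of the logged command (illustrative only, not a supported entry point):

	/opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet \
	  qemu-system-aarch64 -M virt,highmem=off -cpu host -accel hvf -m 3072 -smp 2 \
	  -device virtio-net-pci,netdev=net0 -netdev socket,id=net0,fd=3 \
	  -display none disk.qcow2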

TestNetworkPlugins/group/kubenet/Start (9.93s)

=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p kubenet-773000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubenet-773000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=qemu2 : exit status 80 (9.929924s)

-- stdout --
	* [kubenet-773000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19423
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19423-17178/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19423-17178/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "kubenet-773000" primary control-plane node in "kubenet-773000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kubenet-773000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0819 11:56:27.295686   20677 out.go:345] Setting OutFile to fd 1 ...
	I0819 11:56:27.295809   20677 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 11:56:27.295812   20677 out.go:358] Setting ErrFile to fd 2...
	I0819 11:56:27.295815   20677 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 11:56:27.295949   20677 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19423-17178/.minikube/bin
	I0819 11:56:27.297011   20677 out.go:352] Setting JSON to false
	I0819 11:56:27.314496   20677 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":8754,"bootTime":1724085033,"procs":508,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0819 11:56:27.314567   20677 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0819 11:56:27.320582   20677 out.go:177] * [kubenet-773000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0819 11:56:27.327436   20677 out.go:177]   - MINIKUBE_LOCATION=19423
	I0819 11:56:27.327476   20677 notify.go:220] Checking for updates...
	I0819 11:56:27.334462   20677 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19423-17178/kubeconfig
	I0819 11:56:27.337422   20677 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0819 11:56:27.341433   20677 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0819 11:56:27.344409   20677 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19423-17178/.minikube
	I0819 11:56:27.347448   20677 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0819 11:56:27.350879   20677 config.go:182] Loaded profile config "multinode-587000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0819 11:56:27.350946   20677 config.go:182] Loaded profile config "stopped-upgrade-604000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0819 11:56:27.350993   20677 driver.go:394] Setting default libvirt URI to qemu:///system
	I0819 11:56:27.355419   20677 out.go:177] * Using the qemu2 driver based on user configuration
	I0819 11:56:27.362440   20677 start.go:297] selected driver: qemu2
	I0819 11:56:27.362449   20677 start.go:901] validating driver "qemu2" against <nil>
	I0819 11:56:27.362456   20677 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0819 11:56:27.364782   20677 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0819 11:56:27.368420   20677 out.go:177] * Automatically selected the socket_vmnet network
	I0819 11:56:27.371535   20677 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0819 11:56:27.371570   20677 cni.go:80] network plugin configured as "kubenet", returning disabled
	I0819 11:56:27.371617   20677 start.go:340] cluster config:
	{Name:kubenet-773000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:kubenet-773000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntim
e:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_
vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 11:56:27.375220   20677 iso.go:125] acquiring lock: {Name:mk19d2f9dcacbbe3e95275f970a96b2d84f09461 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 11:56:27.383385   20677 out.go:177] * Starting "kubenet-773000" primary control-plane node in "kubenet-773000" cluster
	I0819 11:56:27.387454   20677 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0819 11:56:27.387468   20677 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19423-17178/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0819 11:56:27.387476   20677 cache.go:56] Caching tarball of preloaded images
	I0819 11:56:27.387542   20677 preload.go:172] Found /Users/jenkins/minikube-integration/19423-17178/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0819 11:56:27.387547   20677 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0819 11:56:27.387602   20677 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19423-17178/.minikube/profiles/kubenet-773000/config.json ...
	I0819 11:56:27.387612   20677 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19423-17178/.minikube/profiles/kubenet-773000/config.json: {Name:mk576bd55470ffb7e2adc3f995380a47ba8dce4b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 11:56:27.387824   20677 start.go:360] acquireMachinesLock for kubenet-773000: {Name:mk51682daf9d132f21e1aba1b32ed96e7f05425f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 11:56:27.387865   20677 start.go:364] duration metric: took 36.042µs to acquireMachinesLock for "kubenet-773000"
	I0819 11:56:27.387876   20677 start.go:93] Provisioning new machine with config: &{Name:kubenet-773000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kub
ernetesVersion:v1.31.0 ClusterName:kubenet-773000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions
:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0819 11:56:27.387908   20677 start.go:125] createHost starting for "" (driver="qemu2")
	I0819 11:56:27.396418   20677 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0819 11:56:27.411540   20677 start.go:159] libmachine.API.Create for "kubenet-773000" (driver="qemu2")
	I0819 11:56:27.411557   20677 client.go:168] LocalClient.Create starting
	I0819 11:56:27.411616   20677 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19423-17178/.minikube/certs/ca.pem
	I0819 11:56:27.411652   20677 main.go:141] libmachine: Decoding PEM data...
	I0819 11:56:27.411662   20677 main.go:141] libmachine: Parsing certificate...
	I0819 11:56:27.411699   20677 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19423-17178/.minikube/certs/cert.pem
	I0819 11:56:27.411726   20677 main.go:141] libmachine: Decoding PEM data...
	I0819 11:56:27.411734   20677 main.go:141] libmachine: Parsing certificate...
	I0819 11:56:27.412114   20677 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19423-17178/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19423-17178/.minikube/cache/iso/arm64/minikube-v1.33.1-1723740674-19452-arm64.iso...
	I0819 11:56:27.565961   20677 main.go:141] libmachine: Creating SSH key...
	I0819 11:56:27.671503   20677 main.go:141] libmachine: Creating Disk image...
	I0819 11:56:27.671513   20677 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0819 11:56:27.671742   20677 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/kubenet-773000/disk.qcow2.raw /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/kubenet-773000/disk.qcow2
	I0819 11:56:27.681497   20677 main.go:141] libmachine: STDOUT: 
	I0819 11:56:27.681516   20677 main.go:141] libmachine: STDERR: 
	I0819 11:56:27.681560   20677 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/kubenet-773000/disk.qcow2 +20000M
	I0819 11:56:27.689549   20677 main.go:141] libmachine: STDOUT: Image resized.
	
	I0819 11:56:27.689563   20677 main.go:141] libmachine: STDERR: 
	I0819 11:56:27.689578   20677 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/kubenet-773000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/kubenet-773000/disk.qcow2
	I0819 11:56:27.689583   20677 main.go:141] libmachine: Starting QEMU VM...
	I0819 11:56:27.689597   20677 qemu.go:418] Using hvf for hardware acceleration
	I0819 11:56:27.689624   20677 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/kubenet-773000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19423-17178/.minikube/machines/kubenet-773000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/kubenet-773000/qemu.pid -device virtio-net-pci,netdev=net0,mac=f2:4c:27:c4:bc:f1 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/kubenet-773000/disk.qcow2
	I0819 11:56:27.691207   20677 main.go:141] libmachine: STDOUT: 
	I0819 11:56:27.691222   20677 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0819 11:56:27.691241   20677 client.go:171] duration metric: took 279.686375ms to LocalClient.Create
	I0819 11:56:29.693324   20677 start.go:128] duration metric: took 2.305449625s to createHost
	I0819 11:56:29.693378   20677 start.go:83] releasing machines lock for "kubenet-773000", held for 2.305555333s
	W0819 11:56:29.693421   20677 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0819 11:56:29.700979   20677 out.go:177] * Deleting "kubenet-773000" in qemu2 ...
	W0819 11:56:29.731797   20677 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0819 11:56:29.731813   20677 start.go:729] Will try again in 5 seconds ...
	I0819 11:56:34.733946   20677 start.go:360] acquireMachinesLock for kubenet-773000: {Name:mk51682daf9d132f21e1aba1b32ed96e7f05425f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 11:56:34.734606   20677 start.go:364] duration metric: took 540.416µs to acquireMachinesLock for "kubenet-773000"
	I0819 11:56:34.734800   20677 start.go:93] Provisioning new machine with config: &{Name:kubenet-773000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kub
ernetesVersion:v1.31.0 ClusterName:kubenet-773000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions
:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0819 11:56:34.735175   20677 start.go:125] createHost starting for "" (driver="qemu2")
	I0819 11:56:34.745059   20677 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0819 11:56:34.795131   20677 start.go:159] libmachine.API.Create for "kubenet-773000" (driver="qemu2")
	I0819 11:56:34.795194   20677 client.go:168] LocalClient.Create starting
	I0819 11:56:34.795315   20677 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19423-17178/.minikube/certs/ca.pem
	I0819 11:56:34.795378   20677 main.go:141] libmachine: Decoding PEM data...
	I0819 11:56:34.795393   20677 main.go:141] libmachine: Parsing certificate...
	I0819 11:56:34.795457   20677 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19423-17178/.minikube/certs/cert.pem
	I0819 11:56:34.795500   20677 main.go:141] libmachine: Decoding PEM data...
	I0819 11:56:34.795511   20677 main.go:141] libmachine: Parsing certificate...
	I0819 11:56:34.796070   20677 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19423-17178/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19423-17178/.minikube/cache/iso/arm64/minikube-v1.33.1-1723740674-19452-arm64.iso...
	I0819 11:56:34.953544   20677 main.go:141] libmachine: Creating SSH key...
	I0819 11:56:35.146614   20677 main.go:141] libmachine: Creating Disk image...
	I0819 11:56:35.146626   20677 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0819 11:56:35.146885   20677 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/kubenet-773000/disk.qcow2.raw /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/kubenet-773000/disk.qcow2
	I0819 11:56:35.156649   20677 main.go:141] libmachine: STDOUT: 
	I0819 11:56:35.156670   20677 main.go:141] libmachine: STDERR: 
	I0819 11:56:35.156721   20677 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/kubenet-773000/disk.qcow2 +20000M
	I0819 11:56:35.164621   20677 main.go:141] libmachine: STDOUT: Image resized.
	
	I0819 11:56:35.164637   20677 main.go:141] libmachine: STDERR: 
	I0819 11:56:35.164655   20677 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/kubenet-773000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/kubenet-773000/disk.qcow2
	I0819 11:56:35.164658   20677 main.go:141] libmachine: Starting QEMU VM...
	I0819 11:56:35.164668   20677 qemu.go:418] Using hvf for hardware acceleration
	I0819 11:56:35.164696   20677 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/kubenet-773000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19423-17178/.minikube/machines/kubenet-773000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/kubenet-773000/qemu.pid -device virtio-net-pci,netdev=net0,mac=62:71:ba:ba:c7:63 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/kubenet-773000/disk.qcow2
	I0819 11:56:35.166317   20677 main.go:141] libmachine: STDOUT: 
	I0819 11:56:35.166336   20677 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0819 11:56:35.166348   20677 client.go:171] duration metric: took 371.156958ms to LocalClient.Create
	I0819 11:56:37.166661   20677 start.go:128] duration metric: took 2.4315055s to createHost
	I0819 11:56:37.166676   20677 start.go:83] releasing machines lock for "kubenet-773000", held for 2.4320555s
	W0819 11:56:37.166757   20677 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p kubenet-773000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kubenet-773000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0819 11:56:37.174668   20677 out.go:201] 
	W0819 11:56:37.178748   20677 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0819 11:56:37.178757   20677 out.go:270] * 
	* 
	W0819 11:56:37.179376   20677 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0819 11:56:37.188778   20677 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/kubenet/Start (9.93s)
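Note: minikube's own retry ("StartHost failed, but will try again", then "Will try again in 5 seconds") hits the identical error, so re-running these tests is pointless until the daemon is restored. After that, the recovery the tool itself suggests is to clear the half-created profile and start again, e.g. for this test:

	out/minikube-darwin-arm64 delete -p kubenet-773000
	out/minikube-darwin-arm64 start -p kubenet-773000 --memory=3072 --wait=true --network-plugin=kubenet --driver=qemu2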

TestStartStop/group/old-k8s-version/serial/FirstStart (9.94s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p old-k8s-version-374000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p old-k8s-version-374000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0: exit status 80 (9.887937834s)

-- stdout --
	* [old-k8s-version-374000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19423
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19423-17178/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19423-17178/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "old-k8s-version-374000" primary control-plane node in "old-k8s-version-374000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "old-k8s-version-374000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0819 11:56:39.459654   20794 out.go:345] Setting OutFile to fd 1 ...
	I0819 11:56:39.459792   20794 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 11:56:39.459800   20794 out.go:358] Setting ErrFile to fd 2...
	I0819 11:56:39.459802   20794 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 11:56:39.459925   20794 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19423-17178/.minikube/bin
	I0819 11:56:39.461146   20794 out.go:352] Setting JSON to false
	I0819 11:56:39.478589   20794 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":8766,"bootTime":1724085033,"procs":511,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0819 11:56:39.478649   20794 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0819 11:56:39.485258   20794 out.go:177] * [old-k8s-version-374000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0819 11:56:39.497312   20794 out.go:177]   - MINIKUBE_LOCATION=19423
	I0819 11:56:39.497337   20794 notify.go:220] Checking for updates...
	I0819 11:56:39.504282   20794 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19423-17178/kubeconfig
	I0819 11:56:39.507267   20794 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0819 11:56:39.511286   20794 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0819 11:56:39.514323   20794 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19423-17178/.minikube
	I0819 11:56:39.517304   20794 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0819 11:56:39.520695   20794 config.go:182] Loaded profile config "multinode-587000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0819 11:56:39.520768   20794 config.go:182] Loaded profile config "stopped-upgrade-604000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0819 11:56:39.520827   20794 driver.go:394] Setting default libvirt URI to qemu:///system
	I0819 11:56:39.525250   20794 out.go:177] * Using the qemu2 driver based on user configuration
	I0819 11:56:39.532312   20794 start.go:297] selected driver: qemu2
	I0819 11:56:39.532318   20794 start.go:901] validating driver "qemu2" against <nil>
	I0819 11:56:39.532324   20794 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0819 11:56:39.534620   20794 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0819 11:56:39.537274   20794 out.go:177] * Automatically selected the socket_vmnet network
	I0819 11:56:39.540362   20794 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0819 11:56:39.540386   20794 cni.go:84] Creating CNI manager for ""
	I0819 11:56:39.540396   20794 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0819 11:56:39.540430   20794 start.go:340] cluster config:
	{Name:old-k8s-version-374000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-374000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local
ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/
socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 11:56:39.543891   20794 iso.go:125] acquiring lock: {Name:mk19d2f9dcacbbe3e95275f970a96b2d84f09461 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 11:56:39.551111   20794 out.go:177] * Starting "old-k8s-version-374000" primary control-plane node in "old-k8s-version-374000" cluster
	I0819 11:56:39.555257   20794 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0819 11:56:39.555273   20794 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19423-17178/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0819 11:56:39.555280   20794 cache.go:56] Caching tarball of preloaded images
	I0819 11:56:39.555347   20794 preload.go:172] Found /Users/jenkins/minikube-integration/19423-17178/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0819 11:56:39.555353   20794 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0819 11:56:39.555427   20794 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19423-17178/.minikube/profiles/old-k8s-version-374000/config.json ...
	I0819 11:56:39.555445   20794 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19423-17178/.minikube/profiles/old-k8s-version-374000/config.json: {Name:mk3b5c931c60a073cf0ed8aadbae1473f0a72a0e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 11:56:39.555659   20794 start.go:360] acquireMachinesLock for old-k8s-version-374000: {Name:mk51682daf9d132f21e1aba1b32ed96e7f05425f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 11:56:39.555689   20794 start.go:364] duration metric: took 24.125µs to acquireMachinesLock for "old-k8s-version-374000"
	I0819 11:56:39.555700   20794 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-374000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConf
ig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-374000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 Mount
Options:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0819 11:56:39.555727   20794 start.go:125] createHost starting for "" (driver="qemu2")
	I0819 11:56:39.564194   20794 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0819 11:56:39.579049   20794 start.go:159] libmachine.API.Create for "old-k8s-version-374000" (driver="qemu2")
	I0819 11:56:39.579082   20794 client.go:168] LocalClient.Create starting
	I0819 11:56:39.579155   20794 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19423-17178/.minikube/certs/ca.pem
	I0819 11:56:39.579195   20794 main.go:141] libmachine: Decoding PEM data...
	I0819 11:56:39.579205   20794 main.go:141] libmachine: Parsing certificate...
	I0819 11:56:39.579241   20794 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19423-17178/.minikube/certs/cert.pem
	I0819 11:56:39.579269   20794 main.go:141] libmachine: Decoding PEM data...
	I0819 11:56:39.579280   20794 main.go:141] libmachine: Parsing certificate...
	I0819 11:56:39.579608   20794 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19423-17178/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19423-17178/.minikube/cache/iso/arm64/minikube-v1.33.1-1723740674-19452-arm64.iso...
	I0819 11:56:39.729640   20794 main.go:141] libmachine: Creating SSH key...
	I0819 11:56:39.815117   20794 main.go:141] libmachine: Creating Disk image...
	I0819 11:56:39.815128   20794 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0819 11:56:39.815325   20794 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/old-k8s-version-374000/disk.qcow2.raw /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/old-k8s-version-374000/disk.qcow2
	I0819 11:56:39.824586   20794 main.go:141] libmachine: STDOUT: 
	I0819 11:56:39.824603   20794 main.go:141] libmachine: STDERR: 
	I0819 11:56:39.824647   20794 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/old-k8s-version-374000/disk.qcow2 +20000M
	I0819 11:56:39.832473   20794 main.go:141] libmachine: STDOUT: Image resized.
	
	I0819 11:56:39.832493   20794 main.go:141] libmachine: STDERR: 
	I0819 11:56:39.832508   20794 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/old-k8s-version-374000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/old-k8s-version-374000/disk.qcow2
	I0819 11:56:39.832513   20794 main.go:141] libmachine: Starting QEMU VM...
	I0819 11:56:39.832524   20794 qemu.go:418] Using hvf for hardware acceleration
	I0819 11:56:39.832553   20794 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/old-k8s-version-374000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19423-17178/.minikube/machines/old-k8s-version-374000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/old-k8s-version-374000/qemu.pid -device virtio-net-pci,netdev=net0,mac=1a:be:1e:28:d7:f3 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/old-k8s-version-374000/disk.qcow2
	I0819 11:56:39.834178   20794 main.go:141] libmachine: STDOUT: 
	I0819 11:56:39.834198   20794 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0819 11:56:39.834217   20794 client.go:171] duration metric: took 255.136583ms to LocalClient.Create
	I0819 11:56:41.836383   20794 start.go:128] duration metric: took 2.280673s to createHost
	I0819 11:56:41.836463   20794 start.go:83] releasing machines lock for "old-k8s-version-374000", held for 2.280812583s
	W0819 11:56:41.836564   20794 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0819 11:56:41.854014   20794 out.go:177] * Deleting "old-k8s-version-374000" in qemu2 ...
	W0819 11:56:41.882551   20794 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0819 11:56:41.882581   20794 start.go:729] Will try again in 5 seconds ...
	I0819 11:56:46.884424   20794 start.go:360] acquireMachinesLock for old-k8s-version-374000: {Name:mk51682daf9d132f21e1aba1b32ed96e7f05425f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 11:56:46.884968   20794 start.go:364] duration metric: took 432.917µs to acquireMachinesLock for "old-k8s-version-374000"
	I0819 11:56:46.885110   20794 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-374000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConf
ig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-374000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 Mount
Options:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0819 11:56:46.885400   20794 start.go:125] createHost starting for "" (driver="qemu2")
	I0819 11:56:46.895104   20794 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0819 11:56:46.933996   20794 start.go:159] libmachine.API.Create for "old-k8s-version-374000" (driver="qemu2")
	I0819 11:56:46.934048   20794 client.go:168] LocalClient.Create starting
	I0819 11:56:46.934161   20794 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19423-17178/.minikube/certs/ca.pem
	I0819 11:56:46.934229   20794 main.go:141] libmachine: Decoding PEM data...
	I0819 11:56:46.934244   20794 main.go:141] libmachine: Parsing certificate...
	I0819 11:56:46.934306   20794 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19423-17178/.minikube/certs/cert.pem
	I0819 11:56:46.934344   20794 main.go:141] libmachine: Decoding PEM data...
	I0819 11:56:46.934360   20794 main.go:141] libmachine: Parsing certificate...
	I0819 11:56:46.934842   20794 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19423-17178/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19423-17178/.minikube/cache/iso/arm64/minikube-v1.33.1-1723740674-19452-arm64.iso...
	I0819 11:56:47.089902   20794 main.go:141] libmachine: Creating SSH key...
	I0819 11:56:47.260228   20794 main.go:141] libmachine: Creating Disk image...
	I0819 11:56:47.260237   20794 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0819 11:56:47.260503   20794 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/old-k8s-version-374000/disk.qcow2.raw /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/old-k8s-version-374000/disk.qcow2
	I0819 11:56:47.270136   20794 main.go:141] libmachine: STDOUT: 
	I0819 11:56:47.270153   20794 main.go:141] libmachine: STDERR: 
	I0819 11:56:47.270215   20794 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/old-k8s-version-374000/disk.qcow2 +20000M
	I0819 11:56:47.278311   20794 main.go:141] libmachine: STDOUT: Image resized.
	
	I0819 11:56:47.278330   20794 main.go:141] libmachine: STDERR: 
	I0819 11:56:47.278350   20794 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/old-k8s-version-374000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/old-k8s-version-374000/disk.qcow2
	I0819 11:56:47.278354   20794 main.go:141] libmachine: Starting QEMU VM...
	I0819 11:56:47.278365   20794 qemu.go:418] Using hvf for hardware acceleration
	I0819 11:56:47.278394   20794 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/old-k8s-version-374000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19423-17178/.minikube/machines/old-k8s-version-374000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/old-k8s-version-374000/qemu.pid -device virtio-net-pci,netdev=net0,mac=2a:ff:11:a7:d9:6e -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/old-k8s-version-374000/disk.qcow2
	I0819 11:56:47.280061   20794 main.go:141] libmachine: STDOUT: 
	I0819 11:56:47.280076   20794 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0819 11:56:47.280088   20794 client.go:171] duration metric: took 346.041417ms to LocalClient.Create
	I0819 11:56:49.282355   20794 start.go:128] duration metric: took 2.396806917s to createHost
	I0819 11:56:49.282418   20794 start.go:83] releasing machines lock for "old-k8s-version-374000", held for 2.397481084s
	W0819 11:56:49.282816   20794 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-374000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-374000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0819 11:56:49.294421   20794 out.go:201] 
	W0819 11:56:49.298410   20794 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0819 11:56:49.298424   20794 out.go:270] * 
	* 
	W0819 11:56:49.299913   20794 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0819 11:56:49.308391   20794 out.go:201] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p old-k8s-version-374000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-374000 -n old-k8s-version-374000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-374000 -n old-k8s-version-374000: exit status 7 (46.64325ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-374000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/FirstStart (9.94s)
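
Every start attempt above dies at the same step: libmachine shells out to /opt/socket_vmnet/bin/socket_vmnet_client, which cannot reach the socket_vmnet daemon's unix socket ("Failed to connect to "/var/run/socket_vmnet": Connection refused"). A quick way to confirm whether anything is serving that socket on the CI host is to dial it directly; the following is a minimal Go sketch (not part of the test suite; the socket path is taken from the logs above, and reading /var/run may require elevated privileges):

// probe_socket_vmnet.go - minimal sketch: checks whether anything is
// listening on the unix socket the qemu2 driver needs.
package main

import (
	"fmt"
	"net"
	"os"
	"time"
)

func main() {
	const sock = "/var/run/socket_vmnet" // path from the logs above
	conn, err := net.DialTimeout("unix", sock, 2*time.Second)
	if err != nil {
		// A "connection refused" here reproduces the failure in the logs:
		// nothing is listening behind /var/run/socket_vmnet.
		fmt.Fprintf(os.Stderr, "dial %s: %v\n", sock, err)
		os.Exit(1)
	}
	defer conn.Close()
	fmt.Printf("%s is accepting connections\n", sock)
}

A "connection refused" result would mean the socket_vmnet daemon is down on this host, which is consistent with every qemu2 start in this group failing identically before the VM boots.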

TestStartStop/group/old-k8s-version/serial/DeployApp (0.09s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-374000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context old-k8s-version-374000 create -f testdata/busybox.yaml: exit status 1 (29.495292ms)

** stderr ** 
	error: context "old-k8s-version-374000" does not exist

** /stderr **
start_stop_delete_test.go:196: kubectl --context old-k8s-version-374000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-374000 -n old-k8s-version-374000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-374000 -n old-k8s-version-374000: exit status 7 (29.543041ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-374000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-374000 -n old-k8s-version-374000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-374000 -n old-k8s-version-374000: exit status 7 (29.430792ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-374000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/DeployApp (0.09s)
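
The DeployApp failure is purely downstream of FirstStart: the cluster was never created, so the kubeconfig holds no "old-k8s-version-374000" context for kubectl to use. A hypothetical pre-flight check for that situation could look like the Go sketch below (the helper and its flow are ours, not the suite's; it only relies on the standard `kubectl config get-contexts -o name` invocation):

// has_context.go - hypothetical pre-flight sketch: verify a kubeconfig
// context exists before issuing kubectl commands against it.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	want := "old-k8s-version-374000" // profile name from the logs above
	out, err := exec.Command("kubectl", "config", "get-contexts", "-o", "name").Output()
	if err != nil {
		fmt.Fprintln(os.Stderr, "kubectl:", err)
		os.Exit(1)
	}
	for _, ctx := range strings.Split(strings.TrimSpace(string(out)), "\n") {
		if ctx == want {
			fmt.Println("context exists:", want)
			return
		}
	}
	// This branch corresponds to the `error: context ... does not exist`
	// seen in the stderr block above.
	fmt.Fprintf(os.Stderr, "context %q does not exist (cluster was never created)\n", want)
	os.Exit(1)
}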

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.11s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p old-k8s-version-374000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-374000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context old-k8s-version-374000 describe deploy/metrics-server -n kube-system: exit status 1 (26.616416ms)

** stderr ** 
	error: context "old-k8s-version-374000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-374000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-374000 -n old-k8s-version-374000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-374000 -n old-k8s-version-374000: exit status 7 (30.632458ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-374000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.11s)
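
The assertion at start_stop_delete_test.go:221 combines the --registries override (fake.domain) with the configured image (registry.k8s.io/echoserver:1.4) and checks that the `kubectl describe` output contains the resulting reference; with no cluster to query, the deployment info is empty and the containment check fails. Roughly, as a sketch of that check (variable names are ours):

// addon_image_check.go - sketch of the containment assertion.
package main

import (
	"fmt"
	"strings"
)

func main() {
	// Expected reference built from --registries=MetricsServer=fake.domain
	// and --images=MetricsServer=registry.k8s.io/echoserver:1.4.
	want := " fake.domain/registry.k8s.io/echoserver:1.4"
	deployInfo := "" // empty here: `kubectl describe` had no cluster to talk to
	if !strings.Contains(deployInfo, want) {
		fmt.Printf("addon did not load correct image. Expected to contain %q\n", want)
	}
}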

TestStartStop/group/old-k8s-version/serial/SecondStart (5.27s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p old-k8s-version-374000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p old-k8s-version-374000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0: exit status 80 (5.200052958s)

-- stdout --
	* [old-k8s-version-374000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19423
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19423-17178/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19423-17178/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.31.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.0
	* Using the qemu2 driver based on existing profile
	* Starting "old-k8s-version-374000" primary control-plane node in "old-k8s-version-374000" cluster
	* Restarting existing qemu2 VM for "old-k8s-version-374000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "old-k8s-version-374000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0819 11:56:51.612926   20837 out.go:345] Setting OutFile to fd 1 ...
	I0819 11:56:51.613087   20837 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 11:56:51.613090   20837 out.go:358] Setting ErrFile to fd 2...
	I0819 11:56:51.613093   20837 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 11:56:51.613221   20837 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19423-17178/.minikube/bin
	I0819 11:56:51.614390   20837 out.go:352] Setting JSON to false
	I0819 11:56:51.631097   20837 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":8778,"bootTime":1724085033,"procs":508,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0819 11:56:51.631181   20837 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0819 11:56:51.635507   20837 out.go:177] * [old-k8s-version-374000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0819 11:56:51.642431   20837 out.go:177]   - MINIKUBE_LOCATION=19423
	I0819 11:56:51.642492   20837 notify.go:220] Checking for updates...
	I0819 11:56:51.649313   20837 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19423-17178/kubeconfig
	I0819 11:56:51.652420   20837 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0819 11:56:51.655438   20837 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0819 11:56:51.656853   20837 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19423-17178/.minikube
	I0819 11:56:51.660421   20837 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0819 11:56:51.663799   20837 config.go:182] Loaded profile config "old-k8s-version-374000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I0819 11:56:51.667422   20837 out.go:177] * Kubernetes 1.31.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.0
	I0819 11:56:51.670387   20837 driver.go:394] Setting default libvirt URI to qemu:///system
	I0819 11:56:51.674372   20837 out.go:177] * Using the qemu2 driver based on existing profile
	I0819 11:56:51.681398   20837 start.go:297] selected driver: qemu2
	I0819 11:56:51.681405   20837 start.go:901] validating driver "qemu2" against &{Name:old-k8s-version-374000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:
{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-374000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:
0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 11:56:51.681473   20837 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0819 11:56:51.683753   20837 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0819 11:56:51.683777   20837 cni.go:84] Creating CNI manager for ""
	I0819 11:56:51.683785   20837 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0819 11:56:51.683806   20837 start.go:340] cluster config:
	{Name:old-k8s-version-374000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-374000 Namespace:default
APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount
9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 11:56:51.687147   20837 iso.go:125] acquiring lock: {Name:mk19d2f9dcacbbe3e95275f970a96b2d84f09461 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 11:56:51.695350   20837 out.go:177] * Starting "old-k8s-version-374000" primary control-plane node in "old-k8s-version-374000" cluster
	I0819 11:56:51.699439   20837 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0819 11:56:51.699458   20837 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19423-17178/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0819 11:56:51.699482   20837 cache.go:56] Caching tarball of preloaded images
	I0819 11:56:51.699554   20837 preload.go:172] Found /Users/jenkins/minikube-integration/19423-17178/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0819 11:56:51.699559   20837 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0819 11:56:51.699638   20837 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19423-17178/.minikube/profiles/old-k8s-version-374000/config.json ...
	I0819 11:56:51.700099   20837 start.go:360] acquireMachinesLock for old-k8s-version-374000: {Name:mk51682daf9d132f21e1aba1b32ed96e7f05425f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 11:56:51.700125   20837 start.go:364] duration metric: took 20.458µs to acquireMachinesLock for "old-k8s-version-374000"
	I0819 11:56:51.700134   20837 start.go:96] Skipping create...Using existing machine configuration
	I0819 11:56:51.700140   20837 fix.go:54] fixHost starting: 
	I0819 11:56:51.700258   20837 fix.go:112] recreateIfNeeded on old-k8s-version-374000: state=Stopped err=<nil>
	W0819 11:56:51.700266   20837 fix.go:138] unexpected machine state, will restart: <nil>
	I0819 11:56:51.704349   20837 out.go:177] * Restarting existing qemu2 VM for "old-k8s-version-374000" ...
	I0819 11:56:51.712374   20837 qemu.go:418] Using hvf for hardware acceleration
	I0819 11:56:51.712417   20837 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/old-k8s-version-374000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19423-17178/.minikube/machines/old-k8s-version-374000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/old-k8s-version-374000/qemu.pid -device virtio-net-pci,netdev=net0,mac=2a:ff:11:a7:d9:6e -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/old-k8s-version-374000/disk.qcow2
	I0819 11:56:51.714385   20837 main.go:141] libmachine: STDOUT: 
	I0819 11:56:51.714406   20837 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0819 11:56:51.714436   20837 fix.go:56] duration metric: took 14.296041ms for fixHost
	I0819 11:56:51.714441   20837 start.go:83] releasing machines lock for "old-k8s-version-374000", held for 14.312458ms
	W0819 11:56:51.714446   20837 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0819 11:56:51.714473   20837 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0819 11:56:51.714477   20837 start.go:729] Will try again in 5 seconds ...
	I0819 11:56:56.715424   20837 start.go:360] acquireMachinesLock for old-k8s-version-374000: {Name:mk51682daf9d132f21e1aba1b32ed96e7f05425f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 11:56:56.716007   20837 start.go:364] duration metric: took 482.666µs to acquireMachinesLock for "old-k8s-version-374000"
	I0819 11:56:56.716138   20837 start.go:96] Skipping create...Using existing machine configuration
	I0819 11:56:56.716159   20837 fix.go:54] fixHost starting: 
	I0819 11:56:56.716918   20837 fix.go:112] recreateIfNeeded on old-k8s-version-374000: state=Stopped err=<nil>
	W0819 11:56:56.716946   20837 fix.go:138] unexpected machine state, will restart: <nil>
	I0819 11:56:56.734736   20837 out.go:177] * Restarting existing qemu2 VM for "old-k8s-version-374000" ...
	I0819 11:56:56.739514   20837 qemu.go:418] Using hvf for hardware acceleration
	I0819 11:56:56.739807   20837 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/old-k8s-version-374000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19423-17178/.minikube/machines/old-k8s-version-374000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/old-k8s-version-374000/qemu.pid -device virtio-net-pci,netdev=net0,mac=2a:ff:11:a7:d9:6e -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/old-k8s-version-374000/disk.qcow2
	I0819 11:56:56.749431   20837 main.go:141] libmachine: STDOUT: 
	I0819 11:56:56.749496   20837 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0819 11:56:56.749576   20837 fix.go:56] duration metric: took 33.419375ms for fixHost
	I0819 11:56:56.749596   20837 start.go:83] releasing machines lock for "old-k8s-version-374000", held for 33.567125ms
	W0819 11:56:56.749811   20837 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-374000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-374000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0819 11:56:56.757489   20837 out.go:201] 
	W0819 11:56:56.761564   20837 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0819 11:56:56.761589   20837 out.go:270] * 
	* 
	W0819 11:56:56.764084   20837 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0819 11:56:56.776619   20837 out.go:201] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p old-k8s-version-374000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-374000 -n old-k8s-version-374000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-374000 -n old-k8s-version-374000: exit status 7 (66.569334ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-374000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (5.27s)
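
SecondStart fails the same way as FirstStart but through the fixHost path: one restart attempt against the existing (stopped) VM, a fixed 5-second back-off ("Will try again in 5 seconds"), one more attempt, then exit with GUEST_PROVISION. The control flow visible in the logs reduces to something like this Go sketch (startHost is a hypothetical stand-in for the driver start call):

// retry_start.go - sketch of the two-attempt retry pattern in the logs.
package main

import (
	"errors"
	"fmt"
	"time"
)

// startHost stands in for the real driver start; in the logs above it
// always fails the same way.
func startHost() error {
	return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
}

func main() {
	if err := startHost(); err != nil {
		fmt.Println("! StartHost failed, but will try again:", err)
		time.Sleep(5 * time.Second) // fixed back-off, per "Will try again in 5 seconds"
		if err := startHost(); err != nil {
			fmt.Println("X Exiting due to GUEST_PROVISION:", err)
			return
		}
	}
	fmt.Println("host started")
}

Because the underlying socket_vmnet daemon is unreachable on both attempts, the retry cannot help; the suggested "minikube delete -p old-k8s-version-374000" would not fix it either.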

TestStartStop/group/no-preload/serial/FirstStart (10.11s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p no-preload-113000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p no-preload-113000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.0: exit status 80 (10.039492167s)

-- stdout --
	* [no-preload-113000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19423
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19423-17178/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19423-17178/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "no-preload-113000" primary control-plane node in "no-preload-113000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "no-preload-113000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0819 11:56:53.670921   20848 out.go:345] Setting OutFile to fd 1 ...
	I0819 11:56:53.671063   20848 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 11:56:53.671066   20848 out.go:358] Setting ErrFile to fd 2...
	I0819 11:56:53.671068   20848 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 11:56:53.671220   20848 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19423-17178/.minikube/bin
	I0819 11:56:53.672286   20848 out.go:352] Setting JSON to false
	I0819 11:56:53.688345   20848 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":8780,"bootTime":1724085033,"procs":507,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0819 11:56:53.688415   20848 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0819 11:56:53.694026   20848 out.go:177] * [no-preload-113000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0819 11:56:53.701046   20848 out.go:177]   - MINIKUBE_LOCATION=19423
	I0819 11:56:53.701111   20848 notify.go:220] Checking for updates...
	I0819 11:56:53.707024   20848 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19423-17178/kubeconfig
	I0819 11:56:53.709977   20848 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0819 11:56:53.711349   20848 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0819 11:56:53.713964   20848 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19423-17178/.minikube
	I0819 11:56:53.717007   20848 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0819 11:56:53.720287   20848 config.go:182] Loaded profile config "multinode-587000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0819 11:56:53.720368   20848 config.go:182] Loaded profile config "old-k8s-version-374000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I0819 11:56:53.720418   20848 driver.go:394] Setting default libvirt URI to qemu:///system
	I0819 11:56:53.724969   20848 out.go:177] * Using the qemu2 driver based on user configuration
	I0819 11:56:53.732050   20848 start.go:297] selected driver: qemu2
	I0819 11:56:53.732055   20848 start.go:901] validating driver "qemu2" against <nil>
	I0819 11:56:53.732061   20848 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0819 11:56:53.734204   20848 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0819 11:56:53.736935   20848 out.go:177] * Automatically selected the socket_vmnet network
	I0819 11:56:53.741191   20848 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0819 11:56:53.741230   20848 cni.go:84] Creating CNI manager for ""
	I0819 11:56:53.741243   20848 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0819 11:56:53.741251   20848 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0819 11:56:53.741272   20848 start.go:340] cluster config:
	{Name:no-preload-113000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:no-preload-113000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Container
Runtime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket
_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 11:56:53.744987   20848 iso.go:125] acquiring lock: {Name:mk19d2f9dcacbbe3e95275f970a96b2d84f09461 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 11:56:53.752972   20848 out.go:177] * Starting "no-preload-113000" primary control-plane node in "no-preload-113000" cluster
	I0819 11:56:53.756983   20848 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0819 11:56:53.757058   20848 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19423-17178/.minikube/profiles/no-preload-113000/config.json ...
	I0819 11:56:53.757073   20848 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19423-17178/.minikube/profiles/no-preload-113000/config.json: {Name:mk8ad4e946be44c355a3f21bb4234d70ffd3ec1d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 11:56:53.757067   20848 cache.go:107] acquiring lock: {Name:mk431ccdb49bd0ebf21fd0eeca08dfa0c11b0f0f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 11:56:53.757081   20848 cache.go:107] acquiring lock: {Name:mka85e985d99431fa11726d885c2a35a838a53ce Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 11:56:53.757093   20848 cache.go:107] acquiring lock: {Name:mk27f022501ea8b412d0f3e7af66381fb8ffa923 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 11:56:53.757130   20848 cache.go:115] /Users/jenkins/minikube-integration/19423-17178/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0819 11:56:53.757137   20848 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/19423-17178/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 74.166µs
	I0819 11:56:53.757144   20848 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/19423-17178/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0819 11:56:53.757155   20848 cache.go:107] acquiring lock: {Name:mk8a2f0962b822c0a92d22fc44606fc5b4f745d9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 11:56:53.757235   20848 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.31.0
	I0819 11:56:53.757256   20848 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.31.0
	I0819 11:56:53.757285   20848 cache.go:107] acquiring lock: {Name:mk75ad27d9b9cd6e8da3fc6a97d616a88f6afe35 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 11:56:53.757284   20848 cache.go:107] acquiring lock: {Name:mk125717f02375d31ac913dcc3e0a53bb5e144ed Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 11:56:53.757321   20848 cache.go:107] acquiring lock: {Name:mk44ffc6768e11f99f88fda11cdaed25c90fd90a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 11:56:53.757379   20848 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.31.0
	I0819 11:56:53.757387   20848 cache.go:107] acquiring lock: {Name:mkeac2a9475adad12e98119ffd35f4ec1144a476 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 11:56:53.757421   20848 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0819 11:56:53.757514   20848 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.0
	I0819 11:56:53.757533   20848 image.go:135] retrieving image: registry.k8s.io/pause:3.10
	I0819 11:56:53.757517   20848 start.go:360] acquireMachinesLock for no-preload-113000: {Name:mk51682daf9d132f21e1aba1b32ed96e7f05425f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 11:56:53.757581   20848 start.go:364] duration metric: took 36.833µs to acquireMachinesLock for "no-preload-113000"
	I0819 11:56:53.757594   20848 start.go:93] Provisioning new machine with config: &{Name:no-preload-113000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:no-preload-113000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0819 11:56:53.757626   20848 start.go:125] createHost starting for "" (driver="qemu2")
	I0819 11:56:53.757633   20848 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.15-0
	I0819 11:56:53.764929   20848 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0819 11:56:53.769326   20848 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.0
	I0819 11:56:53.769335   20848 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0819 11:56:53.769419   20848 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.0
	I0819 11:56:53.769455   20848 image.go:178] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I0819 11:56:53.769498   20848 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.0
	I0819 11:56:53.771238   20848 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.15-0
	I0819 11:56:53.771235   20848 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.0
	I0819 11:56:53.783798   20848 start.go:159] libmachine.API.Create for "no-preload-113000" (driver="qemu2")
	I0819 11:56:53.783829   20848 client.go:168] LocalClient.Create starting
	I0819 11:56:53.783950   20848 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19423-17178/.minikube/certs/ca.pem
	I0819 11:56:53.783992   20848 main.go:141] libmachine: Decoding PEM data...
	I0819 11:56:53.784002   20848 main.go:141] libmachine: Parsing certificate...
	I0819 11:56:53.784046   20848 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19423-17178/.minikube/certs/cert.pem
	I0819 11:56:53.784069   20848 main.go:141] libmachine: Decoding PEM data...
	I0819 11:56:53.784080   20848 main.go:141] libmachine: Parsing certificate...
	I0819 11:56:53.784472   20848 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19423-17178/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19423-17178/.minikube/cache/iso/arm64/minikube-v1.33.1-1723740674-19452-arm64.iso...
	I0819 11:56:53.953204   20848 main.go:141] libmachine: Creating SSH key...
	I0819 11:56:54.121882   20848 main.go:141] libmachine: Creating Disk image...
	I0819 11:56:54.121899   20848 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0819 11:56:54.122151   20848 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/no-preload-113000/disk.qcow2.raw /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/no-preload-113000/disk.qcow2
	I0819 11:56:54.131553   20848 main.go:141] libmachine: STDOUT: 
	I0819 11:56:54.131567   20848 main.go:141] libmachine: STDERR: 
	I0819 11:56:54.131610   20848 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/no-preload-113000/disk.qcow2 +20000M
	I0819 11:56:54.139832   20848 main.go:141] libmachine: STDOUT: Image resized.
	
	I0819 11:56:54.139849   20848 main.go:141] libmachine: STDERR: 
	I0819 11:56:54.139861   20848 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/no-preload-113000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/no-preload-113000/disk.qcow2
	I0819 11:56:54.139866   20848 main.go:141] libmachine: Starting QEMU VM...
	I0819 11:56:54.139876   20848 qemu.go:418] Using hvf for hardware acceleration
	I0819 11:56:54.139901   20848 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/no-preload-113000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19423-17178/.minikube/machines/no-preload-113000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/no-preload-113000/qemu.pid -device virtio-net-pci,netdev=net0,mac=7a:ef:30:74:99:22 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/no-preload-113000/disk.qcow2
	I0819 11:56:54.141733   20848 main.go:141] libmachine: STDOUT: 
	I0819 11:56:54.141749   20848 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0819 11:56:54.141765   20848 client.go:171] duration metric: took 357.93925ms to LocalClient.Create
	I0819 11:56:54.159758   20848 cache.go:162] opening:  /Users/jenkins/minikube-integration/19423-17178/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1
	I0819 11:56:54.172858   20848 cache.go:162] opening:  /Users/jenkins/minikube-integration/19423-17178/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.0
	I0819 11:56:54.175078   20848 cache.go:162] opening:  /Users/jenkins/minikube-integration/19423-17178/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.0
	I0819 11:56:54.189045   20848 cache.go:162] opening:  /Users/jenkins/minikube-integration/19423-17178/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10
	I0819 11:56:54.221216   20848 cache.go:162] opening:  /Users/jenkins/minikube-integration/19423-17178/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0
	I0819 11:56:54.270897   20848 cache.go:162] opening:  /Users/jenkins/minikube-integration/19423-17178/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.0
	I0819 11:56:54.293676   20848 cache.go:162] opening:  /Users/jenkins/minikube-integration/19423-17178/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.0
	I0819 11:56:54.416728   20848 cache.go:157] /Users/jenkins/minikube-integration/19423-17178/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10 exists
	I0819 11:56:54.416789   20848 cache.go:96] cache image "registry.k8s.io/pause:3.10" -> "/Users/jenkins/minikube-integration/19423-17178/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10" took 659.472416ms
	I0819 11:56:54.416816   20848 cache.go:80] save to tar file registry.k8s.io/pause:3.10 -> /Users/jenkins/minikube-integration/19423-17178/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10 succeeded
	I0819 11:56:56.141925   20848 start.go:128] duration metric: took 2.384314s to createHost
	I0819 11:56:56.141988   20848 start.go:83] releasing machines lock for "no-preload-113000", held for 2.38444775s
	W0819 11:56:56.142070   20848 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0819 11:56:56.160608   20848 out.go:177] * Deleting "no-preload-113000" in qemu2 ...
	W0819 11:56:56.195347   20848 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0819 11:56:56.195388   20848 start.go:729] Will try again in 5 seconds ...
	I0819 11:56:58.125503   20848 cache.go:157] /Users/jenkins/minikube-integration/19423-17178/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.0 exists
	I0819 11:56:58.125560   20848 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.31.0" -> "/Users/jenkins/minikube-integration/19423-17178/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.0" took 4.368577958s
	I0819 11:56:58.125594   20848 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.31.0 -> /Users/jenkins/minikube-integration/19423-17178/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.0 succeeded
	I0819 11:56:58.354136   20848 cache.go:157] /Users/jenkins/minikube-integration/19423-17178/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1 exists
	I0819 11:56:58.354233   20848 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.11.1" -> "/Users/jenkins/minikube-integration/19423-17178/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1" took 4.596995667s
	I0819 11:56:58.354264   20848 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.11.1 -> /Users/jenkins/minikube-integration/19423-17178/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1 succeeded
	I0819 11:56:58.382126   20848 cache.go:157] /Users/jenkins/minikube-integration/19423-17178/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.0 exists
	I0819 11:56:58.382195   20848 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.31.0" -> "/Users/jenkins/minikube-integration/19423-17178/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.0" took 4.625037125s
	I0819 11:56:58.382219   20848 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.31.0 -> /Users/jenkins/minikube-integration/19423-17178/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.0 succeeded
	I0819 11:56:58.696544   20848 cache.go:157] /Users/jenkins/minikube-integration/19423-17178/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.0 exists
	I0819 11:56:58.696587   20848 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.31.0" -> "/Users/jenkins/minikube-integration/19423-17178/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.0" took 4.939617416s
	I0819 11:56:58.696631   20848 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.31.0 -> /Users/jenkins/minikube-integration/19423-17178/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.0 succeeded
	I0819 11:56:58.913047   20848 cache.go:157] /Users/jenkins/minikube-integration/19423-17178/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.0 exists
	I0819 11:56:58.913097   20848 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.31.0" -> "/Users/jenkins/minikube-integration/19423-17178/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.0" took 5.156044833s
	I0819 11:56:58.913123   20848 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.31.0 -> /Users/jenkins/minikube-integration/19423-17178/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.0 succeeded
	I0819 11:57:01.195857   20848 start.go:360] acquireMachinesLock for no-preload-113000: {Name:mk51682daf9d132f21e1aba1b32ed96e7f05425f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 11:57:01.196312   20848 start.go:364] duration metric: took 322.917µs to acquireMachinesLock for "no-preload-113000"
	I0819 11:57:01.196458   20848 start.go:93] Provisioning new machine with config: &{Name:no-preload-113000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:no-preload-113000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0819 11:57:01.196821   20848 start.go:125] createHost starting for "" (driver="qemu2")
	I0819 11:57:01.203383   20848 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0819 11:57:01.252711   20848 start.go:159] libmachine.API.Create for "no-preload-113000" (driver="qemu2")
	I0819 11:57:01.252763   20848 client.go:168] LocalClient.Create starting
	I0819 11:57:01.252881   20848 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19423-17178/.minikube/certs/ca.pem
	I0819 11:57:01.252953   20848 main.go:141] libmachine: Decoding PEM data...
	I0819 11:57:01.252971   20848 main.go:141] libmachine: Parsing certificate...
	I0819 11:57:01.253042   20848 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19423-17178/.minikube/certs/cert.pem
	I0819 11:57:01.253086   20848 main.go:141] libmachine: Decoding PEM data...
	I0819 11:57:01.253098   20848 main.go:141] libmachine: Parsing certificate...
	I0819 11:57:01.253611   20848 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19423-17178/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19423-17178/.minikube/cache/iso/arm64/minikube-v1.33.1-1723740674-19452-arm64.iso...
	I0819 11:57:01.442249   20848 main.go:141] libmachine: Creating SSH key...
	I0819 11:57:01.614093   20848 main.go:141] libmachine: Creating Disk image...
	I0819 11:57:01.614103   20848 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0819 11:57:01.614333   20848 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/no-preload-113000/disk.qcow2.raw /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/no-preload-113000/disk.qcow2
	I0819 11:57:01.624172   20848 main.go:141] libmachine: STDOUT: 
	I0819 11:57:01.624194   20848 main.go:141] libmachine: STDERR: 
	I0819 11:57:01.624239   20848 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/no-preload-113000/disk.qcow2 +20000M
	I0819 11:57:01.632373   20848 main.go:141] libmachine: STDOUT: Image resized.
	
	I0819 11:57:01.632389   20848 main.go:141] libmachine: STDERR: 
	I0819 11:57:01.632401   20848 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/no-preload-113000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/no-preload-113000/disk.qcow2
	I0819 11:57:01.632409   20848 main.go:141] libmachine: Starting QEMU VM...
	I0819 11:57:01.632418   20848 qemu.go:418] Using hvf for hardware acceleration
	I0819 11:57:01.632450   20848 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/no-preload-113000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19423-17178/.minikube/machines/no-preload-113000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/no-preload-113000/qemu.pid -device virtio-net-pci,netdev=net0,mac=16:24:58:1c:82:ca -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/no-preload-113000/disk.qcow2
	I0819 11:57:01.634140   20848 main.go:141] libmachine: STDOUT: 
	I0819 11:57:01.634157   20848 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0819 11:57:01.634175   20848 client.go:171] duration metric: took 381.408125ms to LocalClient.Create
	I0819 11:57:01.765999   20848 cache.go:157] /Users/jenkins/minikube-integration/19423-17178/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0 exists
	I0819 11:57:01.766014   20848 cache.go:96] cache image "registry.k8s.io/etcd:3.5.15-0" -> "/Users/jenkins/minikube-integration/19423-17178/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0" took 8.008884958s
	I0819 11:57:01.766024   20848 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.15-0 -> /Users/jenkins/minikube-integration/19423-17178/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0 succeeded
	I0819 11:57:01.766042   20848 cache.go:87] Successfully saved all images to host disk.
	I0819 11:57:03.636302   20848 start.go:128] duration metric: took 2.439481459s to createHost
	I0819 11:57:03.636360   20848 start.go:83] releasing machines lock for "no-preload-113000", held for 2.440074167s
	W0819 11:57:03.636695   20848 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p no-preload-113000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0819 11:57:03.649189   20848 out.go:201] 
	W0819 11:57:03.654353   20848 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0819 11:57:03.654527   20848 out.go:270] * 
	W0819 11:57:03.657360   20848 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0819 11:57:03.666061   20848 out.go:201] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p no-preload-113000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-113000 -n no-preload-113000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-113000 -n no-preload-113000: exit status 7 (64.929417ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-113000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/FirstStart (10.11s)
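
Every qemu2 VM creation in this test dies at the same step: socket_vmnet_client cannot reach the unix socket at /var/run/socket_vmnet ("Connection refused"), meaning the socket_vmnet daemon that backs the socket_vmnet network is not running on the build host. A minimal shell sketch for checking and restarting it; the binary and socket paths come from the qemu command line in the log, while the --vmnet-gateway value and the Homebrew service name are assumptions about a typical install:

	# Is anything serving the unix socket the VMs need?
	ls -l /var/run/socket_vmnet
	pgrep -fl socket_vmnet

	# Start the daemon by hand (the gateway address is an assumed example value)
	sudo /opt/socket_vmnet/bin/socket_vmnet --vmnet-gateway=192.168.105.1 /var/run/socket_vmnet &

	# Or, if socket_vmnet was installed as a Homebrew service (assumed install method)
	sudo brew services start socket_vmnet

Once the socket accepts connections, the qemu-system-aarch64 invocation above should no longer fail with exit status 1.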

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (0.03s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "old-k8s-version-374000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-374000 -n old-k8s-version-374000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-374000 -n old-k8s-version-374000: exit status 7 (31.926041ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-374000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (0.03s)
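
This failure cascades from the aborted start of the profile: the VM was never provisioned, so minikube never wrote a kubeconfig context for it, and every kubectl call against the profile fails with context "old-k8s-version-374000" does not exist. A quick manual check, as a sketch using standard kubectl plus the binary path from this log (not part of the test suite):

	# contexts actually present in the kubeconfig
	kubectl config get-contexts

	# minikube's own view of its profiles and their host state
	out/minikube-darwin-arm64 profile list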

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (0.06s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "old-k8s-version-374000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-374000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context old-k8s-version-374000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (26.971458ms)

** stderr ** 
	error: context "old-k8s-version-374000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context old-k8s-version-374000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-374000 -n old-k8s-version-374000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-374000 -n old-k8s-version-374000: exit status 7 (29.339583ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-374000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (0.06s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.07s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p old-k8s-version-374000 image list --format=json
start_stop_delete_test.go:304: v1.20.0 images missing (-want +got):
  []string{
- 	"k8s.gcr.io/coredns:1.7.0",
- 	"k8s.gcr.io/etcd:3.4.13-0",
- 	"k8s.gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"k8s.gcr.io/kube-apiserver:v1.20.0",
- 	"k8s.gcr.io/kube-controller-manager:v1.20.0",
- 	"k8s.gcr.io/kube-proxy:v1.20.0",
- 	"k8s.gcr.io/kube-scheduler:v1.20.0",
- 	"k8s.gcr.io/pause:3.2",
  }
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-374000 -n old-k8s-version-374000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-374000 -n old-k8s-version-374000: exit status 7 (28.569334ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-374000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.07s)
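
The diff above uses go-cmp's (-want +got) notation: each line prefixed with "-" is an image the test expected but did not find, and the "+got" side is empty because the stopped host has no container runtime to list images from. The expected v1.20.0 images live under k8s.gcr.io, the registry name Kubernetes used before registry.k8s.io. Re-running the probe by hand (command taken verbatim from the log) reproduces the empty result:

	# with the host stopped this lists nothing, so all eight -want images are reported missing
	out/minikube-darwin-arm64 -p old-k8s-version-374000 image list --format=json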

TestStartStop/group/old-k8s-version/serial/Pause (0.1s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p old-k8s-version-374000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p old-k8s-version-374000 --alsologtostderr -v=1: exit status 83 (45.0115ms)

-- stdout --
	* The control-plane node old-k8s-version-374000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p old-k8s-version-374000"

-- /stdout --
** stderr ** 
	I0819 11:56:57.040907   20899 out.go:345] Setting OutFile to fd 1 ...
	I0819 11:56:57.041256   20899 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 11:56:57.041260   20899 out.go:358] Setting ErrFile to fd 2...
	I0819 11:56:57.041264   20899 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 11:56:57.041424   20899 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19423-17178/.minikube/bin
	I0819 11:56:57.041634   20899 out.go:352] Setting JSON to false
	I0819 11:56:57.041642   20899 mustload.go:65] Loading cluster: old-k8s-version-374000
	I0819 11:56:57.041824   20899 config.go:182] Loaded profile config "old-k8s-version-374000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I0819 11:56:57.046409   20899 out.go:177] * The control-plane node old-k8s-version-374000 host is not running: state=Stopped
	I0819 11:56:57.054383   20899 out.go:177]   To start a cluster, run: "minikube start -p old-k8s-version-374000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p old-k8s-version-374000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-374000 -n old-k8s-version-374000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-374000 -n old-k8s-version-374000: exit status 7 (29.4325ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-374000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-374000 -n old-k8s-version-374000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-374000 -n old-k8s-version-374000: exit status 7 (28.983167ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-374000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/Pause (0.10s)
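
Exit status 83 here is not a crash: the stdout above shows minikube detecting state=Stopped and printing its "To start a cluster, run ..." advice instead of pausing. A one-line sketch to reproduce the status code (command from the log; the echo is only for display):

	out/minikube-darwin-arm64 pause -p old-k8s-version-374000; echo "exit=$?"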

TestStartStop/group/embed-certs/serial/FirstStart (10.1s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p embed-certs-475000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.31.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p embed-certs-475000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.31.0: exit status 80 (10.034897833s)

-- stdout --
	* [embed-certs-475000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19423
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19423-17178/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19423-17178/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "embed-certs-475000" primary control-plane node in "embed-certs-475000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "embed-certs-475000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0819 11:56:57.360396   20916 out.go:345] Setting OutFile to fd 1 ...
	I0819 11:56:57.360518   20916 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 11:56:57.360521   20916 out.go:358] Setting ErrFile to fd 2...
	I0819 11:56:57.360523   20916 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 11:56:57.360655   20916 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19423-17178/.minikube/bin
	I0819 11:56:57.361742   20916 out.go:352] Setting JSON to false
	I0819 11:56:57.378226   20916 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":8784,"bootTime":1724085033,"procs":508,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0819 11:56:57.378300   20916 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0819 11:56:57.383392   20916 out.go:177] * [embed-certs-475000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0819 11:56:57.392398   20916 notify.go:220] Checking for updates...
	I0819 11:56:57.396355   20916 out.go:177]   - MINIKUBE_LOCATION=19423
	I0819 11:56:57.403397   20916 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19423-17178/kubeconfig
	I0819 11:56:57.410308   20916 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0819 11:56:57.418338   20916 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0819 11:56:57.426321   20916 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19423-17178/.minikube
	I0819 11:56:57.438385   20916 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0819 11:56:57.443844   20916 config.go:182] Loaded profile config "multinode-587000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0819 11:56:57.443908   20916 config.go:182] Loaded profile config "no-preload-113000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0819 11:56:57.443959   20916 driver.go:394] Setting default libvirt URI to qemu:///system
	I0819 11:56:57.448307   20916 out.go:177] * Using the qemu2 driver based on user configuration
	I0819 11:56:57.455395   20916 start.go:297] selected driver: qemu2
	I0819 11:56:57.455403   20916 start.go:901] validating driver "qemu2" against <nil>
	I0819 11:56:57.455409   20916 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0819 11:56:57.457983   20916 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0819 11:56:57.462274   20916 out.go:177] * Automatically selected the socket_vmnet network
	I0819 11:56:57.466420   20916 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0819 11:56:57.466456   20916 cni.go:84] Creating CNI manager for ""
	I0819 11:56:57.466463   20916 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0819 11:56:57.466470   20916 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0819 11:56:57.466505   20916 start.go:340] cluster config:
	{Name:embed-certs-475000 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:embed-certs-475000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 11:56:57.470595   20916 iso.go:125] acquiring lock: {Name:mk19d2f9dcacbbe3e95275f970a96b2d84f09461 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 11:56:57.478332   20916 out.go:177] * Starting "embed-certs-475000" primary control-plane node in "embed-certs-475000" cluster
	I0819 11:56:57.482344   20916 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0819 11:56:57.482357   20916 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19423-17178/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0819 11:56:57.482369   20916 cache.go:56] Caching tarball of preloaded images
	I0819 11:56:57.482432   20916 preload.go:172] Found /Users/jenkins/minikube-integration/19423-17178/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0819 11:56:57.482439   20916 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0819 11:56:57.482501   20916 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19423-17178/.minikube/profiles/embed-certs-475000/config.json ...
	I0819 11:56:57.482512   20916 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19423-17178/.minikube/profiles/embed-certs-475000/config.json: {Name:mkacc33447392d6183faa0024eb9353d5ed23563 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 11:56:57.482715   20916 start.go:360] acquireMachinesLock for embed-certs-475000: {Name:mk51682daf9d132f21e1aba1b32ed96e7f05425f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 11:56:57.482750   20916 start.go:364] duration metric: took 27.958µs to acquireMachinesLock for "embed-certs-475000"
	I0819 11:56:57.482761   20916 start.go:93] Provisioning new machine with config: &{Name:embed-certs-475000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:embed-certs-475000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0819 11:56:57.482788   20916 start.go:125] createHost starting for "" (driver="qemu2")
	I0819 11:56:57.492339   20916 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0819 11:56:57.510167   20916 start.go:159] libmachine.API.Create for "embed-certs-475000" (driver="qemu2")
	I0819 11:56:57.510199   20916 client.go:168] LocalClient.Create starting
	I0819 11:56:57.510266   20916 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19423-17178/.minikube/certs/ca.pem
	I0819 11:56:57.510297   20916 main.go:141] libmachine: Decoding PEM data...
	I0819 11:56:57.510313   20916 main.go:141] libmachine: Parsing certificate...
	I0819 11:56:57.510350   20916 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19423-17178/.minikube/certs/cert.pem
	I0819 11:56:57.510378   20916 main.go:141] libmachine: Decoding PEM data...
	I0819 11:56:57.510387   20916 main.go:141] libmachine: Parsing certificate...
	I0819 11:56:57.510747   20916 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19423-17178/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19423-17178/.minikube/cache/iso/arm64/minikube-v1.33.1-1723740674-19452-arm64.iso...
	I0819 11:56:57.663788   20916 main.go:141] libmachine: Creating SSH key...
	I0819 11:56:57.787272   20916 main.go:141] libmachine: Creating Disk image...
	I0819 11:56:57.787279   20916 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0819 11:56:57.787507   20916 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/embed-certs-475000/disk.qcow2.raw /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/embed-certs-475000/disk.qcow2
	I0819 11:56:57.797134   20916 main.go:141] libmachine: STDOUT: 
	I0819 11:56:57.797153   20916 main.go:141] libmachine: STDERR: 
	I0819 11:56:57.797206   20916 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/embed-certs-475000/disk.qcow2 +20000M
	I0819 11:56:57.805456   20916 main.go:141] libmachine: STDOUT: Image resized.
	
	I0819 11:56:57.805473   20916 main.go:141] libmachine: STDERR: 
	I0819 11:56:57.805484   20916 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/embed-certs-475000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/embed-certs-475000/disk.qcow2
	I0819 11:56:57.805490   20916 main.go:141] libmachine: Starting QEMU VM...
	I0819 11:56:57.805502   20916 qemu.go:418] Using hvf for hardware acceleration
	I0819 11:56:57.805530   20916 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/embed-certs-475000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19423-17178/.minikube/machines/embed-certs-475000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/embed-certs-475000/qemu.pid -device virtio-net-pci,netdev=net0,mac=16:20:7b:f7:e2:51 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/embed-certs-475000/disk.qcow2
	I0819 11:56:57.807171   20916 main.go:141] libmachine: STDOUT: 
	I0819 11:56:57.807188   20916 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0819 11:56:57.807208   20916 client.go:171] duration metric: took 297.011125ms to LocalClient.Create
	I0819 11:56:59.809344   20916 start.go:128] duration metric: took 2.326579541s to createHost
	I0819 11:56:59.809409   20916 start.go:83] releasing machines lock for "embed-certs-475000", held for 2.3266985s
	W0819 11:56:59.809510   20916 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0819 11:56:59.816595   20916 out.go:177] * Deleting "embed-certs-475000" in qemu2 ...
	W0819 11:56:59.849519   20916 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0819 11:56:59.849551   20916 start.go:729] Will try again in 5 seconds ...
	I0819 11:57:04.850202   20916 start.go:360] acquireMachinesLock for embed-certs-475000: {Name:mk51682daf9d132f21e1aba1b32ed96e7f05425f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 11:57:04.850448   20916 start.go:364] duration metric: took 175.292µs to acquireMachinesLock for "embed-certs-475000"
	I0819 11:57:04.850561   20916 start.go:93] Provisioning new machine with config: &{Name:embed-certs-475000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:embed-certs-475000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0819 11:57:04.850712   20916 start.go:125] createHost starting for "" (driver="qemu2")
	I0819 11:57:04.859001   20916 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0819 11:57:04.905523   20916 start.go:159] libmachine.API.Create for "embed-certs-475000" (driver="qemu2")
	I0819 11:57:04.905593   20916 client.go:168] LocalClient.Create starting
	I0819 11:57:04.905711   20916 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19423-17178/.minikube/certs/ca.pem
	I0819 11:57:04.905758   20916 main.go:141] libmachine: Decoding PEM data...
	I0819 11:57:04.905778   20916 main.go:141] libmachine: Parsing certificate...
	I0819 11:57:04.905849   20916 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19423-17178/.minikube/certs/cert.pem
	I0819 11:57:04.905883   20916 main.go:141] libmachine: Decoding PEM data...
	I0819 11:57:04.905896   20916 main.go:141] libmachine: Parsing certificate...
	I0819 11:57:04.906515   20916 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19423-17178/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19423-17178/.minikube/cache/iso/arm64/minikube-v1.33.1-1723740674-19452-arm64.iso...
	I0819 11:57:05.076136   20916 main.go:141] libmachine: Creating SSH key...
	I0819 11:57:05.293820   20916 main.go:141] libmachine: Creating Disk image...
	I0819 11:57:05.293828   20916 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0819 11:57:05.294101   20916 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/embed-certs-475000/disk.qcow2.raw /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/embed-certs-475000/disk.qcow2
	I0819 11:57:05.304262   20916 main.go:141] libmachine: STDOUT: 
	I0819 11:57:05.304281   20916 main.go:141] libmachine: STDERR: 
	I0819 11:57:05.304346   20916 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/embed-certs-475000/disk.qcow2 +20000M
	I0819 11:57:05.312494   20916 main.go:141] libmachine: STDOUT: Image resized.
	
	I0819 11:57:05.312517   20916 main.go:141] libmachine: STDERR: 
	I0819 11:57:05.312527   20916 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/embed-certs-475000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/embed-certs-475000/disk.qcow2
	I0819 11:57:05.312533   20916 main.go:141] libmachine: Starting QEMU VM...
	I0819 11:57:05.312542   20916 qemu.go:418] Using hvf for hardware acceleration
	I0819 11:57:05.312574   20916 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/embed-certs-475000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19423-17178/.minikube/machines/embed-certs-475000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/embed-certs-475000/qemu.pid -device virtio-net-pci,netdev=net0,mac=f6:e7:dc:7a:9d:15 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/embed-certs-475000/disk.qcow2
	I0819 11:57:05.314201   20916 main.go:141] libmachine: STDOUT: 
	I0819 11:57:05.314218   20916 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0819 11:57:05.314229   20916 client.go:171] duration metric: took 408.639916ms to LocalClient.Create
	I0819 11:57:07.316380   20916 start.go:128] duration metric: took 2.46569775s to createHost
	I0819 11:57:07.316449   20916 start.go:83] releasing machines lock for "embed-certs-475000", held for 2.466018292s
	W0819 11:57:07.316807   20916 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p embed-certs-475000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0819 11:57:07.326878   20916 out.go:201] 
	W0819 11:57:07.339943   20916 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0819 11:57:07.339972   20916 out.go:270] * 
	* 
	W0819 11:57:07.342041   20916 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0819 11:57:07.353837   20916 out.go:201] 

** /stderr **
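For reference, the disk-creation step in the log above reduces to two qemu-img invocations: convert the raw seed image to qcow2, then grow it by the requested size. A minimal Go sketch of that sequence (illustrative paths and helper names, not the driver's actual code):

```go
package main

import (
	"fmt"
	"log"
	"os/exec"
)

// createDisk mirrors the two qemu-img calls seen in the log:
// qemu-img convert -f raw -O qcow2 <raw> <qcow2>, then
// qemu-img resize <qcow2> +<extraMB>M.
func createDisk(raw, qcow2 string, extraMB int) error {
	if out, err := exec.Command("qemu-img", "convert", "-f", "raw", "-O", "qcow2", raw, qcow2).CombinedOutput(); err != nil {
		return fmt.Errorf("convert: %v: %s", err, out)
	}
	if out, err := exec.Command("qemu-img", "resize", qcow2, fmt.Sprintf("+%dM", extraMB)).CombinedOutput(); err != nil {
		return fmt.Errorf("resize: %v: %s", err, out)
	}
	return nil
}

func main() {
	// Hypothetical paths; the log uses the profile's machine directory.
	if err := createDisk("disk.qcow2.raw", "disk.qcow2", 20000); err != nil {
		log.Fatal(err)
	}
}
```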
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p embed-certs-475000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.31.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-475000 -n embed-certs-475000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-475000 -n embed-certs-475000: exit status 7 (67.614291ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-475000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/FirstStart (10.10s)
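Every start in this run dies at the same point: the qemu2 driver launches qemu-system-aarch64 through /opt/socket_vmnet/bin/socket_vmnet_client, which cannot reach the /var/run/socket_vmnet unix socket. A quick diagnostic (a sketch, not part of the test suite) is to dial the socket directly and see whether anything is accepting:

```go
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// SocketVMnetPath from the cluster config dumped in the logs.
	const sock = "/var/run/socket_vmnet"
	conn, err := net.DialTimeout("unix", sock, 2*time.Second)
	if err != nil {
		// "connection refused" here matches the failure in the log: the
		// socket file may exist, but nothing is accepting on it.
		fmt.Println("socket_vmnet not reachable:", err)
		return
	}
	conn.Close()
	fmt.Println("socket_vmnet is accepting connections")
}
```

If this reports connection refused, restarting the socket_vmnet daemon on the host is the likely fix; `minikube delete` alone would not help, since the failure is on the host side.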

TestStartStop/group/no-preload/serial/DeployApp (0.09s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-113000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context no-preload-113000 create -f testdata/busybox.yaml: exit status 1 (29.204042ms)

** stderr ** 
	error: context "no-preload-113000" does not exist

** /stderr **
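The (dbg) Run helper is a thin wrapper that shells out and records any non-zero exit. A simplified sketch of what the kubectl step above amounts to (assuming kubectl is on PATH; the context name comes from the test profile):

```go
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Mirrors: kubectl --context no-preload-113000 create -f testdata/busybox.yaml
	cmd := exec.Command("kubectl", "--context", "no-preload-113000",
		"create", "-f", "testdata/busybox.yaml")
	out, err := cmd.CombinedOutput()
	if err != nil {
		// With no such context in the kubeconfig (the first start never
		// succeeded), kubectl exits 1: context "..." does not exist.
		fmt.Printf("kubectl failed: %v\n%s", err, out)
	}
}
```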
start_stop_delete_test.go:196: kubectl --context no-preload-113000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-113000 -n no-preload-113000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-113000 -n no-preload-113000: exit status 7 (29.050958ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-113000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-113000 -n no-preload-113000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-113000 -n no-preload-113000: exit status 7 (29.580041ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-113000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/DeployApp (0.09s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.11s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p no-preload-113000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-113000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context no-preload-113000 describe deploy/metrics-server -n kube-system: exit status 1 (26.508458ms)

** stderr ** 
	error: context "no-preload-113000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context no-preload-113000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-113000 -n no-preload-113000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-113000 -n no-preload-113000: exit status 7 (29.239ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-113000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.11s)
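The assertion at start_stop_delete_test.go:221 is a substring check: the `kubectl describe` output for the metrics-server deployment must mention the overridden registry/image pair passed to `addons enable`. A hedged sketch of that check, with the expected substring copied from the failure message above:

```go
package main

import (
	"fmt"
	"strings"
)

// containsExpectedImage reproduces the shape of the check: the describe
// output must contain the custom registry prefix plus the image tag.
func containsExpectedImage(describeOutput string) bool {
	return strings.Contains(describeOutput, " fake.domain/registry.k8s.io/echoserver:1.4")
}

func main() {
	// The cluster never came up, so describe produced nothing and the
	// assertion fails exactly as logged.
	fmt.Println(containsExpectedImage("")) // false
}
```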

TestStartStop/group/no-preload/serial/SecondStart (6.56s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p no-preload-113000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.0
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p no-preload-113000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.0: exit status 80 (6.4911755s)

-- stdout --
	* [no-preload-113000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19423
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19423-17178/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19423-17178/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "no-preload-113000" primary control-plane node in "no-preload-113000" cluster
	* Restarting existing qemu2 VM for "no-preload-113000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "no-preload-113000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0819 11:57:05.950998   20960 out.go:345] Setting OutFile to fd 1 ...
	I0819 11:57:05.951122   20960 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 11:57:05.951125   20960 out.go:358] Setting ErrFile to fd 2...
	I0819 11:57:05.951128   20960 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 11:57:05.951251   20960 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19423-17178/.minikube/bin
	I0819 11:57:05.952195   20960 out.go:352] Setting JSON to false
	I0819 11:57:05.968642   20960 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":8792,"bootTime":1724085033,"procs":508,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0819 11:57:05.968721   20960 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0819 11:57:05.974110   20960 out.go:177] * [no-preload-113000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0819 11:57:05.980097   20960 out.go:177]   - MINIKUBE_LOCATION=19423
	I0819 11:57:05.980147   20960 notify.go:220] Checking for updates...
	I0819 11:57:05.987951   20960 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19423-17178/kubeconfig
	I0819 11:57:05.991089   20960 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0819 11:57:05.994116   20960 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0819 11:57:05.997120   20960 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19423-17178/.minikube
	I0819 11:57:06.000062   20960 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0819 11:57:06.003342   20960 config.go:182] Loaded profile config "no-preload-113000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0819 11:57:06.003631   20960 driver.go:394] Setting default libvirt URI to qemu:///system
	I0819 11:57:06.008123   20960 out.go:177] * Using the qemu2 driver based on existing profile
	I0819 11:57:06.015098   20960 start.go:297] selected driver: qemu2
	I0819 11:57:06.015104   20960 start.go:901] validating driver "qemu2" against &{Name:no-preload-113000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:no-preload-113000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 11:57:06.015175   20960 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0819 11:57:06.017441   20960 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0819 11:57:06.017476   20960 cni.go:84] Creating CNI manager for ""
	I0819 11:57:06.017484   20960 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0819 11:57:06.017513   20960 start.go:340] cluster config:
	{Name:no-preload-113000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:no-preload-113000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 11:57:06.021003   20960 iso.go:125] acquiring lock: {Name:mk19d2f9dcacbbe3e95275f970a96b2d84f09461 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 11:57:06.029070   20960 out.go:177] * Starting "no-preload-113000" primary control-plane node in "no-preload-113000" cluster
	I0819 11:57:06.033021   20960 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0819 11:57:06.033091   20960 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19423-17178/.minikube/profiles/no-preload-113000/config.json ...
	I0819 11:57:06.033102   20960 cache.go:107] acquiring lock: {Name:mk431ccdb49bd0ebf21fd0eeca08dfa0c11b0f0f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 11:57:06.033109   20960 cache.go:107] acquiring lock: {Name:mk8a2f0962b822c0a92d22fc44606fc5b4f745d9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 11:57:06.033111   20960 cache.go:107] acquiring lock: {Name:mka85e985d99431fa11726d885c2a35a838a53ce Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 11:57:06.033160   20960 cache.go:115] /Users/jenkins/minikube-integration/19423-17178/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0819 11:57:06.033165   20960 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/19423-17178/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 65.875µs
	I0819 11:57:06.033168   20960 cache.go:115] /Users/jenkins/minikube-integration/19423-17178/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.0 exists
	I0819 11:57:06.033174   20960 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/19423-17178/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0819 11:57:06.033177   20960 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.31.0" -> "/Users/jenkins/minikube-integration/19423-17178/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.0" took 70µs
	I0819 11:57:06.033179   20960 cache.go:115] /Users/jenkins/minikube-integration/19423-17178/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.0 exists
	I0819 11:57:06.033182   20960 cache.go:107] acquiring lock: {Name:mkeac2a9475adad12e98119ffd35f4ec1144a476 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 11:57:06.033186   20960 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.31.0" -> "/Users/jenkins/minikube-integration/19423-17178/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.0" took 84.375µs
	I0819 11:57:06.033191   20960 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.31.0 -> /Users/jenkins/minikube-integration/19423-17178/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.0 succeeded
	I0819 11:57:06.033182   20960 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.31.0 -> /Users/jenkins/minikube-integration/19423-17178/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.0 succeeded
	I0819 11:57:06.033188   20960 cache.go:107] acquiring lock: {Name:mk125717f02375d31ac913dcc3e0a53bb5e144ed Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 11:57:06.033197   20960 cache.go:107] acquiring lock: {Name:mk27f022501ea8b412d0f3e7af66381fb8ffa923 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 11:57:06.033220   20960 cache.go:115] /Users/jenkins/minikube-integration/19423-17178/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10 exists
	I0819 11:57:06.033224   20960 cache.go:96] cache image "registry.k8s.io/pause:3.10" -> "/Users/jenkins/minikube-integration/19423-17178/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10" took 42.875µs
	I0819 11:57:06.033228   20960 cache.go:80] save to tar file registry.k8s.io/pause:3.10 -> /Users/jenkins/minikube-integration/19423-17178/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10 succeeded
	I0819 11:57:06.033231   20960 cache.go:115] /Users/jenkins/minikube-integration/19423-17178/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.0 exists
	I0819 11:57:06.033235   20960 cache.go:115] /Users/jenkins/minikube-integration/19423-17178/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.0 exists
	I0819 11:57:06.033235   20960 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.31.0" -> "/Users/jenkins/minikube-integration/19423-17178/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.0" took 48.625µs
	I0819 11:57:06.033240   20960 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.31.0 -> /Users/jenkins/minikube-integration/19423-17178/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.0 succeeded
	I0819 11:57:06.033239   20960 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.31.0" -> "/Users/jenkins/minikube-integration/19423-17178/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.0" took 43.25µs
	I0819 11:57:06.033244   20960 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.31.0 -> /Users/jenkins/minikube-integration/19423-17178/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.0 succeeded
	I0819 11:57:06.033233   20960 cache.go:107] acquiring lock: {Name:mk75ad27d9b9cd6e8da3fc6a97d616a88f6afe35 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 11:57:06.033277   20960 cache.go:115] /Users/jenkins/minikube-integration/19423-17178/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1 exists
	I0819 11:57:06.033283   20960 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.11.1" -> "/Users/jenkins/minikube-integration/19423-17178/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1" took 50.458µs
	I0819 11:57:06.033286   20960 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.11.1 -> /Users/jenkins/minikube-integration/19423-17178/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1 succeeded
	I0819 11:57:06.033287   20960 cache.go:107] acquiring lock: {Name:mk44ffc6768e11f99f88fda11cdaed25c90fd90a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 11:57:06.033336   20960 cache.go:115] /Users/jenkins/minikube-integration/19423-17178/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0 exists
	I0819 11:57:06.033340   20960 cache.go:96] cache image "registry.k8s.io/etcd:3.5.15-0" -> "/Users/jenkins/minikube-integration/19423-17178/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0" took 75.958µs
	I0819 11:57:06.033346   20960 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.15-0 -> /Users/jenkins/minikube-integration/19423-17178/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0 succeeded
	I0819 11:57:06.033350   20960 cache.go:87] Successfully saved all images to host disk.
	I0819 11:57:06.033485   20960 start.go:360] acquireMachinesLock for no-preload-113000: {Name:mk51682daf9d132f21e1aba1b32ed96e7f05425f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 11:57:07.316622   20960 start.go:364] duration metric: took 1.283121958s to acquireMachinesLock for "no-preload-113000"
	I0819 11:57:07.316781   20960 start.go:96] Skipping create...Using existing machine configuration
	I0819 11:57:07.316817   20960 fix.go:54] fixHost starting: 
	I0819 11:57:07.317627   20960 fix.go:112] recreateIfNeeded on no-preload-113000: state=Stopped err=<nil>
	W0819 11:57:07.317678   20960 fix.go:138] unexpected machine state, will restart: <nil>
	I0819 11:57:07.335954   20960 out.go:177] * Restarting existing qemu2 VM for "no-preload-113000" ...
	I0819 11:57:07.342856   20960 qemu.go:418] Using hvf for hardware acceleration
	I0819 11:57:07.343016   20960 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/no-preload-113000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19423-17178/.minikube/machines/no-preload-113000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/no-preload-113000/qemu.pid -device virtio-net-pci,netdev=net0,mac=16:24:58:1c:82:ca -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/no-preload-113000/disk.qcow2
	I0819 11:57:07.351259   20960 main.go:141] libmachine: STDOUT: 
	I0819 11:57:07.351353   20960 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0819 11:57:07.351502   20960 fix.go:56] duration metric: took 34.680959ms for fixHost
	I0819 11:57:07.351533   20960 start.go:83] releasing machines lock for "no-preload-113000", held for 34.879791ms
	W0819 11:57:07.351576   20960 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0819 11:57:07.351780   20960 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0819 11:57:07.351824   20960 start.go:729] Will try again in 5 seconds ...
	I0819 11:57:12.353962   20960 start.go:360] acquireMachinesLock for no-preload-113000: {Name:mk51682daf9d132f21e1aba1b32ed96e7f05425f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 11:57:12.354346   20960 start.go:364] duration metric: took 297.875µs to acquireMachinesLock for "no-preload-113000"
	I0819 11:57:12.354484   20960 start.go:96] Skipping create...Using existing machine configuration
	I0819 11:57:12.354504   20960 fix.go:54] fixHost starting: 
	I0819 11:57:12.355251   20960 fix.go:112] recreateIfNeeded on no-preload-113000: state=Stopped err=<nil>
	W0819 11:57:12.355278   20960 fix.go:138] unexpected machine state, will restart: <nil>
	I0819 11:57:12.360015   20960 out.go:177] * Restarting existing qemu2 VM for "no-preload-113000" ...
	I0819 11:57:12.367698   20960 qemu.go:418] Using hvf for hardware acceleration
	I0819 11:57:12.368034   20960 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/no-preload-113000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19423-17178/.minikube/machines/no-preload-113000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/no-preload-113000/qemu.pid -device virtio-net-pci,netdev=net0,mac=16:24:58:1c:82:ca -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/no-preload-113000/disk.qcow2
	I0819 11:57:12.376873   20960 main.go:141] libmachine: STDOUT: 
	I0819 11:57:12.376939   20960 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0819 11:57:12.377007   20960 fix.go:56] duration metric: took 22.506458ms for fixHost
	I0819 11:57:12.377024   20960 start.go:83] releasing machines lock for "no-preload-113000", held for 22.659541ms
	W0819 11:57:12.377188   20960 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p no-preload-113000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p no-preload-113000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0819 11:57:12.385738   20960 out.go:201] 
	W0819 11:57:12.389809   20960 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0819 11:57:12.389833   20960 out.go:270] * 
	* 
	W0819 11:57:12.392722   20960 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0819 11:57:12.399735   20960 out.go:201] 

** /stderr **
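Because this profile runs with --preload=false, cache.go in the log above verifies each per-image tarball individually (acquire a lock, stat the tar, skip the save if present) instead of using the preloaded bundle. A minimal sketch of that exists-then-skip pattern (illustrative path):

```go
package main

import (
	"fmt"
	"os"
)

// cachedImageExists mirrors the cache.go decision in the log: if the
// per-image tarball is already on disk, saving it again is skipped.
func cachedImageExists(tarPath string) bool {
	_, err := os.Stat(tarPath)
	return err == nil
}

func main() {
	// Illustrative path; the log uses .minikube/cache/images/arm64/...
	p := ".minikube/cache/images/arm64/registry.k8s.io/pause_3.10"
	if cachedImageExists(p) {
		fmt.Println("save to tar file skipped:", p)
	} else {
		fmt.Println("would pull and save:", p)
	}
}
```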
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p no-preload-113000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-113000 -n no-preload-113000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-113000 -n no-preload-113000: exit status 7 (66.415458ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-113000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/SecondStart (6.56s)
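Note the retry shape visible in the stderr above: on "StartHost failed, but will try again", minikube releases the machines lock, waits five seconds, and makes exactly one more fixHost attempt before exiting with GUEST_PROVISION. A condensed sketch of that control flow (simplified; the real logic lives in start.go):

```go
package main

import (
	"errors"
	"fmt"
	"time"
)

// startHost stands in for the driver start that keeps failing with
// `Failed to connect to "/var/run/socket_vmnet": Connection refused`.
func startHost() error {
	return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
}

func main() {
	if err := startHost(); err != nil {
		fmt.Println("! StartHost failed, but will try again:", err)
		time.Sleep(5 * time.Second) // "Will try again in 5 seconds ..."
		if err := startHost(); err != nil {
			fmt.Println("X Exiting due to GUEST_PROVISION:", err)
		}
	}
}
```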

TestStartStop/group/embed-certs/serial/DeployApp (0.09s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-475000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context embed-certs-475000 create -f testdata/busybox.yaml: exit status 1 (30.321083ms)

** stderr ** 
	error: context "embed-certs-475000" does not exist

** /stderr **
start_stop_delete_test.go:196: kubectl --context embed-certs-475000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-475000 -n embed-certs-475000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-475000 -n embed-certs-475000: exit status 7 (29.658917ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-475000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-475000 -n embed-certs-475000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-475000 -n embed-certs-475000: exit status 7 (29.491375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-475000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/DeployApp (0.09s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.11s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p embed-certs-475000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-475000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context embed-certs-475000 describe deploy/metrics-server -n kube-system: exit status 1 (26.788292ms)

** stderr ** 
	error: context "embed-certs-475000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context embed-certs-475000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-475000 -n embed-certs-475000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-475000 -n embed-certs-475000: exit status 7 (29.894292ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-475000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.11s)

TestStartStop/group/embed-certs/serial/SecondStart (5.27s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p embed-certs-475000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.31.0
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p embed-certs-475000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.31.0: exit status 80 (5.201372292s)

-- stdout --
	* [embed-certs-475000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19423
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19423-17178/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19423-17178/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "embed-certs-475000" primary control-plane node in "embed-certs-475000" cluster
	* Restarting existing qemu2 VM for "embed-certs-475000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "embed-certs-475000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0819 11:57:11.389840   21001 out.go:345] Setting OutFile to fd 1 ...
	I0819 11:57:11.389970   21001 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 11:57:11.389973   21001 out.go:358] Setting ErrFile to fd 2...
	I0819 11:57:11.389975   21001 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 11:57:11.390113   21001 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19423-17178/.minikube/bin
	I0819 11:57:11.391129   21001 out.go:352] Setting JSON to false
	I0819 11:57:11.407218   21001 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":8798,"bootTime":1724085033,"procs":508,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0819 11:57:11.407295   21001 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0819 11:57:11.412418   21001 out.go:177] * [embed-certs-475000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0819 11:57:11.419493   21001 out.go:177]   - MINIKUBE_LOCATION=19423
	I0819 11:57:11.419531   21001 notify.go:220] Checking for updates...
	I0819 11:57:11.426291   21001 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19423-17178/kubeconfig
	I0819 11:57:11.434377   21001 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0819 11:57:11.438422   21001 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0819 11:57:11.441434   21001 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19423-17178/.minikube
	I0819 11:57:11.445439   21001 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0819 11:57:11.449627   21001 config.go:182] Loaded profile config "embed-certs-475000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0819 11:57:11.449954   21001 driver.go:394] Setting default libvirt URI to qemu:///system
	I0819 11:57:11.454480   21001 out.go:177] * Using the qemu2 driver based on existing profile
	I0819 11:57:11.461393   21001 start.go:297] selected driver: qemu2
	I0819 11:57:11.461400   21001 start.go:901] validating driver "qemu2" against &{Name:embed-certs-475000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:embed-certs-475000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 11:57:11.461466   21001 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0819 11:57:11.463803   21001 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0819 11:57:11.463848   21001 cni.go:84] Creating CNI manager for ""
	I0819 11:57:11.463855   21001 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0819 11:57:11.463882   21001 start.go:340] cluster config:
	{Name:embed-certs-475000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:embed-certs-475000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 11:57:11.467428   21001 iso.go:125] acquiring lock: {Name:mk19d2f9dcacbbe3e95275f970a96b2d84f09461 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 11:57:11.475306   21001 out.go:177] * Starting "embed-certs-475000" primary control-plane node in "embed-certs-475000" cluster
	I0819 11:57:11.479480   21001 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0819 11:57:11.479498   21001 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19423-17178/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0819 11:57:11.479508   21001 cache.go:56] Caching tarball of preloaded images
	I0819 11:57:11.479582   21001 preload.go:172] Found /Users/jenkins/minikube-integration/19423-17178/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0819 11:57:11.479588   21001 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0819 11:57:11.479661   21001 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19423-17178/.minikube/profiles/embed-certs-475000/config.json ...
	I0819 11:57:11.480120   21001 start.go:360] acquireMachinesLock for embed-certs-475000: {Name:mk51682daf9d132f21e1aba1b32ed96e7f05425f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 11:57:11.480152   21001 start.go:364] duration metric: took 25.458µs to acquireMachinesLock for "embed-certs-475000"
	I0819 11:57:11.480162   21001 start.go:96] Skipping create...Using existing machine configuration
	I0819 11:57:11.480168   21001 fix.go:54] fixHost starting: 
	I0819 11:57:11.480301   21001 fix.go:112] recreateIfNeeded on embed-certs-475000: state=Stopped err=<nil>
	W0819 11:57:11.480310   21001 fix.go:138] unexpected machine state, will restart: <nil>
	I0819 11:57:11.484281   21001 out.go:177] * Restarting existing qemu2 VM for "embed-certs-475000" ...
	I0819 11:57:11.492403   21001 qemu.go:418] Using hvf for hardware acceleration
	I0819 11:57:11.492459   21001 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/embed-certs-475000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19423-17178/.minikube/machines/embed-certs-475000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/embed-certs-475000/qemu.pid -device virtio-net-pci,netdev=net0,mac=f6:e7:dc:7a:9d:15 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/embed-certs-475000/disk.qcow2
	I0819 11:57:11.494679   21001 main.go:141] libmachine: STDOUT: 
	I0819 11:57:11.494705   21001 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0819 11:57:11.494734   21001 fix.go:56] duration metric: took 14.566292ms for fixHost
	I0819 11:57:11.494738   21001 start.go:83] releasing machines lock for "embed-certs-475000", held for 14.582208ms
	W0819 11:57:11.494746   21001 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0819 11:57:11.494781   21001 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0819 11:57:11.494787   21001 start.go:729] Will try again in 5 seconds ...
	I0819 11:57:16.496881   21001 start.go:360] acquireMachinesLock for embed-certs-475000: {Name:mk51682daf9d132f21e1aba1b32ed96e7f05425f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 11:57:16.497325   21001 start.go:364] duration metric: took 348.209µs to acquireMachinesLock for "embed-certs-475000"
	I0819 11:57:16.497462   21001 start.go:96] Skipping create...Using existing machine configuration
	I0819 11:57:16.497488   21001 fix.go:54] fixHost starting: 
	I0819 11:57:16.498217   21001 fix.go:112] recreateIfNeeded on embed-certs-475000: state=Stopped err=<nil>
	W0819 11:57:16.498245   21001 fix.go:138] unexpected machine state, will restart: <nil>
	I0819 11:57:16.514808   21001 out.go:177] * Restarting existing qemu2 VM for "embed-certs-475000" ...
	I0819 11:57:16.518636   21001 qemu.go:418] Using hvf for hardware acceleration
	I0819 11:57:16.518854   21001 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/embed-certs-475000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19423-17178/.minikube/machines/embed-certs-475000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/embed-certs-475000/qemu.pid -device virtio-net-pci,netdev=net0,mac=f6:e7:dc:7a:9d:15 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/embed-certs-475000/disk.qcow2
	I0819 11:57:16.528224   21001 main.go:141] libmachine: STDOUT: 
	I0819 11:57:16.528285   21001 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0819 11:57:16.528368   21001 fix.go:56] duration metric: took 30.886792ms for fixHost
	I0819 11:57:16.528387   21001 start.go:83] releasing machines lock for "embed-certs-475000", held for 31.040834ms
	W0819 11:57:16.528537   21001 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p embed-certs-475000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p embed-certs-475000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0819 11:57:16.535579   21001 out.go:201] 
	W0819 11:57:16.538629   21001 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0819 11:57:16.538653   21001 out.go:270] * 
	* 
	W0819 11:57:16.540926   21001 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0819 11:57:16.550542   21001 out.go:201] 

** /stderr **
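Unlike the no-preload profile, this start finds the preloaded-images tarball in the local cache and skips the download (preload.go:146/172 above). A sketch of that lookup, with the path taken from the log:

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	// Path as logged; MINIKUBE_HOME would normally be resolved first.
	home := "/Users/jenkins/minikube-integration/19423-17178/.minikube"
	tarball := filepath.Join(home, "cache", "preloaded-tarball",
		"preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4")
	if _, err := os.Stat(tarball); err == nil {
		fmt.Println("Found local preload, skipping download:", tarball)
	} else {
		fmt.Println("no local preload, would download:", tarball)
	}
}
```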
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p embed-certs-475000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.31.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-475000 -n embed-certs-475000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-475000 -n embed-certs-475000: exit status 7 (67.483458ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-475000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/SecondStart (5.27s)

                                                
                                    
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (0.03s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "no-preload-113000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-113000 -n no-preload-113000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-113000 -n no-preload-113000: exit status 7 (32.78475ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-113000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (0.03s)

                                                
                                    
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (0.06s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "no-preload-113000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-113000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context no-preload-113000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (26.404291ms)

                                                
                                                
** stderr ** 
	error: context "no-preload-113000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context no-preload-113000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
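For reference, this assertion can be reproduced by hand once a working context exists; a sketch using standard kubectl, with the context, namespace, and deployment names taken from the test output above:

	kubectl --context no-preload-113000 -n kubernetes-dashboard \
	  get deploy dashboard-metrics-scraper \
	  -o jsonpath='{.spec.template.spec.containers[0].image}'
	# The test expects the printed image to contain registry.k8s.io/echoserver:1.4.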
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-113000 -n no-preload-113000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-113000 -n no-preload-113000: exit status 7 (28.872875ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-113000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (0.06s)

                                                
                                    
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.07s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p no-preload-113000 image list --format=json
start_stop_delete_test.go:304: v1.31.0 images missing (-want +got):
  []string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.1",
- 	"registry.k8s.io/etcd:3.5.15-0",
- 	"registry.k8s.io/kube-apiserver:v1.31.0",
- 	"registry.k8s.io/kube-controller-manager:v1.31.0",
- 	"registry.k8s.io/kube-proxy:v1.31.0",
- 	"registry.k8s.io/kube-scheduler:v1.31.0",
- 	"registry.k8s.io/pause:3.10",
  }
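The block above is a go-cmp style "-want +got" diff: each "-" line is an image the test expected "image list" to return but did not get (here the returned list was empty because the VM never started). A sketch of the same check done manually, using only the command already shown in this test:

	# Check each expected v1.31.0 image against the profile's image list.
	for img in \
	  registry.k8s.io/kube-apiserver:v1.31.0 \
	  registry.k8s.io/etcd:3.5.15-0 \
	  registry.k8s.io/pause:3.10; do
	  out/minikube-darwin-arm64 -p no-preload-113000 image list --format=json \
	    | grep -qF "$img" || echo "missing: $img"
	done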
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-113000 -n no-preload-113000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-113000 -n no-preload-113000: exit status 7 (29.846292ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-113000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.07s)

                                                
                                    
TestStartStop/group/no-preload/serial/Pause (0.1s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p no-preload-113000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p no-preload-113000 --alsologtostderr -v=1: exit status 83 (40.576625ms)

                                                
                                                
-- stdout --
	* The control-plane node no-preload-113000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p no-preload-113000"

                                                
                                                
-- /stdout --
** stderr ** 
	I0819 11:57:12.669747   21020 out.go:345] Setting OutFile to fd 1 ...
	I0819 11:57:12.669898   21020 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 11:57:12.669907   21020 out.go:358] Setting ErrFile to fd 2...
	I0819 11:57:12.669910   21020 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 11:57:12.670029   21020 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19423-17178/.minikube/bin
	I0819 11:57:12.670254   21020 out.go:352] Setting JSON to false
	I0819 11:57:12.670261   21020 mustload.go:65] Loading cluster: no-preload-113000
	I0819 11:57:12.670462   21020 config.go:182] Loaded profile config "no-preload-113000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0819 11:57:12.674847   21020 out.go:177] * The control-plane node no-preload-113000 host is not running: state=Stopped
	I0819 11:57:12.678873   21020 out.go:177]   To start a cluster, run: "minikube start -p no-preload-113000"

                                                
                                                
** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p no-preload-113000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-113000 -n no-preload-113000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-113000 -n no-preload-113000: exit status 7 (29.545875ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-113000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-113000 -n no-preload-113000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-113000 -n no-preload-113000: exit status 7 (29.100416ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-113000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/Pause (0.10s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (9.83s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p default-k8s-diff-port-954000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.31.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p default-k8s-diff-port-954000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.31.0: exit status 80 (9.765152459s)

                                                
                                                
-- stdout --
	* [default-k8s-diff-port-954000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19423
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19423-17178/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19423-17178/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "default-k8s-diff-port-954000" primary control-plane node in "default-k8s-diff-port-954000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "default-k8s-diff-port-954000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0819 11:57:13.100078   21044 out.go:345] Setting OutFile to fd 1 ...
	I0819 11:57:13.100224   21044 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 11:57:13.100227   21044 out.go:358] Setting ErrFile to fd 2...
	I0819 11:57:13.100230   21044 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 11:57:13.100367   21044 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19423-17178/.minikube/bin
	I0819 11:57:13.101401   21044 out.go:352] Setting JSON to false
	I0819 11:57:13.117541   21044 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":8800,"bootTime":1724085033,"procs":507,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0819 11:57:13.117609   21044 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0819 11:57:13.122859   21044 out.go:177] * [default-k8s-diff-port-954000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0819 11:57:13.129853   21044 out.go:177]   - MINIKUBE_LOCATION=19423
	I0819 11:57:13.129895   21044 notify.go:220] Checking for updates...
	I0819 11:57:13.136889   21044 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19423-17178/kubeconfig
	I0819 11:57:13.140791   21044 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0819 11:57:13.143832   21044 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0819 11:57:13.146891   21044 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19423-17178/.minikube
	I0819 11:57:13.149822   21044 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0819 11:57:13.153137   21044 config.go:182] Loaded profile config "embed-certs-475000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0819 11:57:13.153201   21044 config.go:182] Loaded profile config "multinode-587000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0819 11:57:13.153248   21044 driver.go:394] Setting default libvirt URI to qemu:///system
	I0819 11:57:13.157819   21044 out.go:177] * Using the qemu2 driver based on user configuration
	I0819 11:57:13.164853   21044 start.go:297] selected driver: qemu2
	I0819 11:57:13.164860   21044 start.go:901] validating driver "qemu2" against <nil>
	I0819 11:57:13.164868   21044 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0819 11:57:13.167263   21044 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0819 11:57:13.169858   21044 out.go:177] * Automatically selected the socket_vmnet network
	I0819 11:57:13.172820   21044 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0819 11:57:13.172855   21044 cni.go:84] Creating CNI manager for ""
	I0819 11:57:13.172865   21044 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0819 11:57:13.172872   21044 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0819 11:57:13.172903   21044 start.go:340] cluster config:
	{Name:default-k8s-diff-port-954000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:default-k8s-diff-port-954000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 11:57:13.176636   21044 iso.go:125] acquiring lock: {Name:mk19d2f9dcacbbe3e95275f970a96b2d84f09461 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 11:57:13.184670   21044 out.go:177] * Starting "default-k8s-diff-port-954000" primary control-plane node in "default-k8s-diff-port-954000" cluster
	I0819 11:57:13.188767   21044 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0819 11:57:13.188782   21044 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19423-17178/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0819 11:57:13.188790   21044 cache.go:56] Caching tarball of preloaded images
	I0819 11:57:13.188857   21044 preload.go:172] Found /Users/jenkins/minikube-integration/19423-17178/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0819 11:57:13.188863   21044 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0819 11:57:13.188916   21044 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19423-17178/.minikube/profiles/default-k8s-diff-port-954000/config.json ...
	I0819 11:57:13.188928   21044 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19423-17178/.minikube/profiles/default-k8s-diff-port-954000/config.json: {Name:mk2b2bf6745470073a1d02a90e57d4634f3277ee Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 11:57:13.189171   21044 start.go:360] acquireMachinesLock for default-k8s-diff-port-954000: {Name:mk51682daf9d132f21e1aba1b32ed96e7f05425f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 11:57:13.189210   21044 start.go:364] duration metric: took 30.458µs to acquireMachinesLock for "default-k8s-diff-port-954000"
	I0819 11:57:13.189222   21044 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-954000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:default-k8s-diff-port-954000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0819 11:57:13.189262   21044 start.go:125] createHost starting for "" (driver="qemu2")
	I0819 11:57:13.196846   21044 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0819 11:57:13.215221   21044 start.go:159] libmachine.API.Create for "default-k8s-diff-port-954000" (driver="qemu2")
	I0819 11:57:13.215245   21044 client.go:168] LocalClient.Create starting
	I0819 11:57:13.215314   21044 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19423-17178/.minikube/certs/ca.pem
	I0819 11:57:13.215346   21044 main.go:141] libmachine: Decoding PEM data...
	I0819 11:57:13.215354   21044 main.go:141] libmachine: Parsing certificate...
	I0819 11:57:13.215389   21044 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19423-17178/.minikube/certs/cert.pem
	I0819 11:57:13.215413   21044 main.go:141] libmachine: Decoding PEM data...
	I0819 11:57:13.215422   21044 main.go:141] libmachine: Parsing certificate...
	I0819 11:57:13.215862   21044 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19423-17178/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19423-17178/.minikube/cache/iso/arm64/minikube-v1.33.1-1723740674-19452-arm64.iso...
	I0819 11:57:13.371795   21044 main.go:141] libmachine: Creating SSH key...
	I0819 11:57:13.398602   21044 main.go:141] libmachine: Creating Disk image...
	I0819 11:57:13.398608   21044 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0819 11:57:13.398837   21044 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/default-k8s-diff-port-954000/disk.qcow2.raw /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/default-k8s-diff-port-954000/disk.qcow2
	I0819 11:57:13.408249   21044 main.go:141] libmachine: STDOUT: 
	I0819 11:57:13.408268   21044 main.go:141] libmachine: STDERR: 
	I0819 11:57:13.408321   21044 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/default-k8s-diff-port-954000/disk.qcow2 +20000M
	I0819 11:57:13.416411   21044 main.go:141] libmachine: STDOUT: Image resized.
	
	I0819 11:57:13.416429   21044 main.go:141] libmachine: STDERR: 
	I0819 11:57:13.416454   21044 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/default-k8s-diff-port-954000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/default-k8s-diff-port-954000/disk.qcow2
	I0819 11:57:13.416460   21044 main.go:141] libmachine: Starting QEMU VM...
	I0819 11:57:13.416470   21044 qemu.go:418] Using hvf for hardware acceleration
	I0819 11:57:13.416500   21044 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/default-k8s-diff-port-954000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19423-17178/.minikube/machines/default-k8s-diff-port-954000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/default-k8s-diff-port-954000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ce:d3:f9:07:32:c6 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/default-k8s-diff-port-954000/disk.qcow2
	I0819 11:57:13.418102   21044 main.go:141] libmachine: STDOUT: 
	I0819 11:57:13.418116   21044 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0819 11:57:13.418135   21044 client.go:171] duration metric: took 202.888458ms to LocalClient.Create
	I0819 11:57:15.420276   21044 start.go:128] duration metric: took 2.23103375s to createHost
	I0819 11:57:15.420348   21044 start.go:83] releasing machines lock for "default-k8s-diff-port-954000", held for 2.231175417s
	W0819 11:57:15.420440   21044 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0819 11:57:15.427676   21044 out.go:177] * Deleting "default-k8s-diff-port-954000" in qemu2 ...
	W0819 11:57:15.456682   21044 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0819 11:57:15.456705   21044 start.go:729] Will try again in 5 seconds ...
	I0819 11:57:20.458863   21044 start.go:360] acquireMachinesLock for default-k8s-diff-port-954000: {Name:mk51682daf9d132f21e1aba1b32ed96e7f05425f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 11:57:20.459441   21044 start.go:364] duration metric: took 445.666µs to acquireMachinesLock for "default-k8s-diff-port-954000"
	I0819 11:57:20.459605   21044 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-954000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:default-k8s-diff-port-954000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0819 11:57:20.459907   21044 start.go:125] createHost starting for "" (driver="qemu2")
	I0819 11:57:20.465567   21044 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0819 11:57:20.516530   21044 start.go:159] libmachine.API.Create for "default-k8s-diff-port-954000" (driver="qemu2")
	I0819 11:57:20.516579   21044 client.go:168] LocalClient.Create starting
	I0819 11:57:20.516703   21044 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19423-17178/.minikube/certs/ca.pem
	I0819 11:57:20.516778   21044 main.go:141] libmachine: Decoding PEM data...
	I0819 11:57:20.516801   21044 main.go:141] libmachine: Parsing certificate...
	I0819 11:57:20.516880   21044 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19423-17178/.minikube/certs/cert.pem
	I0819 11:57:20.516924   21044 main.go:141] libmachine: Decoding PEM data...
	I0819 11:57:20.516938   21044 main.go:141] libmachine: Parsing certificate...
	I0819 11:57:20.517457   21044 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19423-17178/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19423-17178/.minikube/cache/iso/arm64/minikube-v1.33.1-1723740674-19452-arm64.iso...
	I0819 11:57:20.684333   21044 main.go:141] libmachine: Creating SSH key...
	I0819 11:57:20.763177   21044 main.go:141] libmachine: Creating Disk image...
	I0819 11:57:20.763187   21044 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0819 11:57:20.763374   21044 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/default-k8s-diff-port-954000/disk.qcow2.raw /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/default-k8s-diff-port-954000/disk.qcow2
	I0819 11:57:20.772682   21044 main.go:141] libmachine: STDOUT: 
	I0819 11:57:20.772702   21044 main.go:141] libmachine: STDERR: 
	I0819 11:57:20.772749   21044 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/default-k8s-diff-port-954000/disk.qcow2 +20000M
	I0819 11:57:20.780688   21044 main.go:141] libmachine: STDOUT: Image resized.
	
	I0819 11:57:20.780705   21044 main.go:141] libmachine: STDERR: 
	I0819 11:57:20.780720   21044 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/default-k8s-diff-port-954000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/default-k8s-diff-port-954000/disk.qcow2
	I0819 11:57:20.780725   21044 main.go:141] libmachine: Starting QEMU VM...
	I0819 11:57:20.780734   21044 qemu.go:418] Using hvf for hardware acceleration
	I0819 11:57:20.780759   21044 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/default-k8s-diff-port-954000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19423-17178/.minikube/machines/default-k8s-diff-port-954000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/default-k8s-diff-port-954000/qemu.pid -device virtio-net-pci,netdev=net0,mac=52:e1:11:29:dd:9c -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/default-k8s-diff-port-954000/disk.qcow2
	I0819 11:57:20.782410   21044 main.go:141] libmachine: STDOUT: 
	I0819 11:57:20.782426   21044 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0819 11:57:20.782448   21044 client.go:171] duration metric: took 265.868125ms to LocalClient.Create
	I0819 11:57:22.784588   21044 start.go:128] duration metric: took 2.324702s to createHost
	I0819 11:57:22.784697   21044 start.go:83] releasing machines lock for "default-k8s-diff-port-954000", held for 2.32527925s
	W0819 11:57:22.785033   21044 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-954000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0819 11:57:22.804611   21044 out.go:201] 
	W0819 11:57:22.811615   21044 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0819 11:57:22.811697   21044 out.go:270] * 
	W0819 11:57:22.814240   21044 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0819 11:57:22.822040   21044 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p default-k8s-diff-port-954000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.31.0": exit status 80
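For context on the launch pattern visible in the stderr above: minikube does not invoke QEMU directly. It wraps it in socket_vmnet_client, which connects to the daemon's UNIX socket and passes the connected socket into QEMU as file descriptor 3, which "-netdev socket,id=net0,fd=3" then uses for the VM's NIC. A trimmed sketch of that invocation, abridged from the log (the trailing disk, ISO, QMP, and pidfile arguments are elided):

	/opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet \
	  qemu-system-aarch64 \
	    -M virt,highmem=off -cpu host -accel hvf \
	    -m 2200 -smp 2 \
	    -device virtio-net-pci,netdev=net0 \
	    -netdev socket,id=net0,fd=3 \
	    ...
	# The "Connection refused" is socket_vmnet_client failing its connect() to
	# /var/run/socket_vmnet before QEMU is ever started.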
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-954000 -n default-k8s-diff-port-954000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-954000 -n default-k8s-diff-port-954000: exit status 7 (63.986125ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-954000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (9.83s)

                                                
                                    
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (0.03s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "embed-certs-475000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-475000 -n embed-certs-475000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-475000 -n embed-certs-475000: exit status 7 (32.505ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-475000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (0.03s)

                                                
                                    
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (0.06s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "embed-certs-475000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-475000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context embed-certs-475000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (26.096208ms)

                                                
                                                
** stderr ** 
	error: context "embed-certs-475000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context embed-certs-475000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-475000 -n embed-certs-475000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-475000 -n embed-certs-475000: exit status 7 (28.639542ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-475000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (0.06s)

                                                
                                    
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.07s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p embed-certs-475000 image list --format=json
start_stop_delete_test.go:304: v1.31.0 images missing (-want +got):
  []string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.1",
- 	"registry.k8s.io/etcd:3.5.15-0",
- 	"registry.k8s.io/kube-apiserver:v1.31.0",
- 	"registry.k8s.io/kube-controller-manager:v1.31.0",
- 	"registry.k8s.io/kube-proxy:v1.31.0",
- 	"registry.k8s.io/kube-scheduler:v1.31.0",
- 	"registry.k8s.io/pause:3.10",
  }
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-475000 -n embed-certs-475000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-475000 -n embed-certs-475000: exit status 7 (29.280584ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-475000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.07s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Pause (0.1s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p embed-certs-475000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p embed-certs-475000 --alsologtostderr -v=1: exit status 83 (40.991292ms)

                                                
                                                
-- stdout --
	* The control-plane node embed-certs-475000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p embed-certs-475000"

                                                
                                                
-- /stdout --
** stderr ** 
	I0819 11:57:16.818304   21066 out.go:345] Setting OutFile to fd 1 ...
	I0819 11:57:16.818455   21066 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 11:57:16.818458   21066 out.go:358] Setting ErrFile to fd 2...
	I0819 11:57:16.818461   21066 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 11:57:16.818587   21066 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19423-17178/.minikube/bin
	I0819 11:57:16.818812   21066 out.go:352] Setting JSON to false
	I0819 11:57:16.818820   21066 mustload.go:65] Loading cluster: embed-certs-475000
	I0819 11:57:16.819003   21066 config.go:182] Loaded profile config "embed-certs-475000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0819 11:57:16.823693   21066 out.go:177] * The control-plane node embed-certs-475000 host is not running: state=Stopped
	I0819 11:57:16.827555   21066 out.go:177]   To start a cluster, run: "minikube start -p embed-certs-475000"

                                                
                                                
** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p embed-certs-475000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-475000 -n embed-certs-475000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-475000 -n embed-certs-475000: exit status 7 (29.20975ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-475000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-475000 -n embed-certs-475000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-475000 -n embed-certs-475000: exit status 7 (29.577083ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-475000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/Pause (0.10s)

                                                
                                    
TestStartStop/group/newest-cni/serial/FirstStart (9.92s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p newest-cni-761000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p newest-cni-761000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.0: exit status 80 (9.850892125s)

                                                
                                                
-- stdout --
	* [newest-cni-761000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19423
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19423-17178/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19423-17178/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "newest-cni-761000" primary control-plane node in "newest-cni-761000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "newest-cni-761000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0819 11:57:17.135101   21083 out.go:345] Setting OutFile to fd 1 ...
	I0819 11:57:17.135222   21083 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 11:57:17.135225   21083 out.go:358] Setting ErrFile to fd 2...
	I0819 11:57:17.135228   21083 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 11:57:17.135361   21083 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19423-17178/.minikube/bin
	I0819 11:57:17.136451   21083 out.go:352] Setting JSON to false
	I0819 11:57:17.152662   21083 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":8804,"bootTime":1724085033,"procs":507,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0819 11:57:17.152736   21083 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0819 11:57:17.157559   21083 out.go:177] * [newest-cni-761000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0819 11:57:17.164382   21083 out.go:177]   - MINIKUBE_LOCATION=19423
	I0819 11:57:17.164431   21083 notify.go:220] Checking for updates...
	I0819 11:57:17.170596   21083 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19423-17178/kubeconfig
	I0819 11:57:17.172081   21083 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0819 11:57:17.175511   21083 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0819 11:57:17.178584   21083 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19423-17178/.minikube
	I0819 11:57:17.181577   21083 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0819 11:57:17.184961   21083 config.go:182] Loaded profile config "default-k8s-diff-port-954000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0819 11:57:17.185024   21083 config.go:182] Loaded profile config "multinode-587000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0819 11:57:17.185070   21083 driver.go:394] Setting default libvirt URI to qemu:///system
	I0819 11:57:17.189535   21083 out.go:177] * Using the qemu2 driver based on user configuration
	I0819 11:57:17.196536   21083 start.go:297] selected driver: qemu2
	I0819 11:57:17.196542   21083 start.go:901] validating driver "qemu2" against <nil>
	I0819 11:57:17.196548   21083 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0819 11:57:17.198909   21083 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	W0819 11:57:17.198931   21083 out.go:270] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I0819 11:57:17.202577   21083 out.go:177] * Automatically selected the socket_vmnet network
	I0819 11:57:17.209601   21083 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0819 11:57:17.209618   21083 cni.go:84] Creating CNI manager for ""
	I0819 11:57:17.209625   21083 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0819 11:57:17.209634   21083 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0819 11:57:17.209657   21083 start.go:340] cluster config:
	{Name:newest-cni-761000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:newest-cni-761000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 11:57:17.213442   21083 iso.go:125] acquiring lock: {Name:mk19d2f9dcacbbe3e95275f970a96b2d84f09461 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 11:57:17.221524   21083 out.go:177] * Starting "newest-cni-761000" primary control-plane node in "newest-cni-761000" cluster
	I0819 11:57:17.225544   21083 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0819 11:57:17.225558   21083 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19423-17178/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0819 11:57:17.225568   21083 cache.go:56] Caching tarball of preloaded images
	I0819 11:57:17.225638   21083 preload.go:172] Found /Users/jenkins/minikube-integration/19423-17178/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0819 11:57:17.225644   21083 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0819 11:57:17.225712   21083 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19423-17178/.minikube/profiles/newest-cni-761000/config.json ...
	I0819 11:57:17.225728   21083 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19423-17178/.minikube/profiles/newest-cni-761000/config.json: {Name:mked0d5937cf77d0006d291398f5e7a4fa624866 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 11:57:17.225971   21083 start.go:360] acquireMachinesLock for newest-cni-761000: {Name:mk51682daf9d132f21e1aba1b32ed96e7f05425f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 11:57:17.226006   21083 start.go:364] duration metric: took 29.875µs to acquireMachinesLock for "newest-cni-761000"
	I0819 11:57:17.226019   21083 start.go:93] Provisioning new machine with config: &{Name:newest-cni-761000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:newest-cni-761000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0819 11:57:17.226054   21083 start.go:125] createHost starting for "" (driver="qemu2")
	I0819 11:57:17.233533   21083 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0819 11:57:17.251465   21083 start.go:159] libmachine.API.Create for "newest-cni-761000" (driver="qemu2")
	I0819 11:57:17.251489   21083 client.go:168] LocalClient.Create starting
	I0819 11:57:17.251555   21083 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19423-17178/.minikube/certs/ca.pem
	I0819 11:57:17.251589   21083 main.go:141] libmachine: Decoding PEM data...
	I0819 11:57:17.251602   21083 main.go:141] libmachine: Parsing certificate...
	I0819 11:57:17.251643   21083 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19423-17178/.minikube/certs/cert.pem
	I0819 11:57:17.251666   21083 main.go:141] libmachine: Decoding PEM data...
	I0819 11:57:17.251675   21083 main.go:141] libmachine: Parsing certificate...
	I0819 11:57:17.252033   21083 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19423-17178/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19423-17178/.minikube/cache/iso/arm64/minikube-v1.33.1-1723740674-19452-arm64.iso...
	I0819 11:57:17.407680   21083 main.go:141] libmachine: Creating SSH key...
	I0819 11:57:17.451400   21083 main.go:141] libmachine: Creating Disk image...
	I0819 11:57:17.451405   21083 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0819 11:57:17.451621   21083 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/newest-cni-761000/disk.qcow2.raw /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/newest-cni-761000/disk.qcow2
	I0819 11:57:17.460796   21083 main.go:141] libmachine: STDOUT: 
	I0819 11:57:17.460818   21083 main.go:141] libmachine: STDERR: 
	I0819 11:57:17.460856   21083 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/newest-cni-761000/disk.qcow2 +20000M
	I0819 11:57:17.468600   21083 main.go:141] libmachine: STDOUT: Image resized.
	
	I0819 11:57:17.468615   21083 main.go:141] libmachine: STDERR: 
	I0819 11:57:17.468634   21083 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/newest-cni-761000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/newest-cni-761000/disk.qcow2
	I0819 11:57:17.468638   21083 main.go:141] libmachine: Starting QEMU VM...
	I0819 11:57:17.468651   21083 qemu.go:418] Using hvf for hardware acceleration
	I0819 11:57:17.468681   21083 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/newest-cni-761000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19423-17178/.minikube/machines/newest-cni-761000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/newest-cni-761000/qemu.pid -device virtio-net-pci,netdev=net0,mac=7a:b0:40:5f:1e:f3 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/newest-cni-761000/disk.qcow2
	I0819 11:57:17.470248   21083 main.go:141] libmachine: STDOUT: 
	I0819 11:57:17.470270   21083 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0819 11:57:17.470290   21083 client.go:171] duration metric: took 218.801625ms to LocalClient.Create
	I0819 11:57:19.472466   21083 start.go:128] duration metric: took 2.246430625s to createHost
	I0819 11:57:19.472570   21083 start.go:83] releasing machines lock for "newest-cni-761000", held for 2.246600917s
	W0819 11:57:19.472624   21083 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0819 11:57:19.479558   21083 out.go:177] * Deleting "newest-cni-761000" in qemu2 ...
	W0819 11:57:19.515911   21083 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0819 11:57:19.515941   21083 start.go:729] Will try again in 5 seconds ...
	I0819 11:57:24.517265   21083 start.go:360] acquireMachinesLock for newest-cni-761000: {Name:mk51682daf9d132f21e1aba1b32ed96e7f05425f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 11:57:24.517828   21083 start.go:364] duration metric: took 352.125µs to acquireMachinesLock for "newest-cni-761000"
	I0819 11:57:24.518027   21083 start.go:93] Provisioning new machine with config: &{Name:newest-cni-761000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:newest-cni-761000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0819 11:57:24.518333   21083 start.go:125] createHost starting for "" (driver="qemu2")
	I0819 11:57:24.527891   21083 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0819 11:57:24.578496   21083 start.go:159] libmachine.API.Create for "newest-cni-761000" (driver="qemu2")
	I0819 11:57:24.578541   21083 client.go:168] LocalClient.Create starting
	I0819 11:57:24.578648   21083 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19423-17178/.minikube/certs/ca.pem
	I0819 11:57:24.578702   21083 main.go:141] libmachine: Decoding PEM data...
	I0819 11:57:24.578721   21083 main.go:141] libmachine: Parsing certificate...
	I0819 11:57:24.578787   21083 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19423-17178/.minikube/certs/cert.pem
	I0819 11:57:24.578818   21083 main.go:141] libmachine: Decoding PEM data...
	I0819 11:57:24.578830   21083 main.go:141] libmachine: Parsing certificate...
	I0819 11:57:24.579489   21083 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19423-17178/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19423-17178/.minikube/cache/iso/arm64/minikube-v1.33.1-1723740674-19452-arm64.iso...
	I0819 11:57:24.739359   21083 main.go:141] libmachine: Creating SSH key...
	I0819 11:57:24.895396   21083 main.go:141] libmachine: Creating Disk image...
	I0819 11:57:24.895402   21083 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0819 11:57:24.895648   21083 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/newest-cni-761000/disk.qcow2.raw /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/newest-cni-761000/disk.qcow2
	I0819 11:57:24.905222   21083 main.go:141] libmachine: STDOUT: 
	I0819 11:57:24.905238   21083 main.go:141] libmachine: STDERR: 
	I0819 11:57:24.905291   21083 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/newest-cni-761000/disk.qcow2 +20000M
	I0819 11:57:24.913099   21083 main.go:141] libmachine: STDOUT: Image resized.
	
	I0819 11:57:24.913114   21083 main.go:141] libmachine: STDERR: 
	I0819 11:57:24.913122   21083 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/newest-cni-761000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/newest-cni-761000/disk.qcow2
	I0819 11:57:24.913125   21083 main.go:141] libmachine: Starting QEMU VM...
	I0819 11:57:24.913143   21083 qemu.go:418] Using hvf for hardware acceleration
	I0819 11:57:24.913170   21083 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/newest-cni-761000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19423-17178/.minikube/machines/newest-cni-761000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/newest-cni-761000/qemu.pid -device virtio-net-pci,netdev=net0,mac=e6:6b:cf:ed:2f:70 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/newest-cni-761000/disk.qcow2
	I0819 11:57:24.914688   21083 main.go:141] libmachine: STDOUT: 
	I0819 11:57:24.914704   21083 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0819 11:57:24.914717   21083 client.go:171] duration metric: took 336.176875ms to LocalClient.Create
	I0819 11:57:26.916823   21083 start.go:128] duration metric: took 2.398514416s to createHost
	I0819 11:57:26.916877   21083 start.go:83] releasing machines lock for "newest-cni-761000", held for 2.399054709s
	W0819 11:57:26.917134   21083 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p newest-cni-761000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p newest-cni-761000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0819 11:57:26.927614   21083 out.go:201] 
	W0819 11:57:26.931690   21083 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0819 11:57:26.931705   21083 out.go:270] * 
	* 
	W0819 11:57:26.933085   21083 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0819 11:57:26.942144   21083 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p newest-cni-761000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-761000 -n newest-cni-761000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-761000 -n newest-cni-761000: exit status 7 (65.361542ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-761000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/FirstStart (9.92s)
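
Both create attempts in this test fail at the same step: socket_vmnet_client cannot reach the daemon socket at /var/run/socket_vmnet, so QEMU is never launched and minikube exits with GUEST_PROVISION after one retry. A minimal sketch for checking the daemon on the affected host follows; only the socket and client paths are taken from the log, while the Homebrew service management is an assumption about how socket_vmnet was installed:

	ls -l /var/run/socket_vmnet                # does the socket exist at the path minikube uses?
	sudo brew services info socket_vmnet       # assumes a Homebrew-managed install; adjust to the actual setup
	sudo brew services restart socket_vmnet    # restart the daemon if it is loaded but not listening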

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (0.09s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-954000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-954000 create -f testdata/busybox.yaml: exit status 1 (29.479958ms)

                                                
                                                
** stderr ** 
	error: context "default-k8s-diff-port-954000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:196: kubectl --context default-k8s-diff-port-954000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-954000 -n default-k8s-diff-port-954000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-954000 -n default-k8s-diff-port-954000: exit status 7 (28.583ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-954000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-954000 -n default-k8s-diff-port-954000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-954000 -n default-k8s-diff-port-954000: exit status 7 (28.947292ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-954000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (0.09s)
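
This failure is a cascade from the start failures above: the default-k8s-diff-port-954000 VM never booted, so no matching context was written to the kubeconfig, and every kubectl --context call exits 1 with "context does not exist". This can be confirmed directly against the kubeconfig used by this run (the path appears elsewhere in the log):

	kubectl --kubeconfig /Users/jenkins/minikube-integration/19423-17178/kubeconfig config get-contexts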

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.11s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p default-k8s-diff-port-954000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-954000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-954000 describe deploy/metrics-server -n kube-system: exit status 1 (27.169416ms)

                                                
                                                
** stderr ** 
	error: context "default-k8s-diff-port-954000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context default-k8s-diff-port-954000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-954000 -n default-k8s-diff-port-954000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-954000 -n default-k8s-diff-port-954000: exit status 7 (28.680583ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-954000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.11s)
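
Note that the addons enable invocation itself is not what fails here: no non-zero exit is logged for it, since the enable is recorded against the stopped profile. The subsequent kubectl describe then hits the same missing context as DeployApp, so "addon did not load correct image" reflects an unreachable cluster, not a wrong image. One way to inspect what was recorded for the profile, assuming the standard addons subcommand works against a stopped profile:

	out/minikube-darwin-arm64 -p default-k8s-diff-port-954000 addons list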

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (5.27s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p default-k8s-diff-port-954000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.31.0
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p default-k8s-diff-port-954000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.31.0: exit status 80 (5.197665958s)

                                                
                                                
-- stdout --
	* [default-k8s-diff-port-954000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19423
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19423-17178/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19423-17178/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "default-k8s-diff-port-954000" primary control-plane node in "default-k8s-diff-port-954000" cluster
	* Restarting existing qemu2 VM for "default-k8s-diff-port-954000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "default-k8s-diff-port-954000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0819 11:57:27.148176   21142 out.go:345] Setting OutFile to fd 1 ...
	I0819 11:57:27.148296   21142 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 11:57:27.148301   21142 out.go:358] Setting ErrFile to fd 2...
	I0819 11:57:27.148303   21142 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 11:57:27.148428   21142 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19423-17178/.minikube/bin
	I0819 11:57:27.151787   21142 out.go:352] Setting JSON to false
	I0819 11:57:27.168493   21142 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":8814,"bootTime":1724085033,"procs":508,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0819 11:57:27.168559   21142 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0819 11:57:27.172526   21142 out.go:177] * [default-k8s-diff-port-954000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0819 11:57:27.179595   21142 out.go:177]   - MINIKUBE_LOCATION=19423
	I0819 11:57:27.179634   21142 notify.go:220] Checking for updates...
	I0819 11:57:27.186521   21142 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19423-17178/kubeconfig
	I0819 11:57:27.189586   21142 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0819 11:57:27.192615   21142 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0819 11:57:27.195640   21142 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19423-17178/.minikube
	I0819 11:57:27.202579   21142 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0819 11:57:27.207063   21142 config.go:182] Loaded profile config "default-k8s-diff-port-954000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0819 11:57:27.207361   21142 driver.go:394] Setting default libvirt URI to qemu:///system
	I0819 11:57:27.211542   21142 out.go:177] * Using the qemu2 driver based on existing profile
	I0819 11:57:27.217577   21142 start.go:297] selected driver: qemu2
	I0819 11:57:27.217586   21142 start.go:901] validating driver "qemu2" against &{Name:default-k8s-diff-port-954000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:default-k8s-diff-port-954000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 11:57:27.217663   21142 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0819 11:57:27.220149   21142 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0819 11:57:27.220202   21142 cni.go:84] Creating CNI manager for ""
	I0819 11:57:27.220221   21142 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0819 11:57:27.220257   21142 start.go:340] cluster config:
	{Name:default-k8s-diff-port-954000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:default-k8s-diff-port-954000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 11:57:27.223870   21142 iso.go:125] acquiring lock: {Name:mk19d2f9dcacbbe3e95275f970a96b2d84f09461 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 11:57:27.231609   21142 out.go:177] * Starting "default-k8s-diff-port-954000" primary control-plane node in "default-k8s-diff-port-954000" cluster
	I0819 11:57:27.235620   21142 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0819 11:57:27.235636   21142 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19423-17178/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0819 11:57:27.235647   21142 cache.go:56] Caching tarball of preloaded images
	I0819 11:57:27.235704   21142 preload.go:172] Found /Users/jenkins/minikube-integration/19423-17178/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0819 11:57:27.235709   21142 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0819 11:57:27.235787   21142 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19423-17178/.minikube/profiles/default-k8s-diff-port-954000/config.json ...
	I0819 11:57:27.236259   21142 start.go:360] acquireMachinesLock for default-k8s-diff-port-954000: {Name:mk51682daf9d132f21e1aba1b32ed96e7f05425f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 11:57:27.236287   21142 start.go:364] duration metric: took 22.292µs to acquireMachinesLock for "default-k8s-diff-port-954000"
	I0819 11:57:27.236297   21142 start.go:96] Skipping create...Using existing machine configuration
	I0819 11:57:27.236303   21142 fix.go:54] fixHost starting: 
	I0819 11:57:27.236427   21142 fix.go:112] recreateIfNeeded on default-k8s-diff-port-954000: state=Stopped err=<nil>
	W0819 11:57:27.236436   21142 fix.go:138] unexpected machine state, will restart: <nil>
	I0819 11:57:27.240602   21142 out.go:177] * Restarting existing qemu2 VM for "default-k8s-diff-port-954000" ...
	I0819 11:57:27.248499   21142 qemu.go:418] Using hvf for hardware acceleration
	I0819 11:57:27.248535   21142 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/default-k8s-diff-port-954000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19423-17178/.minikube/machines/default-k8s-diff-port-954000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/default-k8s-diff-port-954000/qemu.pid -device virtio-net-pci,netdev=net0,mac=52:e1:11:29:dd:9c -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/default-k8s-diff-port-954000/disk.qcow2
	I0819 11:57:27.250610   21142 main.go:141] libmachine: STDOUT: 
	I0819 11:57:27.250632   21142 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0819 11:57:27.250664   21142 fix.go:56] duration metric: took 14.361166ms for fixHost
	I0819 11:57:27.250671   21142 start.go:83] releasing machines lock for "default-k8s-diff-port-954000", held for 14.379542ms
	W0819 11:57:27.250678   21142 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0819 11:57:27.250724   21142 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0819 11:57:27.250729   21142 start.go:729] Will try again in 5 seconds ...
	I0819 11:57:32.251819   21142 start.go:360] acquireMachinesLock for default-k8s-diff-port-954000: {Name:mk51682daf9d132f21e1aba1b32ed96e7f05425f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 11:57:32.252303   21142 start.go:364] duration metric: took 352.708µs to acquireMachinesLock for "default-k8s-diff-port-954000"
	I0819 11:57:32.252556   21142 start.go:96] Skipping create...Using existing machine configuration
	I0819 11:57:32.252578   21142 fix.go:54] fixHost starting: 
	I0819 11:57:32.253448   21142 fix.go:112] recreateIfNeeded on default-k8s-diff-port-954000: state=Stopped err=<nil>
	W0819 11:57:32.253479   21142 fix.go:138] unexpected machine state, will restart: <nil>
	I0819 11:57:32.262825   21142 out.go:177] * Restarting existing qemu2 VM for "default-k8s-diff-port-954000" ...
	I0819 11:57:32.266972   21142 qemu.go:418] Using hvf for hardware acceleration
	I0819 11:57:32.267219   21142 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/default-k8s-diff-port-954000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19423-17178/.minikube/machines/default-k8s-diff-port-954000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/default-k8s-diff-port-954000/qemu.pid -device virtio-net-pci,netdev=net0,mac=52:e1:11:29:dd:9c -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/default-k8s-diff-port-954000/disk.qcow2
	I0819 11:57:32.276003   21142 main.go:141] libmachine: STDOUT: 
	I0819 11:57:32.276063   21142 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0819 11:57:32.276137   21142 fix.go:56] duration metric: took 23.559834ms for fixHost
	I0819 11:57:32.276161   21142 start.go:83] releasing machines lock for "default-k8s-diff-port-954000", held for 23.721833ms
	W0819 11:57:32.276324   21142 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-954000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-954000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0819 11:57:32.282998   21142 out.go:201] 
	W0819 11:57:32.287071   21142 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0819 11:57:32.287096   21142 out.go:270] * 
	* 
	W0819 11:57:32.289851   21142 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0819 11:57:32.300104   21142 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p default-k8s-diff-port-954000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.31.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-954000 -n default-k8s-diff-port-954000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-954000 -n default-k8s-diff-port-954000: exit status 7 (66.404625ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-954000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (5.27s)
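
The restart path fails identically to the fresh-create path in FirstStart, which isolates the problem to the host networking helper rather than per-profile state: fixHost finds the VM Stopped, rebuilds the same socket_vmnet_client command line, and gets "Connection refused" before QEMU ever starts. The failing step can be reproduced by hand outside minikube; the echo payload below is an arbitrary placeholder, on the assumption that socket_vmnet_client connects to the socket before exec'ing the given command:

	/opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet echo connected

On this host the command should fail immediately with the same Failed to connect to "/var/run/socket_vmnet": Connection refused.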

                                                
                                    
TestStartStop/group/newest-cni/serial/SecondStart (5.26s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p newest-cni-761000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.0
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p newest-cni-761000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.0: exit status 80 (5.185519708s)

                                                
                                                
-- stdout --
	* [newest-cni-761000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19423
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19423-17178/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19423-17178/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "newest-cni-761000" primary control-plane node in "newest-cni-761000" cluster
	* Restarting existing qemu2 VM for "newest-cni-761000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "newest-cni-761000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0819 11:57:31.205238   21172 out.go:345] Setting OutFile to fd 1 ...
	I0819 11:57:31.205358   21172 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 11:57:31.205368   21172 out.go:358] Setting ErrFile to fd 2...
	I0819 11:57:31.205372   21172 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 11:57:31.205502   21172 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19423-17178/.minikube/bin
	I0819 11:57:31.206504   21172 out.go:352] Setting JSON to false
	I0819 11:57:31.223608   21172 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":8818,"bootTime":1724085033,"procs":507,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0819 11:57:31.223679   21172 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0819 11:57:31.228455   21172 out.go:177] * [newest-cni-761000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0819 11:57:31.235438   21172 out.go:177]   - MINIKUBE_LOCATION=19423
	I0819 11:57:31.235503   21172 notify.go:220] Checking for updates...
	I0819 11:57:31.242429   21172 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19423-17178/kubeconfig
	I0819 11:57:31.245408   21172 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0819 11:57:31.248411   21172 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0819 11:57:31.251396   21172 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19423-17178/.minikube
	I0819 11:57:31.254399   21172 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0819 11:57:31.257674   21172 config.go:182] Loaded profile config "newest-cni-761000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0819 11:57:31.257976   21172 driver.go:394] Setting default libvirt URI to qemu:///system
	I0819 11:57:31.262276   21172 out.go:177] * Using the qemu2 driver based on existing profile
	I0819 11:57:31.269397   21172 start.go:297] selected driver: qemu2
	I0819 11:57:31.269405   21172 start.go:901] validating driver "qemu2" against &{Name:newest-cni-761000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:newest-cni-761000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 11:57:31.269472   21172 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0819 11:57:31.271845   21172 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0819 11:57:31.271886   21172 cni.go:84] Creating CNI manager for ""
	I0819 11:57:31.271893   21172 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0819 11:57:31.271919   21172 start.go:340] cluster config:
	{Name:newest-cni-761000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:newest-cni-761000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 11:57:31.275544   21172 iso.go:125] acquiring lock: {Name:mk19d2f9dcacbbe3e95275f970a96b2d84f09461 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 11:57:31.284320   21172 out.go:177] * Starting "newest-cni-761000" primary control-plane node in "newest-cni-761000" cluster
	I0819 11:57:31.287333   21172 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0819 11:57:31.287347   21172 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19423-17178/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0819 11:57:31.287352   21172 cache.go:56] Caching tarball of preloaded images
	I0819 11:57:31.287401   21172 preload.go:172] Found /Users/jenkins/minikube-integration/19423-17178/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0819 11:57:31.287406   21172 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0819 11:57:31.287463   21172 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19423-17178/.minikube/profiles/newest-cni-761000/config.json ...
	I0819 11:57:31.287875   21172 start.go:360] acquireMachinesLock for newest-cni-761000: {Name:mk51682daf9d132f21e1aba1b32ed96e7f05425f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 11:57:31.287902   21172 start.go:364] duration metric: took 21.375µs to acquireMachinesLock for "newest-cni-761000"
	I0819 11:57:31.287911   21172 start.go:96] Skipping create...Using existing machine configuration
	I0819 11:57:31.287915   21172 fix.go:54] fixHost starting: 
	I0819 11:57:31.288040   21172 fix.go:112] recreateIfNeeded on newest-cni-761000: state=Stopped err=<nil>
	W0819 11:57:31.288049   21172 fix.go:138] unexpected machine state, will restart: <nil>
	I0819 11:57:31.292388   21172 out.go:177] * Restarting existing qemu2 VM for "newest-cni-761000" ...
	I0819 11:57:31.299413   21172 qemu.go:418] Using hvf for hardware acceleration
	I0819 11:57:31.299453   21172 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/newest-cni-761000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19423-17178/.minikube/machines/newest-cni-761000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/newest-cni-761000/qemu.pid -device virtio-net-pci,netdev=net0,mac=e6:6b:cf:ed:2f:70 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/newest-cni-761000/disk.qcow2
	I0819 11:57:31.301557   21172 main.go:141] libmachine: STDOUT: 
	I0819 11:57:31.301575   21172 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0819 11:57:31.301596   21172 fix.go:56] duration metric: took 13.680334ms for fixHost
	I0819 11:57:31.301600   21172 start.go:83] releasing machines lock for "newest-cni-761000", held for 13.694375ms
	W0819 11:57:31.301607   21172 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0819 11:57:31.301638   21172 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0819 11:57:31.301643   21172 start.go:729] Will try again in 5 seconds ...
	I0819 11:57:36.303773   21172 start.go:360] acquireMachinesLock for newest-cni-761000: {Name:mk51682daf9d132f21e1aba1b32ed96e7f05425f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0819 11:57:36.304322   21172 start.go:364] duration metric: took 443.167µs to acquireMachinesLock for "newest-cni-761000"
	I0819 11:57:36.304475   21172 start.go:96] Skipping create...Using existing machine configuration
	I0819 11:57:36.304497   21172 fix.go:54] fixHost starting: 
	I0819 11:57:36.305267   21172 fix.go:112] recreateIfNeeded on newest-cni-761000: state=Stopped err=<nil>
	W0819 11:57:36.305294   21172 fix.go:138] unexpected machine state, will restart: <nil>
	I0819 11:57:36.310687   21172 out.go:177] * Restarting existing qemu2 VM for "newest-cni-761000" ...
	I0819 11:57:36.316683   21172 qemu.go:418] Using hvf for hardware acceleration
	I0819 11:57:36.317020   21172 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/newest-cni-761000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19423-17178/.minikube/machines/newest-cni-761000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/newest-cni-761000/qemu.pid -device virtio-net-pci,netdev=net0,mac=e6:6b:cf:ed:2f:70 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19423-17178/.minikube/machines/newest-cni-761000/disk.qcow2
	I0819 11:57:36.326763   21172 main.go:141] libmachine: STDOUT: 
	I0819 11:57:36.326837   21172 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0819 11:57:36.326938   21172 fix.go:56] duration metric: took 22.439ms for fixHost
	I0819 11:57:36.326959   21172 start.go:83] releasing machines lock for "newest-cni-761000", held for 22.6105ms
	W0819 11:57:36.327123   21172 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p newest-cni-761000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p newest-cni-761000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0819 11:57:36.335658   21172 out.go:201] 
	W0819 11:57:36.338815   21172 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0819 11:57:36.338840   21172 out.go:270] * 
	* 
	W0819 11:57:36.341484   21172 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0819 11:57:36.349522   21172 out.go:201] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p newest-cni-761000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.0": exit status 80
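Every qemu2 start failure in this run, including the one above, fails before QEMU is even launched: socket_vmnet_client cannot reach the unix socket at /var/run/socket_vmnet. A minimal Go probe (a sketch, independent of minikube; the socket path is taken from the log above) reproduces just that check:

	package main

	import (
		"fmt"
		"net"
	)

	func main() {
		// Dial the unix socket that socket_vmnet_client needs; "connection
		// refused" here reproduces the driver-start failure without minikube
		// or QEMU involved at all.
		conn, err := net.Dial("unix", "/var/run/socket_vmnet")
		if err != nil {
			fmt.Println("socket_vmnet unreachable:", err)
			return
		}
		conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}

If the dial fails, the socket_vmnet daemon is not running (or not listening at that path) on the CI host, which would account for every "Failed to connect" line in this report.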
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-761000 -n newest-cni-761000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-761000 -n newest-cni-761000: exit status 7 (69.82225ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-761000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/SecondStart (5.26s)

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (0.03s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "default-k8s-diff-port-954000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-954000 -n default-k8s-diff-port-954000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-954000 -n default-k8s-diff-port-954000: exit status 7 (31.725584ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-954000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (0.03s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (0.06s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "default-k8s-diff-port-954000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-954000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-954000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (26.852875ms)

** stderr ** 
	error: context "default-k8s-diff-port-954000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context default-k8s-diff-port-954000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
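The two assertion failures above are a cascade from the earlier start failure: the profile's kubeconfig context was never recreated, so every kubectl call against it exits immediately. A sketch of the same pre-check in Go, using k8s.io/client-go (the context name comes from the log; the program is illustrative, not part of the test suite):

	package main

	import (
		"fmt"

		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Load the default kubeconfig chain (the same one kubectl consults)
		// and check whether the profile's context survived the stop/start.
		cfg, err := clientcmd.NewDefaultClientConfigLoadingRules().Load()
		if err != nil {
			panic(err)
		}
		_, ok := cfg.Contexts["default-k8s-diff-port-954000"]
		fmt.Println("context present:", ok) // false in this run
	}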
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-954000 -n default-k8s-diff-port-954000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-954000 -n default-k8s-diff-port-954000: exit status 7 (28.887542ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-954000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (0.06s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.07s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p default-k8s-diff-port-954000 image list --format=json
start_stop_delete_test.go:304: v1.31.0 images missing (-want +got):
  []string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.1",
- 	"registry.k8s.io/etcd:3.5.15-0",
- 	"registry.k8s.io/kube-apiserver:v1.31.0",
- 	"registry.k8s.io/kube-controller-manager:v1.31.0",
- 	"registry.k8s.io/kube-proxy:v1.31.0",
- 	"registry.k8s.io/kube-scheduler:v1.31.0",
- 	"registry.k8s.io/pause:3.10",
  }
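The diff above follows go-cmp's -want +got convention: lines prefixed with "-" are expected entries missing from the actual output. Here the entire expected image list is missing because image list ran against a stopped VM. A minimal sketch of how such a diff is produced (hypothetical want/got values):

	package main

	import (
		"fmt"

		"github.com/google/go-cmp/cmp"
	)

	func main() {
		want := []string{"registry.k8s.io/pause:3.10", "registry.k8s.io/etcd:3.5.15-0"}
		got := []string{} // a stopped VM reports no images
		// "-" lines are in want only; "+" lines would be in got only.
		fmt.Println(cmp.Diff(want, got))
	}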
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-954000 -n default-k8s-diff-port-954000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-954000 -n default-k8s-diff-port-954000: exit status 7 (28.832875ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-954000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.07s)

TestStartStop/group/default-k8s-diff-port/serial/Pause (0.1s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p default-k8s-diff-port-954000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p default-k8s-diff-port-954000 --alsologtostderr -v=1: exit status 83 (40.718916ms)

-- stdout --
	* The control-plane node default-k8s-diff-port-954000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p default-k8s-diff-port-954000"

-- /stdout --
** stderr ** 
	I0819 11:57:32.565460   21191 out.go:345] Setting OutFile to fd 1 ...
	I0819 11:57:32.565605   21191 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 11:57:32.565608   21191 out.go:358] Setting ErrFile to fd 2...
	I0819 11:57:32.565611   21191 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 11:57:32.565736   21191 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19423-17178/.minikube/bin
	I0819 11:57:32.565948   21191 out.go:352] Setting JSON to false
	I0819 11:57:32.565959   21191 mustload.go:65] Loading cluster: default-k8s-diff-port-954000
	I0819 11:57:32.566142   21191 config.go:182] Loaded profile config "default-k8s-diff-port-954000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0819 11:57:32.570421   21191 out.go:177] * The control-plane node default-k8s-diff-port-954000 host is not running: state=Stopped
	I0819 11:57:32.574419   21191 out.go:177]   To start a cluster, run: "minikube start -p default-k8s-diff-port-954000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p default-k8s-diff-port-954000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-954000 -n default-k8s-diff-port-954000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-954000 -n default-k8s-diff-port-954000: exit status 7 (28.58925ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-954000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-954000 -n default-k8s-diff-port-954000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-954000 -n default-k8s-diff-port-954000: exit status 7 (29.778833ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-954000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Pause (0.10s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.07s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p newest-cni-761000 image list --format=json
start_stop_delete_test.go:304: v1.31.0 images missing (-want +got):
  []string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.1",
- 	"registry.k8s.io/etcd:3.5.15-0",
- 	"registry.k8s.io/kube-apiserver:v1.31.0",
- 	"registry.k8s.io/kube-controller-manager:v1.31.0",
- 	"registry.k8s.io/kube-proxy:v1.31.0",
- 	"registry.k8s.io/kube-scheduler:v1.31.0",
- 	"registry.k8s.io/pause:3.10",
  }
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-761000 -n newest-cni-761000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-761000 -n newest-cni-761000: exit status 7 (30.867292ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-761000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.07s)

TestStartStop/group/newest-cni/serial/Pause (0.1s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p newest-cni-761000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p newest-cni-761000 --alsologtostderr -v=1: exit status 83 (41.652958ms)

-- stdout --
	* The control-plane node newest-cni-761000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p newest-cni-761000"

-- /stdout --
** stderr ** 
	I0819 11:57:36.537255   21215 out.go:345] Setting OutFile to fd 1 ...
	I0819 11:57:36.537408   21215 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 11:57:36.537411   21215 out.go:358] Setting ErrFile to fd 2...
	I0819 11:57:36.537414   21215 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 11:57:36.537546   21215 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19423-17178/.minikube/bin
	I0819 11:57:36.537774   21215 out.go:352] Setting JSON to false
	I0819 11:57:36.537781   21215 mustload.go:65] Loading cluster: newest-cni-761000
	I0819 11:57:36.537962   21215 config.go:182] Loaded profile config "newest-cni-761000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0819 11:57:36.542130   21215 out.go:177] * The control-plane node newest-cni-761000 host is not running: state=Stopped
	I0819 11:57:36.546079   21215 out.go:177]   To start a cluster, run: "minikube start -p newest-cni-761000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p newest-cni-761000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-761000 -n newest-cni-761000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-761000 -n newest-cni-761000: exit status 7 (29.635083ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-761000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-761000 -n newest-cni-761000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-761000 -n newest-cni-761000: exit status 7 (30.692625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-761000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/Pause (0.10s)

Test pass (80/258)

Order passed test Duration
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.14
9 TestDownloadOnly/v1.20.0/DeleteAll 0.13
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.11
12 TestDownloadOnly/v1.31.0/json-events 7.13
13 TestDownloadOnly/v1.31.0/preload-exists 0
16 TestDownloadOnly/v1.31.0/kubectl 0
17 TestDownloadOnly/v1.31.0/LogsDuration 0.14
18 TestDownloadOnly/v1.31.0/DeleteAll 0.14
19 TestDownloadOnly/v1.31.0/DeleteAlwaysSucceeds 0.11
21 TestBinaryMirror 0.44
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.07
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.07
35 TestHyperKitDriverInstallOrUpdate 10.51
39 TestErrorSpam/start 0.38
40 TestErrorSpam/status 0.09
41 TestErrorSpam/pause 0.12
42 TestErrorSpam/unpause 0.12
43 TestErrorSpam/stop 5.92
46 TestFunctional/serial/CopySyncFile 0
48 TestFunctional/serial/AuditLog 0
54 TestFunctional/serial/CacheCmd/cache/add_remote 1.87
55 TestFunctional/serial/CacheCmd/cache/add_local 1.05
56 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.04
57 TestFunctional/serial/CacheCmd/cache/list 0.03
60 TestFunctional/serial/CacheCmd/cache/delete 0.07
69 TestFunctional/parallel/ConfigCmd 0.22
71 TestFunctional/parallel/DryRun 0.23
72 TestFunctional/parallel/InternationalLanguage 0.11
78 TestFunctional/parallel/AddonsCmd 0.09
93 TestFunctional/parallel/License 0.26
100 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
110 TestFunctional/parallel/ProfileCmd/profile_not_create 0.09
111 TestFunctional/parallel/ProfileCmd/profile_list 0.08
112 TestFunctional/parallel/ProfileCmd/profile_json_output 0.08
116 TestFunctional/parallel/Version/short 0.04
123 TestFunctional/parallel/ImageCommands/Setup 1.76
128 TestFunctional/parallel/ImageCommands/ImageRemove 0.07
130 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.07
132 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 10.04
134 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.16
135 TestFunctional/delete_echo-server_images 0.07
136 TestFunctional/delete_my-image_image 0.02
137 TestFunctional/delete_minikube_cached_images 0.02
166 TestJSONOutput/start/Audit 0
168 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
169 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
172 TestJSONOutput/pause/Audit 0
174 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
175 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
178 TestJSONOutput/unpause/Audit 0
180 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
181 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
183 TestJSONOutput/stop/Command 3.15
184 TestJSONOutput/stop/Audit 0
186 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
187 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
188 TestErrorJSONOutput 0.21
193 TestMainNoArgs 0.03
240 TestStoppedBinaryUpgrade/Setup 1.14
252 TestNoKubernetes/serial/StartNoK8sWithVersion 0.1
256 TestNoKubernetes/serial/VerifyK8sNotRunning 0.04
257 TestNoKubernetes/serial/ProfileList 31.54
258 TestNoKubernetes/serial/Stop 3.69
260 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.04
269 TestStoppedBinaryUpgrade/MinikubeLogs 0.71
275 TestStartStop/group/old-k8s-version/serial/Stop 1.9
276 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.12
288 TestStartStop/group/no-preload/serial/Stop 1.85
289 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.12
293 TestStartStop/group/embed-certs/serial/Stop 3.6
294 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.12
310 TestStartStop/group/default-k8s-diff-port/serial/Stop 3.88
311 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.13
312 TestStartStop/group/newest-cni/serial/DeployApp 0
313 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.08
314 TestStartStop/group/newest-cni/serial/Stop 3.95
316 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.12
322 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
323 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
TestDownloadOnly/v1.20.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.20.0/preload-exists
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

TestDownloadOnly/v1.20.0/LogsDuration (0.14s)

=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 logs -p download-only-927000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 logs -p download-only-927000: exit status 85 (139.278375ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-927000 | jenkins | v1.33.1 | 19 Aug 24 11:31 PDT |          |
	|         | -p download-only-927000        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=qemu2                 |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/19 11:31:15
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.22.5 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0819 11:31:15.901494   17656 out.go:345] Setting OutFile to fd 1 ...
	I0819 11:31:15.901657   17656 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 11:31:15.901661   17656 out.go:358] Setting ErrFile to fd 2...
	I0819 11:31:15.901663   17656 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 11:31:15.901792   17656 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19423-17178/.minikube/bin
	W0819 11:31:15.901892   17656 root.go:314] Error reading config file at /Users/jenkins/minikube-integration/19423-17178/.minikube/config/config.json: open /Users/jenkins/minikube-integration/19423-17178/.minikube/config/config.json: no such file or directory
	I0819 11:31:15.903242   17656 out.go:352] Setting JSON to true
	I0819 11:31:15.921312   17656 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":7242,"bootTime":1724085033,"procs":481,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0819 11:31:15.921386   17656 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0819 11:31:15.926530   17656 out.go:97] [download-only-927000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0819 11:31:15.926651   17656 notify.go:220] Checking for updates...
	W0819 11:31:15.926704   17656 preload.go:293] Failed to list preload files: open /Users/jenkins/minikube-integration/19423-17178/.minikube/cache/preloaded-tarball: no such file or directory
	I0819 11:31:15.930937   17656 out.go:169] MINIKUBE_LOCATION=19423
	I0819 11:31:15.942542   17656 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/19423-17178/kubeconfig
	I0819 11:31:15.946504   17656 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0819 11:31:15.950504   17656 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0819 11:31:15.953563   17656 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/19423-17178/.minikube
	W0819 11:31:15.959487   17656 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0819 11:31:15.959680   17656 driver.go:394] Setting default libvirt URI to qemu:///system
	I0819 11:31:15.963518   17656 out.go:97] Using the qemu2 driver based on user configuration
	I0819 11:31:15.963537   17656 start.go:297] selected driver: qemu2
	I0819 11:31:15.963551   17656 start.go:901] validating driver "qemu2" against <nil>
	I0819 11:31:15.963623   17656 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0819 11:31:15.966520   17656 out.go:169] Automatically selected the socket_vmnet network
	I0819 11:31:15.972768   17656 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0819 11:31:15.972882   17656 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0819 11:31:15.972953   17656 cni.go:84] Creating CNI manager for ""
	I0819 11:31:15.972972   17656 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0819 11:31:15.973029   17656 start.go:340] cluster config:
	{Name:download-only-927000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-927000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 11:31:15.977217   17656 iso.go:125] acquiring lock: {Name:mk19d2f9dcacbbe3e95275f970a96b2d84f09461 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 11:31:15.982040   17656 out.go:97] Downloading VM boot image ...
	I0819 11:31:15.982067   17656 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso.sha256 -> /Users/jenkins/minikube-integration/19423-17178/.minikube/cache/iso/arm64/minikube-v1.33.1-1723740674-19452-arm64.iso
	I0819 11:31:21.440537   17656 out.go:97] Starting "download-only-927000" primary control-plane node in "download-only-927000" cluster
	I0819 11:31:21.440557   17656 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0819 11:31:21.502017   17656 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0819 11:31:21.502037   17656 cache.go:56] Caching tarball of preloaded images
	I0819 11:31:21.502445   17656 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0819 11:31:21.507279   17656 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0819 11:31:21.507286   17656 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0819 11:31:21.595304   17656 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4?checksum=md5:1a3e8f9b29e6affec63d76d0d3000942 -> /Users/jenkins/minikube-integration/19423-17178/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0819 11:31:27.305607   17656 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0819 11:31:27.305771   17656 preload.go:254] verifying checksum of /Users/jenkins/minikube-integration/19423-17178/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0819 11:31:28.016452   17656 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0819 11:31:28.016651   17656 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19423-17178/.minikube/profiles/download-only-927000/config.json ...
	I0819 11:31:28.016667   17656 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19423-17178/.minikube/profiles/download-only-927000/config.json: {Name:mk1b90f843dc74d3542d212ada55937598e4262b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 11:31:28.017094   17656 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0819 11:31:28.017282   17656 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 -> /Users/jenkins/minikube-integration/19423-17178/.minikube/cache/darwin/arm64/v1.20.0/kubectl
	I0819 11:31:28.504505   17656 out.go:193] 
	W0819 11:31:28.510764   17656 out_reason.go:110] Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256: getter: &{Ctx:context.Background Src:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/19423-17178/.minikube/cache/darwin/arm64/v1.20.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x104a039a0 0x104a039a0 0x104a039a0 0x104a039a0 0x104a039a0 0x104a039a0 0x104a039a0] Decompressors:map[bz2:0x140003e1920 gz:0x140003e1928 tar:0x140003e18d0 tar.bz2:0x140003e18e0 tar.gz:0x140003e18f0 tar.xz:0x140003e1900 tar.zst:0x140003e1910 tbz2:0x140003e18e0 tgz:0x140003e18f0 txz:0x140003e1900 tzst:0x140003e1910 xz:0x140003e1930 zip:0x140003e1940 zst:0x140003e1938] Getters:map[file:0x1400176a610 http:0x140001722d0 https:0x14000172320] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404
	W0819 11:31:28.510807   17656 out_reason.go:110] 
	W0819 11:31:28.521708   17656 out.go:283] ╭───────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                           │
	│    If the above advice does not help, please let us know:                                 │
	│    https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                           │
	│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────╯
	I0819 11:31:28.526505   17656 out.go:193] 
	
	
	* The control-plane node download-only-927000 host does not exist
	  To start a cluster, run: "minikube start -p download-only-927000"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
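The root cause visible in the log above is the 404 on the kubectl checksum URL, which is consistent with upstream never having published darwin/arm64 binaries for v1.20.0 (an assumption based on the persistent 404, not something the log states). A quick reproduction in Go:

	package main

	import (
		"fmt"
		"net/http"
	)

	func main() {
		// HEAD the checksum URL from the log; the 404 status is what
		// go-getter surfaces as "bad response code: 404".
		url := "https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256"
		resp, err := http.Head(url)
		if err != nil {
			panic(err)
		}
		resp.Body.Close()
		fmt.Println(resp.Status) // expected: 404 Not Found
	}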
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.14s)

TestDownloadOnly/v1.20.0/DeleteAll (0.13s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-arm64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.13s)

TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.11s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-arm64 delete -p download-only-927000
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.11s)

TestDownloadOnly/v1.31.0/json-events (7.13s)

=== RUN   TestDownloadOnly/v1.31.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -o=json --download-only -p download-only-333000 --force --alsologtostderr --kubernetes-version=v1.31.0 --container-runtime=docker --driver=qemu2 
aaa_download_only_test.go:81: (dbg) Done: out/minikube-darwin-arm64 start -o=json --download-only -p download-only-333000 --force --alsologtostderr --kubernetes-version=v1.31.0 --container-runtime=docker --driver=qemu2 : (7.130789125s)
--- PASS: TestDownloadOnly/v1.31.0/json-events (7.13s)

TestDownloadOnly/v1.31.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.31.0/preload-exists
--- PASS: TestDownloadOnly/v1.31.0/preload-exists (0.00s)

TestDownloadOnly/v1.31.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.31.0/kubectl
--- PASS: TestDownloadOnly/v1.31.0/kubectl (0.00s)

TestDownloadOnly/v1.31.0/LogsDuration (0.14s)

=== RUN   TestDownloadOnly/v1.31.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 logs -p download-only-333000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 logs -p download-only-333000: exit status 85 (135.809917ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-927000 | jenkins | v1.33.1 | 19 Aug 24 11:31 PDT |                     |
	|         | -p download-only-927000        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=qemu2                 |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.33.1 | 19 Aug 24 11:31 PDT | 19 Aug 24 11:31 PDT |
	| delete  | -p download-only-927000        | download-only-927000 | jenkins | v1.33.1 | 19 Aug 24 11:31 PDT | 19 Aug 24 11:31 PDT |
	| start   | -o=json --download-only        | download-only-333000 | jenkins | v1.33.1 | 19 Aug 24 11:31 PDT |                     |
	|         | -p download-only-333000        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=qemu2                 |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/19 11:31:29
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.22.5 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0819 11:31:29.018511   17713 out.go:345] Setting OutFile to fd 1 ...
	I0819 11:31:29.018715   17713 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 11:31:29.018756   17713 out.go:358] Setting ErrFile to fd 2...
	I0819 11:31:29.018762   17713 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 11:31:29.018891   17713 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19423-17178/.minikube/bin
	I0819 11:31:29.020219   17713 out.go:352] Setting JSON to true
	I0819 11:31:29.039370   17713 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":7256,"bootTime":1724085033,"procs":512,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0819 11:31:29.039445   17713 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0819 11:31:29.044315   17713 out.go:97] [download-only-333000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0819 11:31:29.044390   17713 notify.go:220] Checking for updates...
	I0819 11:31:29.048406   17713 out.go:169] MINIKUBE_LOCATION=19423
	I0819 11:31:29.057450   17713 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/19423-17178/kubeconfig
	I0819 11:31:29.062458   17713 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0819 11:31:29.066438   17713 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0819 11:31:29.070407   17713 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/19423-17178/.minikube
	W0819 11:31:29.078012   17713 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0819 11:31:29.078185   17713 driver.go:394] Setting default libvirt URI to qemu:///system
	I0819 11:31:29.082425   17713 out.go:97] Using the qemu2 driver based on user configuration
	I0819 11:31:29.082433   17713 start.go:297] selected driver: qemu2
	I0819 11:31:29.082437   17713 start.go:901] validating driver "qemu2" against <nil>
	I0819 11:31:29.082476   17713 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0819 11:31:29.086090   17713 out.go:169] Automatically selected the socket_vmnet network
	I0819 11:31:29.093061   17713 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0819 11:31:29.093176   17713 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0819 11:31:29.093215   17713 cni.go:84] Creating CNI manager for ""
	I0819 11:31:29.093232   17713 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0819 11:31:29.093239   17713 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0819 11:31:29.093287   17713 start.go:340] cluster config:
	{Name:download-only-333000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:download-only-333000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 11:31:29.097232   17713 iso.go:125] acquiring lock: {Name:mk19d2f9dcacbbe3e95275f970a96b2d84f09461 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 11:31:29.102750   17713 out.go:97] Starting "download-only-333000" primary control-plane node in "download-only-333000" cluster
	I0819 11:31:29.102757   17713 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0819 11:31:29.161198   17713 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.0/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0819 11:31:29.161221   17713 cache.go:56] Caching tarball of preloaded images
	I0819 11:31:29.161442   17713 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0819 11:31:29.166636   17713 out.go:97] Downloading Kubernetes v1.31.0 preload ...
	I0819 11:31:29.166644   17713 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 ...
	I0819 11:31:29.254796   17713 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.0/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4?checksum=md5:90c22abece392b762c0b4e45be981bb4 -> /Users/jenkins/minikube-integration/19423-17178/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	
	
	* The control-plane node download-only-333000 host does not exist
	  To start a cluster, run: "minikube start -p download-only-333000"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.0/LogsDuration (0.14s)

TestDownloadOnly/v1.31.0/DeleteAll (0.14s)

=== RUN   TestDownloadOnly/v1.31.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-arm64 delete --all
--- PASS: TestDownloadOnly/v1.31.0/DeleteAll (0.14s)

TestDownloadOnly/v1.31.0/DeleteAlwaysSucceeds (0.11s)

=== RUN   TestDownloadOnly/v1.31.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-arm64 delete -p download-only-333000
--- PASS: TestDownloadOnly/v1.31.0/DeleteAlwaysSucceeds (0.11s)

TestBinaryMirror (0.44s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-darwin-arm64 start --download-only -p binary-mirror-275000 --alsologtostderr --binary-mirror http://127.0.0.1:52924 --driver=qemu2 
helpers_test.go:175: Cleaning up "binary-mirror-275000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p binary-mirror-275000
--- PASS: TestBinaryMirror (0.44s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1037: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p addons-698000
addons_test.go:1037: (dbg) Non-zero exit: out/minikube-darwin-arm64 addons enable dashboard -p addons-698000: exit status 85 (67.586875ms)

-- stdout --
	* Profile "addons-698000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-698000"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1048: (dbg) Run:  out/minikube-darwin-arm64 addons disable dashboard -p addons-698000
addons_test.go:1048: (dbg) Non-zero exit: out/minikube-darwin-arm64 addons disable dashboard -p addons-698000: exit status 85 (71.267792ms)

-- stdout --
	* Profile "addons-698000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-698000"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

TestHyperKitDriverInstallOrUpdate (10.51s)

=== RUN   TestHyperKitDriverInstallOrUpdate
=== PAUSE TestHyperKitDriverInstallOrUpdate

=== CONT  TestHyperKitDriverInstallOrUpdate
--- PASS: TestHyperKitDriverInstallOrUpdate (10.51s)

TestErrorSpam/start (0.38s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-555000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-555000 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-555000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-555000 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-555000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-555000 start --dry-run
--- PASS: TestErrorSpam/start (0.38s)

TestErrorSpam/status (0.09s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-555000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-555000 status
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-555000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-555000 status: exit status 7 (31.317209ms)

-- stdout --
	nospam-555000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
error_spam_test.go:161: "out/minikube-darwin-arm64 -p nospam-555000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-555000 status" failed: exit status 7
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-555000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-555000 status
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-555000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-555000 status: exit status 7 (30.006209ms)

-- stdout --
	nospam-555000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
error_spam_test.go:161: "out/minikube-darwin-arm64 -p nospam-555000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-555000 status" failed: exit status 7
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-555000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-555000 status
error_spam_test.go:182: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-555000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-555000 status: exit status 7 (30.088167ms)

-- stdout --
	nospam-555000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
error_spam_test.go:184: "out/minikube-darwin-arm64 -p nospam-555000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-555000 status" failed: exit status 7
--- PASS: TestErrorSpam/status (0.09s)
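
All three status calls above exit 7 rather than 0: minikube encodes "host stopped" in the exit code, so a caller shelling out to it has to treat 7 as a state, not a failure. A minimal Go sketch of that pattern (the helper name and hard-coded binary path are illustrative, not taken from the test suite):

	package main

	import (
		"errors"
		"fmt"
		"os/exec"
	)

	// clusterStopped reports whether "minikube status" signalled a stopped
	// host (exit status 7, as in the TestErrorSpam/status output above).
	func clusterStopped(profile string) (bool, error) {
		err := exec.Command("out/minikube-darwin-arm64", "-p", profile, "status").Run()
		if err == nil {
			return false, nil // exit 0: host running
		}
		var exitErr *exec.ExitError
		if errors.As(err, &exitErr) && exitErr.ExitCode() == 7 {
			return true, nil // exit 7: host/kubelet/apiserver stopped
		}
		return false, fmt.Errorf("minikube status: %w", err)
	}

	func main() {
		stopped, err := clusterStopped("nospam-555000")
		fmt.Println(stopped, err)
	}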

TestErrorSpam/pause (0.12s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-555000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-555000 pause
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-555000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-555000 pause: exit status 83 (38.596917ms)

-- stdout --
	* The control-plane node nospam-555000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p nospam-555000"

-- /stdout --
error_spam_test.go:161: "out/minikube-darwin-arm64 -p nospam-555000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-555000 pause" failed: exit status 83
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-555000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-555000 pause
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-555000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-555000 pause: exit status 83 (38.938083ms)

-- stdout --
	* The control-plane node nospam-555000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p nospam-555000"

-- /stdout --
error_spam_test.go:161: "out/minikube-darwin-arm64 -p nospam-555000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-555000 pause" failed: exit status 83
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-555000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-555000 pause
error_spam_test.go:182: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-555000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-555000 pause: exit status 83 (39.882625ms)

-- stdout --
	* The control-plane node nospam-555000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p nospam-555000"

-- /stdout --
error_spam_test.go:184: "out/minikube-darwin-arm64 -p nospam-555000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-555000 pause" failed: exit status 83
--- PASS: TestErrorSpam/pause (0.12s)

TestErrorSpam/unpause (0.12s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-555000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-555000 unpause
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-555000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-555000 unpause: exit status 83 (39.690667ms)

-- stdout --
	* The control-plane node nospam-555000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p nospam-555000"

-- /stdout --
error_spam_test.go:161: "out/minikube-darwin-arm64 -p nospam-555000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-555000 unpause" failed: exit status 83
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-555000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-555000 unpause
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-555000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-555000 unpause: exit status 83 (37.527083ms)

-- stdout --
	* The control-plane node nospam-555000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p nospam-555000"

-- /stdout --
error_spam_test.go:161: "out/minikube-darwin-arm64 -p nospam-555000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-555000 unpause" failed: exit status 83
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-555000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-555000 unpause
error_spam_test.go:182: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-555000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-555000 unpause: exit status 83 (39.139833ms)

-- stdout --
	* The control-plane node nospam-555000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p nospam-555000"

-- /stdout --
error_spam_test.go:184: "out/minikube-darwin-arm64 -p nospam-555000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-555000 unpause" failed: exit status 83
--- PASS: TestErrorSpam/unpause (0.12s)

TestErrorSpam/stop (5.92s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-555000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-555000 stop
error_spam_test.go:159: (dbg) Done: out/minikube-darwin-arm64 -p nospam-555000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-555000 stop: (1.899106583s)
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-555000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-555000 stop
error_spam_test.go:159: (dbg) Done: out/minikube-darwin-arm64 -p nospam-555000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-555000 stop: (2.119457417s)
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-555000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-555000 stop
error_spam_test.go:182: (dbg) Done: out/minikube-darwin-arm64 -p nospam-555000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-555000 stop: (1.895907875s)
--- PASS: TestErrorSpam/stop (5.92s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1855: local sync path: /Users/jenkins/minikube-integration/19423-17178/.minikube/files/etc/test/nested/copy/17654/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/CacheCmd/cache/add_remote (1.87s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1049: (dbg) Run:  out/minikube-darwin-arm64 -p functional-944000 cache add registry.k8s.io/pause:3.1
functional_test.go:1049: (dbg) Run:  out/minikube-darwin-arm64 -p functional-944000 cache add registry.k8s.io/pause:3.3
functional_test.go:1049: (dbg) Run:  out/minikube-darwin-arm64 -p functional-944000 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (1.87s)
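
The three cache add runs above differ only in the image tag, so a scripted warm-up can loop over them. A sketch (tags and binary path come from the log; the loop itself is illustrative, not the test's code):

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// Tags mirror the ones cached in the log above.
		images := []string{
			"registry.k8s.io/pause:3.1",
			"registry.k8s.io/pause:3.3",
			"registry.k8s.io/pause:latest",
		}
		for _, img := range images {
			cmd := exec.Command("out/minikube-darwin-arm64", "-p", "functional-944000", "cache", "add", img)
			if out, err := cmd.CombinedOutput(); err != nil {
				fmt.Printf("cache add %s: %v\n%s", img, err, out)
			}
		}
	}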

TestFunctional/serial/CacheCmd/cache/add_local (1.05s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1077: (dbg) Run:  docker build -t minikube-local-cache-test:functional-944000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalserialCacheCmdcacheadd_local3201653498/001
functional_test.go:1089: (dbg) Run:  out/minikube-darwin-arm64 -p functional-944000 cache add minikube-local-cache-test:functional-944000
functional_test.go:1094: (dbg) Run:  out/minikube-darwin-arm64 -p functional-944000 cache delete minikube-local-cache-test:functional-944000
functional_test.go:1083: (dbg) Run:  docker rmi minikube-local-cache-test:functional-944000
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.05s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1102: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

TestFunctional/serial/CacheCmd/cache/list (0.03s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1110: (dbg) Run:  out/minikube-darwin-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.03s)

TestFunctional/serial/CacheCmd/cache/delete (0.07s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1172: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1172: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.07s)

TestFunctional/parallel/ConfigCmd (0.22s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-arm64 -p functional-944000 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-arm64 -p functional-944000 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-944000 config get cpus: exit status 14 (31.690042ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-arm64 -p functional-944000 config set cpus 2
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-arm64 -p functional-944000 config get cpus
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-arm64 -p functional-944000 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-arm64 -p functional-944000 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-944000 config get cpus: exit status 14 (28.363375ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.22s)
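
The round-trip above pins down the config get contract: an unset key exits 14 with "Error: specified key could not be found in config" on stderr, while set/unset and a successful get exit 0. A sketch of the same sequence from Go (expected codes copied from the log; the driver loop is illustrative):

	package main

	import (
		"errors"
		"fmt"
		"os/exec"
	)

	func main() {
		steps := []struct {
			args     []string
			wantExit int
		}{
			{[]string{"config", "unset", "cpus"}, 0},
			{[]string{"config", "get", "cpus"}, 14}, // unset key
			{[]string{"config", "set", "cpus", "2"}, 0},
			{[]string{"config", "get", "cpus"}, 0},
			{[]string{"config", "unset", "cpus"}, 0},
			{[]string{"config", "get", "cpus"}, 14}, // unset again
		}
		for _, s := range steps {
			args := append([]string{"-p", "functional-944000"}, s.args...)
			err := exec.Command("out/minikube-darwin-arm64", args...).Run()
			got := 0
			var exitErr *exec.ExitError
			if errors.As(err, &exitErr) {
				got = exitErr.ExitCode()
			} else if err != nil {
				fmt.Println("could not run minikube:", err)
				continue
			}
			if got != s.wantExit {
				fmt.Printf("%v: exit %d, want %d\n", s.args, got, s.wantExit)
			}
		}
	}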

TestFunctional/parallel/DryRun (0.23s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:974: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-944000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 
functional_test.go:974: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-944000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 : exit status 23 (111.962792ms)

-- stdout --
	* [functional-944000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19423
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19423-17178/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19423-17178/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I0819 11:33:07.770924   18215 out.go:345] Setting OutFile to fd 1 ...
	I0819 11:33:07.771062   18215 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 11:33:07.771065   18215 out.go:358] Setting ErrFile to fd 2...
	I0819 11:33:07.771067   18215 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 11:33:07.771190   18215 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19423-17178/.minikube/bin
	I0819 11:33:07.772185   18215 out.go:352] Setting JSON to false
	I0819 11:33:07.788233   18215 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":7354,"bootTime":1724085033,"procs":507,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0819 11:33:07.788308   18215 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0819 11:33:07.792885   18215 out.go:177] * [functional-944000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0819 11:33:07.799916   18215 out.go:177]   - MINIKUBE_LOCATION=19423
	I0819 11:33:07.799949   18215 notify.go:220] Checking for updates...
	I0819 11:33:07.806879   18215 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19423-17178/kubeconfig
	I0819 11:33:07.809885   18215 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0819 11:33:07.812924   18215 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0819 11:33:07.814178   18215 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19423-17178/.minikube
	I0819 11:33:07.816861   18215 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0819 11:33:07.820138   18215 config.go:182] Loaded profile config "functional-944000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0819 11:33:07.820412   18215 driver.go:394] Setting default libvirt URI to qemu:///system
	I0819 11:33:07.824722   18215 out.go:177] * Using the qemu2 driver based on existing profile
	I0819 11:33:07.831891   18215 start.go:297] selected driver: qemu2
	I0819 11:33:07.831898   18215 start.go:901] validating driver "qemu2" against &{Name:functional-944000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:functional-944000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 11:33:07.831945   18215 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0819 11:33:07.836869   18215 out.go:201] 
	W0819 11:33:07.840819   18215 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0819 11:33:07.844917   18215 out.go:201] 

** /stderr **
functional_test.go:991: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-944000 --dry-run --alsologtostderr -v=1 --driver=qemu2 
--- PASS: TestFunctional/parallel/DryRun (0.23s)
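
The dry run fails validation before any VM work: 250MiB requested against an 1800MB floor yields RSRC_INSUFFICIENT_REQ_MEMORY and exit status 23. The check reduces to a simple comparison; a standalone sketch (constants taken from the log, names illustrative, not minikube's own code):

	package main

	import "fmt"

	const (
		requestedMiB = 250  // --memory 250MB in the dry run above
		minimumMiB   = 1800 // floor quoted in the error message
	)

	func main() {
		if requestedMiB < minimumMiB {
			// Mirrors RSRC_INSUFFICIENT_REQ_MEMORY / exit status 23 in the log.
			fmt.Printf("requested %dMiB is less than the usable minimum of %dMB\n",
				requestedMiB, minimumMiB)
		}
	}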

TestFunctional/parallel/InternationalLanguage (0.11s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1020: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-944000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 
functional_test.go:1020: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-944000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 : exit status 23 (108.484792ms)

-- stdout --
	* [functional-944000] minikube v1.33.1 sur Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19423
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19423-17178/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19423-17178/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote qemu2 basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I0819 11:33:07.657059   18211 out.go:345] Setting OutFile to fd 1 ...
	I0819 11:33:07.657180   18211 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 11:33:07.657183   18211 out.go:358] Setting ErrFile to fd 2...
	I0819 11:33:07.657186   18211 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 11:33:07.657318   18211 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19423-17178/.minikube/bin
	I0819 11:33:07.658761   18211 out.go:352] Setting JSON to false
	I0819 11:33:07.675858   18211 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":7354,"bootTime":1724085033,"procs":507,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0819 11:33:07.675952   18211 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0819 11:33:07.679875   18211 out.go:177] * [functional-944000] minikube v1.33.1 sur Darwin 14.5 (arm64)
	I0819 11:33:07.686892   18211 out.go:177]   - MINIKUBE_LOCATION=19423
	I0819 11:33:07.686930   18211 notify.go:220] Checking for updates...
	I0819 11:33:07.693968   18211 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19423-17178/kubeconfig
	I0819 11:33:07.696950   18211 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0819 11:33:07.699859   18211 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0819 11:33:07.702898   18211 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19423-17178/.minikube
	I0819 11:33:07.705964   18211 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0819 11:33:07.709145   18211 config.go:182] Loaded profile config "functional-944000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0819 11:33:07.709404   18211 driver.go:394] Setting default libvirt URI to qemu:///system
	I0819 11:33:07.713915   18211 out.go:177] * Utilisation du pilote qemu2 basé sur le profil existant
	I0819 11:33:07.720839   18211 start.go:297] selected driver: qemu2
	I0819 11:33:07.720851   18211 start.go:901] validating driver "qemu2" against &{Name:functional-944000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:functional-944000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 11:33:07.720915   18211 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0819 11:33:07.726009   18211 out.go:201] 
	W0819 11:33:07.729784   18211 out.go:270] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0819 11:33:07.733886   18211 out.go:201] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.11s)

TestFunctional/parallel/AddonsCmd (0.09s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1690: (dbg) Run:  out/minikube-darwin-arm64 -p functional-944000 addons list
functional_test.go:1702: (dbg) Run:  out/minikube-darwin-arm64 -p functional-944000 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.09s)

TestFunctional/parallel/License (0.26s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License
=== CONT  TestFunctional/parallel/License
functional_test.go:2288: (dbg) Run:  out/minikube-darwin-arm64 license
--- PASS: TestFunctional/parallel/License (0.26s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-944000 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.09s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1270: (dbg) Run:  out/minikube-darwin-arm64 profile lis
functional_test.go:1275: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.09s)

TestFunctional/parallel/ProfileCmd/profile_list (0.08s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1310: (dbg) Run:  out/minikube-darwin-arm64 profile list
functional_test.go:1315: Took "45.805291ms" to run "out/minikube-darwin-arm64 profile list"
functional_test.go:1324: (dbg) Run:  out/minikube-darwin-arm64 profile list -l
functional_test.go:1329: Took "34.8945ms" to run "out/minikube-darwin-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.08s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.08s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1361: (dbg) Run:  out/minikube-darwin-arm64 profile list -o json
functional_test.go:1366: Took "48.029958ms" to run "out/minikube-darwin-arm64 profile list -o json"
functional_test.go:1374: (dbg) Run:  out/minikube-darwin-arm64 profile list -o json --light
functional_test.go:1379: Took "33.62025ms" to run "out/minikube-darwin-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.08s)

TestFunctional/parallel/Version/short (0.04s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2256: (dbg) Run:  out/minikube-darwin-arm64 -p functional-944000 version --short
--- PASS: TestFunctional/parallel/Version/short (0.04s)

TestFunctional/parallel/ImageCommands/Setup (1.76s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:342: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:342: (dbg) Done: docker pull kicbase/echo-server:1.0: (1.727647s)
functional_test.go:347: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-944000
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.76s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.07s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:392: (dbg) Run:  out/minikube-darwin-arm64 -p functional-944000 image rm kicbase/echo-server:functional-944000 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-darwin-arm64 -p functional-944000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.07s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.07s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:419: (dbg) Run:  docker rmi kicbase/echo-server:functional-944000
functional_test.go:424: (dbg) Run:  out/minikube-darwin-arm64 -p functional-944000 image save --daemon kicbase/echo-server:functional-944000 --alsologtostderr
functional_test.go:432: (dbg) Run:  docker image inspect kicbase/echo-server:functional-944000
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.07s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (10.04s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:351: (dbg) Run:  dscacheutil -q host -a name nginx-svc.default.svc.cluster.local.
functional_test_tunnel_test.go:351: (dbg) Done: dscacheutil -q host -a name nginx-svc.default.svc.cluster.local.: (10.012214458s)
functional_test_tunnel_test.go:359: DNS resolution by dscacheutil for nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (10.04s)
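
This check resolves through the macOS dscacheutil resolver rather than Go's own, so it exercises the same system cache ordinary applications hit. A sketch of the same probe (the helper name and the "ip_address:" scan are assumptions about dscacheutil's plain-text output, not code from the test):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// resolvedByDscacheutil queries the macOS directory-services cache for
	// a hostname, as the tunnel test does above.
	func resolvedByDscacheutil(host string) (bool, error) {
		out, err := exec.Command("dscacheutil", "-q", "host", "-a", "name", host).Output()
		if err != nil {
			return false, err
		}
		// dscacheutil prints "ip_address: ..." lines for resolvable names
		// and nothing for unresolvable ones.
		return strings.Contains(string(out), "ip_address:"), nil
	}

	func main() {
		ok, err := resolvedByDscacheutil("nginx-svc.default.svc.cluster.local.")
		fmt.Println(ok, err)
	}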

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.16s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-darwin-arm64 -p functional-944000 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.16s)

TestFunctional/delete_echo-server_images (0.07s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-944000
--- PASS: TestFunctional/delete_echo-server_images (0.07s)

TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:198: (dbg) Run:  docker rmi -f localhost/my-image:functional-944000
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:206: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-944000
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (3.15s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 stop -p json-output-716000 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-darwin-arm64 stop -p json-output-716000 --output=json --user=testUser: (3.149429584s)
--- PASS: TestJSONOutput/stop/Command (3.15s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.21s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-darwin-arm64 start -p json-output-error-103000 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p json-output-error-103000 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (96.039875ms)

-- stdout --
	{"specversion":"1.0","id":"cd614707-fbc8-4690-9199-9c743893e29d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-103000] minikube v1.33.1 on Darwin 14.5 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"d1ee3ea0-6830-4da9-92b5-ff9f21046394","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19423"}}
	{"specversion":"1.0","id":"971b77fe-37e7-4102-b2c6-d9b51d604c4e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/19423-17178/kubeconfig"}}
	{"specversion":"1.0","id":"1dee3b5b-f675-4a60-a674-ff34e7d03840","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"819630a5-163d-4cff-a200-38df71ba7976","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"bcb5b191-de5a-4838-89ab-cd58c892a0ea","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/19423-17178/.minikube"}}
	{"specversion":"1.0","id":"1d43b1fb-6241-4e8a-ac26-6395dfeff3fa","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"8c7eab11-13ab-4979-9748-aad6c16a9329","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on darwin/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-103000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p json-output-error-103000
--- PASS: TestErrorJSONOutput (0.21s)
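
Each --output=json line above is a self-contained CloudEvents envelope, so the error event can be picked out with a line-by-line decode. A minimal sketch (the struct covers only the fields visible in the transcript above):

	package main

	import (
		"bufio"
		"encoding/json"
		"fmt"
		"os"
	)

	// event models the CloudEvents lines minikube emits with --output=json,
	// keeping only the fields shown in the TestErrorJSONOutput stdout.
	type event struct {
		Type string `json:"type"`
		Data struct {
			Name     string `json:"name"`
			Message  string `json:"message"`
			ExitCode string `json:"exitcode"`
		} `json:"data"`
	}

	func main() {
		sc := bufio.NewScanner(os.Stdin) // pipe "minikube start ... --output=json" in
		for sc.Scan() {
			var e event
			if json.Unmarshal(sc.Bytes(), &e) != nil {
				continue // skip any non-JSON lines
			}
			if e.Type == "io.k8s.sigs.minikube.error" {
				fmt.Printf("error %s (exit %s): %s\n", e.Data.Name, e.Data.ExitCode, e.Data.Message)
			}
		}
	}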

TestMainNoArgs (0.03s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-darwin-arm64
--- PASS: TestMainNoArgs (0.03s)

TestStoppedBinaryUpgrade/Setup (1.14s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (1.14s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.1s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-441000 --no-kubernetes --kubernetes-version=1.20 --driver=qemu2 
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-441000 --no-kubernetes --kubernetes-version=1.20 --driver=qemu2 : exit status 14 (101.617917ms)

-- stdout --
	* [NoKubernetes-441000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19423
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19423-17178/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19423-17178/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.10s)
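
The usage failure above comes from an up-front flag-compatibility check: --no-kubernetes and --kubernetes-version are mutually exclusive, and MK_USAGE surfaces as exit status 14. An illustrative re-creation of that guard (not minikube's actual implementation):

	package main

	import (
		"flag"
		"fmt"
		"os"
	)

	func main() {
		noK8s := flag.Bool("no-kubernetes", false, "start without Kubernetes")
		k8sVersion := flag.String("kubernetes-version", "", "Kubernetes version to use")
		flag.Parse()
		if *noK8s && *k8sVersion != "" {
			fmt.Fprintln(os.Stderr, "cannot specify --kubernetes-version with --no-kubernetes")
			os.Exit(14) // the MK_USAGE exit code seen above
		}
	}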

TestNoKubernetes/serial/VerifyK8sNotRunning (0.04s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-arm64 ssh -p NoKubernetes-441000 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p NoKubernetes-441000 "sudo systemctl is-active --quiet service kubelet": exit status 83 (44.621917ms)

-- stdout --
	* The control-plane node NoKubernetes-441000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p NoKubernetes-441000"

-- /stdout --
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.04s)

TestNoKubernetes/serial/ProfileList (31.54s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-darwin-arm64 profile list
no_kubernetes_test.go:169: (dbg) Done: out/minikube-darwin-arm64 profile list: (15.747888584s)
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-darwin-arm64 profile list --output=json
no_kubernetes_test.go:179: (dbg) Done: out/minikube-darwin-arm64 profile list --output=json: (15.791667208s)
--- PASS: TestNoKubernetes/serial/ProfileList (31.54s)

TestNoKubernetes/serial/Stop (3.69s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-darwin-arm64 stop -p NoKubernetes-441000
no_kubernetes_test.go:158: (dbg) Done: out/minikube-darwin-arm64 stop -p NoKubernetes-441000: (3.6932375s)
--- PASS: TestNoKubernetes/serial/Stop (3.69s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.04s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-arm64 ssh -p NoKubernetes-441000 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p NoKubernetes-441000 "sudo systemctl is-active --quiet service kubelet": exit status 83 (42.607ms)

-- stdout --
	* The control-plane node NoKubernetes-441000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p NoKubernetes-441000"

-- /stdout --
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.04s)

TestStoppedBinaryUpgrade/MinikubeLogs (0.71s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-darwin-arm64 logs -p stopped-upgrade-604000
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.71s)

TestStartStop/group/old-k8s-version/serial/Stop (1.9s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p old-k8s-version-374000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p old-k8s-version-374000 --alsologtostderr -v=3: (1.897281625s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (1.90s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.12s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-374000 -n old-k8s-version-374000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-374000 -n old-k8s-version-374000: exit status 7 (50.559084ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p old-k8s-version-374000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.12s)

TestStartStop/group/no-preload/serial/Stop (1.85s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p no-preload-113000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p no-preload-113000 --alsologtostderr -v=3: (1.850926625s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (1.85s)
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.12s)
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-113000 -n no-preload-113000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-113000 -n no-preload-113000: exit status 7 (54.146ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p no-preload-113000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.12s)
TestStartStop/group/embed-certs/serial/Stop (3.6s)
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p embed-certs-475000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p embed-certs-475000 --alsologtostderr -v=3: (3.603784167s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (3.60s)
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.12s)
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-475000 -n embed-certs-475000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-475000 -n embed-certs-475000: exit status 7 (55.802125ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p embed-certs-475000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.12s)
TestStartStop/group/default-k8s-diff-port/serial/Stop (3.88s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p default-k8s-diff-port-954000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p default-k8s-diff-port-954000 --alsologtostderr -v=3: (3.881728417s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (3.88s)
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.13s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-954000 -n default-k8s-diff-port-954000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-954000 -n default-k8s-diff-port-954000: exit status 7 (47.677792ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p default-k8s-diff-port-954000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.13s)
TestStartStop/group/newest-cni/serial/DeployApp (0s)
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.08s)
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p newest-cni-761000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.08s)
TestStartStop/group/newest-cni/serial/Stop (3.95s)
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p newest-cni-761000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p newest-cni-761000 --alsologtostderr -v=3: (3.945936583s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (3.95s)
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.12s)
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-761000 -n newest-cni-761000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-761000 -n newest-cni-761000: exit status 7 (57.453167ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p newest-cni-761000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.12s)
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)
Test skip (22/258)
TestDownloadOnly/v1.20.0/cached-images (0s)
=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)
TestDownloadOnly/v1.20.0/binaries (0s)
=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)
TestDownloadOnly/v1.31.0/cached-images (0s)
=== RUN   TestDownloadOnly/v1.31.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.31.0/cached-images (0.00s)
TestDownloadOnly/v1.31.0/binaries (0s)
=== RUN   TestDownloadOnly/v1.31.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.31.0/binaries (0.00s)
TestDownloadOnlyKic (0s)
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)
TestDockerEnvContainerd (0s)
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker false darwin arm64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)
TestKVMDriverInstallOrUpdate (0s)
=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:41: Skip if not linux.
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)
TestFunctional/parallel/MySQL (0s)
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1787: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)
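
The skip is decided on the host side: the mysql image has no arm64 build, so the validator bails out before touching the cluster. A sketch of a guard of the same shape (hypothetical test name; the real check lives at functional_test.go:1787):

    // mysql_guard_test.go - illustrative arch-based skip.
    package functional

    import (
        "runtime"
        "testing"
    )

    func TestMySQLGuard(t *testing.T) {
        if runtime.GOARCH == "arm64" {
            t.Skip("arm64 is not supported by mysql; see https://github.com/kubernetes/minikube/issues/10144")
        }
        // ... deploy mysql and run a query against it ...
    }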
TestFunctional/parallel/PodmanEnv (0s)
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:550: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)
TestFunctional/parallel/MountCmd/any-port (10.63s)
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-944000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port84646871/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1724092352337261000" to /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port84646871/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1724092352337261000" to /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port84646871/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1724092352337261000" to /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port84646871/001/test-1724092352337261000
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-944000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-944000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (55.644792ms)
-- stdout --
	* The control-plane node functional-944000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-944000"
-- /stdout --
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-944000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-944000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (87.146292ms)
-- stdout --
	* The control-plane node functional-944000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-944000"
-- /stdout --
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-944000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-944000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (86.973667ms)
-- stdout --
	* The control-plane node functional-944000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-944000"
-- /stdout --
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-944000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-944000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (85.2735ms)
-- stdout --
	* The control-plane node functional-944000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-944000"
-- /stdout --
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-944000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-944000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (88.392667ms)
-- stdout --
	* The control-plane node functional-944000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-944000"
-- /stdout --
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-944000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-944000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (87.420625ms)
-- stdout --
	* The control-plane node functional-944000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-944000"
-- /stdout --
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-944000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-944000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (94.023125ms)
-- stdout --
	* The control-plane node functional-944000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-944000"
-- /stdout --
functional_test_mount_test.go:123: skipping: mount did not appear, likely because macOS requires prompt to allow non-code signed binaries to listen on non-localhost port
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-darwin-arm64 -p functional-944000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:90: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-944000 ssh "sudo umount -f /mount-9p": exit status 83 (47.418791ms)
-- stdout --
	* The control-plane node functional-944000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-944000"
-- /stdout --
functional_test_mount_test.go:92: "out/minikube-darwin-arm64 -p functional-944000 ssh \"sudo umount -f /mount-9p\"": exit status 83
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-944000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port84646871/001:/mount-9p --alsologtostderr -v=1] ...
--- SKIP: TestFunctional/parallel/MountCmd/any-port (10.63s)
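
The pattern above is a polling loop: the harness starts `minikube mount` as a daemon, then repeatedly probes the guest with findmnt until the 9p mount appears or the retries run out. Here every probe fails with exit 83 because the functional-944000 host is stopped; even on a running host the mount can fail to appear because, as the skip message says, macOS prompts before letting a non-code-signed binary listen on a non-localhost port. A rough reconstruction of the loop, where the attempt count and delay are guesses rather than the suite's actual values:

    // wait_mount.go - illustrative polling loop.
    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    func main() {
        for i := 0; i < 7; i++ {
            err := exec.Command("out/minikube-darwin-arm64", "-p", "functional-944000",
                "ssh", "findmnt -T /mount-9p | grep 9p").Run()
            if err == nil {
                fmt.Println("9p mount visible in the guest")
                return
            }
            time.Sleep(time.Second)
        }
        fmt.Println("mount did not appear; on macOS check the network-listen prompt")
    }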
TestFunctional/parallel/MountCmd/specific-port (10.11s)
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-944000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdspecific-port1547791791/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-944000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-944000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (62.074083ms)
-- stdout --
	* The control-plane node functional-944000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-944000"
-- /stdout --
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-944000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-944000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (90.898125ms)
-- stdout --
	* The control-plane node functional-944000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-944000"
-- /stdout --
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-944000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-944000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (88.763709ms)
-- stdout --
	* The control-plane node functional-944000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-944000"
-- /stdout --
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-944000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-944000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (86.580709ms)
-- stdout --
	* The control-plane node functional-944000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-944000"
-- /stdout --
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-944000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-944000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (86.602541ms)
-- stdout --
	* The control-plane node functional-944000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-944000"
-- /stdout --
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-944000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-944000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (83.692208ms)
-- stdout --
	* The control-plane node functional-944000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-944000"
-- /stdout --
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-944000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-944000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (85.592959ms)
-- stdout --
	* The control-plane node functional-944000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-944000"
-- /stdout --
functional_test_mount_test.go:251: skipping: mount did not appear, likely because macOS requires prompt to allow non-code signed binaries to listen on non-localhost port
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-darwin-arm64 -p functional-944000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-944000 ssh "sudo umount -f /mount-9p": exit status 83 (47.655709ms)
-- stdout --
	* The control-plane node functional-944000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-944000"
-- /stdout --
functional_test_mount_test.go:232: "out/minikube-darwin-arm64 -p functional-944000 ssh \"sudo umount -f /mount-9p\"": exit status 83
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-944000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdspecific-port1547791791/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- SKIP: TestFunctional/parallel/MountCmd/specific-port (10.11s)
TestFunctional/parallel/MountCmd/VerifyCleanup (14.39s)
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-944000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup2754554444/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-944000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup2754554444/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-944000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup2754554444/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-944000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-944000 ssh "findmnt -T" /mount1: exit status 83 (80.959334ms)
-- stdout --
	* The control-plane node functional-944000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-944000"
-- /stdout --
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-944000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-944000 ssh "findmnt -T" /mount1: exit status 83 (84.223584ms)
-- stdout --
	* The control-plane node functional-944000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-944000"
-- /stdout --
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-944000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-944000 ssh "findmnt -T" /mount1: exit status 83 (85.899333ms)
-- stdout --
	* The control-plane node functional-944000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-944000"
-- /stdout --
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-944000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-944000 ssh "findmnt -T" /mount1: exit status 83 (87.753375ms)
-- stdout --
	* The control-plane node functional-944000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-944000"
-- /stdout --
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-944000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-944000 ssh "findmnt -T" /mount1: exit status 83 (83.211083ms)
-- stdout --
	* The control-plane node functional-944000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-944000"
-- /stdout --
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-944000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-944000 ssh "findmnt -T" /mount1: exit status 83 (83.33775ms)
-- stdout --
	* The control-plane node functional-944000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-944000"
-- /stdout --
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-944000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-944000 ssh "findmnt -T" /mount1: exit status 83 (85.752833ms)
-- stdout --
	* The control-plane node functional-944000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-944000"
-- /stdout --
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-944000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-944000 ssh "findmnt -T" /mount1: exit status 83 (87.754917ms)
-- stdout --
	* The control-plane node functional-944000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-944000"
-- /stdout --
functional_test_mount_test.go:340: skipping: mount did not appear, likely because macOS requires prompt to allow non-code signed binaries to listen on non-localhost port
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-944000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup2754554444/001:/mount1 --alsologtostderr -v=1] ...
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-944000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup2754554444/001:/mount2 --alsologtostderr -v=1] ...
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-944000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup2754554444/001:/mount3 --alsologtostderr -v=1] ...
--- SKIP: TestFunctional/parallel/MountCmd/VerifyCleanup (14.39s)
TestGvisorAddon (0s)
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)
TestKicCustomNetwork (0s)
=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)
TestKicExistingNetwork (0s)
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)
TestKicCustomSubnet (0s)
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)
TestKicStaticIP (0s)
=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)
TestScheduledStopWindows (0s)
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)
TestInsufficientStorage (0s)
=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)
TestMissingContainerUpgrade (0s)
=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)
TestNetworkPlugins/group/cilium (2.4s)
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:626: 
----------------------- debugLogs start: cilium-773000 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-773000
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-773000
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-773000
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-773000
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-773000
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-773000
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-773000
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-773000
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-773000
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-773000
>>> host: /etc/nsswitch.conf:
* Profile "cilium-773000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-773000"
>>> host: /etc/hosts:
* Profile "cilium-773000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-773000"
>>> host: /etc/resolv.conf:
* Profile "cilium-773000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-773000"
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-773000
>>> host: crictl pods:
* Profile "cilium-773000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-773000"
>>> host: crictl containers:
* Profile "cilium-773000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-773000"
>>> k8s: describe netcat deployment:
error: context "cilium-773000" does not exist
>>> k8s: describe netcat pod(s):
error: context "cilium-773000" does not exist
>>> k8s: netcat logs:
error: context "cilium-773000" does not exist
>>> k8s: describe coredns deployment:
error: context "cilium-773000" does not exist
>>> k8s: describe coredns pods:
error: context "cilium-773000" does not exist
>>> k8s: coredns logs:
error: context "cilium-773000" does not exist
>>> k8s: describe api server pod(s):
error: context "cilium-773000" does not exist
>>> k8s: api server logs:
error: context "cilium-773000" does not exist
>>> host: /etc/cni:
* Profile "cilium-773000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-773000"
>>> host: ip a s:
* Profile "cilium-773000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-773000"
>>> host: ip r s:
* Profile "cilium-773000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-773000"
>>> host: iptables-save:
* Profile "cilium-773000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-773000"
>>> host: iptables table nat:
* Profile "cilium-773000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-773000"
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-773000
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-773000
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-773000" does not exist
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-773000" does not exist
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-773000
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-773000
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-773000" does not exist
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-773000" does not exist
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-773000" does not exist
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-773000" does not exist
>>> k8s: kube-proxy logs:
error: context "cilium-773000" does not exist
>>> host: kubelet daemon status:
* Profile "cilium-773000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-773000"
>>> host: kubelet daemon config:
* Profile "cilium-773000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-773000"
>>> k8s: kubelet logs:
* Profile "cilium-773000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-773000"
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-773000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-773000"
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-773000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-773000"
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-773000
>>> host: docker daemon status:
* Profile "cilium-773000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-773000"
>>> host: docker daemon config:
* Profile "cilium-773000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-773000"
>>> host: /etc/docker/daemon.json:
* Profile "cilium-773000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-773000"
>>> host: docker system info:
* Profile "cilium-773000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-773000"
>>> host: cri-docker daemon status:
* Profile "cilium-773000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-773000"
>>> host: cri-docker daemon config:
* Profile "cilium-773000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-773000"
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-773000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-773000"
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-773000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-773000"
>>> host: cri-dockerd version:
* Profile "cilium-773000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-773000"
>>> host: containerd daemon status:
* Profile "cilium-773000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-773000"
>>> host: containerd daemon config:
* Profile "cilium-773000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-773000"
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-773000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-773000"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-773000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-773000"

>>> host: containerd config dump:
* Profile "cilium-773000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-773000"

>>> host: crio daemon status:
* Profile "cilium-773000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-773000"

>>> host: crio daemon config:
* Profile "cilium-773000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-773000"

>>> host: /etc/crio:
* Profile "cilium-773000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-773000"

>>> host: crio config:
* Profile "cilium-773000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-773000"

----------------------- debugLogs end: cilium-773000 [took: 2.293003167s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-773000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cilium-773000
--- SKIP: TestNetworkPlugins/group/cilium (2.40s)
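
Every ">>> host: ..." probe in the debugLogs block above fails with the same message because each probe is routed through the cilium-773000 profile, which was never created (the whole cilium group is skipped on this platform, so minikube bails out before reaching a guest). A minimal sketch of that collector loop, assuming the probes reduce to `minikube ssh -p <profile> <cmd>` (the exact command shape is not shown in the report):

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	// Two representative probes from the block above; the real
    	// collector iterates over many more host commands.
    	probes := []string{"docker system info", "crio config"}
    	for _, probe := range probes {
    		// With no cilium-773000 profile on disk, the command fails
    		// before touching the guest, so every probe prints the same
    		// 'Profile "cilium-773000" not found' advice.
    		out, _ := exec.Command("out/minikube-darwin-arm64",
    			"ssh", "-p", "cilium-773000", probe).CombinedOutput()
    		fmt.Printf(">>> host: %s:\n%s\n", probe, out)
    	}
    }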

TestStartStop/group/disable-driver-mounts (0.11s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-931000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p disable-driver-mounts-931000
--- SKIP: TestStartStop/group/disable-driver-mounts (0.11s)
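
The SKIP above records a deliberate driver gate, not a failure: start_stop_delete_test.go:103 bails out unless the active driver is virtualbox. A minimal sketch of that gate, with driverName() as a hypothetical stand-in for however the suite detects the driver (qemu2 on this run):

    package sketch

    import "testing"

    // driverName is a hypothetical stand-in; the real test reads the
    // active driver from the harness, not from a constant.
    func driverName() string { return "qemu2" }

    func TestDisableDriverMounts(t *testing.T) {
    	if driverName() != "virtualbox" {
    		// Recorded as SKIP in the report, so it does not count
    		// against the failure tally.
    		t.Skipf("skipping %s - only runs on virtualbox", t.Name())
    	}
    }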